It is so frustrating seeing all the laws popping up that threaten to destroy online anonymity, participation, and freedom of speech.
What is it about "infinite scroll" that makes it so dangerous? Is having people click on a "load more" button any less "addictive"? Is a "load more" button more harmful than clicking through page numbers? Should a feed simply cut off after a certain point, even if users want to see older content? I don't understand what's so evil about "infinite" scrolling.

Also, was Meta really monetizing content involving child exploitation, as in specifically profiting from that content? I find that hard to believe.
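To make concrete how thin the line between these designs is, here is a minimal sketch (the `Feed` class is hypothetical, not from any real codebase) in which the only difference between "load more" and "infinite scroll" is who triggers the fetch:

```python
class Feed:
    """Toy paginated feed that serves items in fixed-size pages."""

    def __init__(self, items, page_size=3):
        self.items = items
        self.page_size = page_size
        self.loaded = []

    def fetch_next_page(self):
        """The 'load more' action: the user clicks, one page loads."""
        start = len(self.loaded)
        self.loaded.extend(self.items[start:start + self.page_size])

    def on_scroll_near_bottom(self):
        """'Infinite scroll' is the exact same fetch, just triggered by
        a scroll event instead of a click."""
        self.fetch_next_page()


feed = Feed(list(range(10)))
feed.fetch_next_page()        # explicit "load more" click
feed.on_scroll_near_bottom()  # automatic trigger; identical effect
print(len(feed.loaded))       # 6
```

The two code paths are identical; only the input event differs, which is the point of the question above.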
Appeals?
Ordinarily I'd say this wouldn't survive on appeal, but given the state of things all the way up to SCOTUS I'm no longer certain of anything. A sad day for advocates of an open Internet.
CAD files for gun parts, unlike currency, don't carry identifying markings like the EURion constellation. Identifying what is or isn't a "gun part" from a CAD file is an entirely different challenge, and pretty much impossible to do reliably.
It's not just the code that runs the printer but also the content being printed. A document printer that by law stops you from printing "seditious" texts, for example, would constitute a kind of digitally-enforced prior restraint on speech. Even if the law were limited to stopping the printing of actual unprotected speech, there is no way to do this reliably in such a way that it doesn't affect protected speech.
Harmful and annoying, sure, but is it too much to hope in today's legal environment that it's also unconstitutional? To me it's like requiring that laser printers include filters against unprotected speech such as libel or incitement to violence. The false positives and pre-publication suppression alone would be enough to kill such a law as unconstitutional (I expect), so why should it not be the same for 3D printers?

I expect some will say it's because 3D printers are used to make functional items, but anyone who's 3D printed a Baby Yoda or a flexi dragon knows printing purely or primarily expressive works is a major part of the 3D printing world.
OpenAI's CEO has said that saying "thank you" to ChatGPT costs them "tens of millions of dollars." There is also the cost of training, although in that case the cost per capita goes down as more people adopt AI. Like so many environmental problems, though, it's not so much about individual behavior as it is about the system as a whole.
Agentic AI
"The more powerful shift is toward agentic AI—tools that don’t just generate content, but actually do things."

And this is the part that worries me most about where AI is headed. I believe the ability to just do things without a human veto, and with perhaps only minor human involvement, will prove a big problem.

People have already lost the entire contents of their hard drives to agentic AI because commands were run in a non-sandboxed environment. And the consequences could be even more serious: critical vulnerabilities will be introduced into software despite human reviewers' best efforts. People will be denied important things like jobs, loans, and medical procedures because of agentic AI. Companies and organizations will send false information to people, or forward confidential information to the wrong people, because of agentic AI. And I'm sure there are plenty of scenarios that haven't occurred to me.

With unprecedented power often comes unprecedented danger, and my experience with AI suggests it makes mistakes often enough, and subtle enough, to be really dangerous (no exaggeration). I am genuinely worried.
ID to use your computer
And if the UK government gets its way you may even be required to show ID to disable filters built into your phone or computer's operating system: https://arstechnica.com/tech-policy/2025/12/uk-to-encourage-apple-and-google-to-put-nudity-blocking-systems-on-phones/
Prove your age to use your phone
The latest news on the digital ID front is that they want device manufacturers to obtain proof of age before allowing people uncensored access to their own phones. https://forums.macrumors.com/threads/uk-wants-all-iphones-to-block-explicit-images-unless-you-prove-age.2474720/
Two of the three liberal justices
Given all that's been happening lately at SCOTUS, with questionable ruling after questionable ruling, it is quite depressing to see two of the only three liberal justices leaning the wrong way. It helps that copyright isn't a pet conservative issue, giving conservative justices more room to be objective, but Jackson and Sotomayor have shown better judgment than this. They should know better.
That one and Ashcroft v. ACLU, thanks to FSC v. Paxton, as I have just learned.
It's as if Reno v. American Civil Liberties Union had never happened.
Binding precedent
People keep saying even these shadow docket rulings establish binding precedent, but are courts required to follow the rationale laid out in Kavanaugh's sole concurrence? Presumably the case at issue still needs to be considered on its merits, but is the court required to follow Kavanaugh's lead here about what constitutes reasonable suspicion and such?
Constitutionality
This is technically unconstitutional, right? Any thoughts on how the Supreme Court would rule given its current makeup?
Where have our online rights gone?
I remember when service providers would fight these laws tooth and nail, but the ones who can best afford a full-on legal offensive these days are just letting it happen. It feels like nobody cares anymore (although many people do).
YouTube Takedowns
I shudder to think what this will mean for YouTube videos exposing charlatans, grifters, and other liars. The subjects of these videos have no qualms about abusing YouTube's reporting systems to shut down criticism, and the worst of them are litigious to a fault. This new law will facilitate that kind of censorship, and YouTube being YouTube will simply take down the videos, and possibly shut down the channels, with no real chance for appeal, because the content has to be removed and deleted within 48 hours.

A new era of censorship awaits us, and I'm not at all confident the courts will overturn this.
Enforcement
The part about the government not enforcing the law doesn't bother me as much as the part about Trump using the law to cosplay a strongman: "sell us TikTok or else!" There are laws on the books that governments should refuse to enforce, but the tactics being employed here are terrible.
Denial of service
Given the lack of safeguards such as a counter-notice procedure, one way to protest this law would be to file a flood of complaints against non-NCII content, to the point where ISPs have to either take down everything or exercise actual discretion in what they take down.
The Long Arm of the Law
I wonder how these things would play out if it were, say, the European Union instead of Russia, and the sum were only 100% of Google's net worth instead of more money than exists in the entire world. If one is to support the EU's huge extraterritorial fines over GDPR and DSA violations, then one must also support Russia's. The difference is more a matter of degree than substance.