Arianity's Techdirt Profile


Arianity's Comments

  • Jan 23, 2026 @ 02:26am

    That’s why the law itself doesn’t speak of creating incentives, but of removing disincentives (the Prodigy case, in particular).
    The law itself doesn't discuss it, but it does come up in the discussion around it. For instance, from Mike: "So there are many, many incentives for nearly all websites to moderate: namely to keep users happy, and (in many cases) to keep advertisers or other supporters happy... sites actually have a very strong incentive provided by 230 to moderate." See also e.g. EFF: "The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content," etc. Wyden himself described it as: "If we're going to really make sense for society, let's tell those platforms we'll give them the sword and they better use it, and they better use it to moderate content and get rid of the filth and the slime and all this misogyny that people regret..." He's even more explicit here: "the big internet companies have utterly failed to live up to the responsibility they were handed two decades ago... In years of hiding behind their shields, the companies have left their swords to rust. The companies have become bloated, uninterested in the larger good, and feckless." That's not foreclosing the option of not moderating content, but there's a clear intent there, and I don't think it's unfair for Paul to point at stuff like that, as long as he's clear it's implicit rather than explicit.

  • Jan 22, 2026 @ 11:55pm

    Yeah, I mean, KOSMA is trash. But it would be nice to know what an EFF-approved version might look like. Allowances for parents are a red line, but I suspect that isn't the only part they'd want changed. It's a necessary but probably not sufficient change.

  • Jan 22, 2026 @ 11:26pm

    Same. I just don't think doing the Carthago delenda est bit on every article regardless of topic is a good way to get there; I feel like that cheapens it and will make people write it off as a joke.

  • Jan 22, 2026 @ 11:17pm

    230 never promised to get companies to moderate. It promised to give them incentives to moderate by removing the threat of ruinous liability
    Right, but I don't think a lot of people are going to see the distinction between those two sentences. The Senator clearly didn't. It's not trivially obvious, especially when what's missing from that conversation is what exactly those incentives are, and more importantly, their limitations. There's a lot of mushiness on what exactly "give them incentives to moderate" actually entails, which leaves people reading into it. It does set someone up to feel tricked/surprised when they get the latter sentence, thinking they were signing on to the former.
    It’s the 1st Amendment that prevents forcing companies to moderate their platforms, no law can get around that.
    Yes and no, you have to be a bit careful. The 1A prevents forcing most moderation in general, but it doesn't sever publisher liability for the small subset of speech that isn't 1A protected (namely, defamation). That's why Prodigy lost pre-230 under just a 1A defense, and 230 was needed (separately from the cost savings for defending a suit). And even with Compuserve, it was still ruled to be a distributor; it just didn't have the requisite actual knowledge to be liable. (Prodigy got hit with publisher liability because it actively moderated; Compuserve only got distributor liability because it didn't.) In a case like Paul's, where he gave them notice, both would be a problem. The Covid stuff would be protected under 1A; the defamation would not be.
    We actually can.
    We can, it's just that the article doesn't actually bother to. It's a fine argument, but the article just kind of blows by it entirely for some reason.

  • Jan 22, 2026 @ 07:45pm

    DOJ realized it had lied to a court... All of it now documented in federal court filings—not that anyone will do anything about it.
    Sure would be nice if we could have a judge that actually did something.

  • Jan 22, 2026 @ 05:25pm

    Instead of fighting this battle in court against the person who created this video, Paul has redirected his anger toward Section 230,
    He did get the person to take it down? "the individual who posted the video finally took down the video under threat of legal penalty."
    Paul insists this distinction is hypocritical because platforms removed his COVID-era statements they deemed as false while leaving up a lie about him. This argument collapses under its own weight. The Supreme Court has repeatedly held that private companies can make editorial decisions.
    Something being hypocritical is different from something being illegal; those are two different arguments. (And of course, one of them being defamation muddies the waters, since it's one of the few types of wrongness that isn't covered under 1A editorial protections.) However, this is kind of missing part of the point. A big part of the justification for 230 is that it gives room and incentives for companies to moderate, even as it doesn't technically require it (as indeed, this article does, with its 'over/under' moderation claims). If a company is choosing not to moderate, it's not unreasonable to point out that part isn't being delivered on as originally claimed.
    But newspapers choose what they print before publication. Platforms host speech created entirely by others, at unimaginable scale.
    Eh, for newspapers specifically yes, but distributors can also face liability for carrying other people's speech. And at pretty large scale to boot, albeit still much smaller than the internet.
    The real, speech-protective answer is defamation law. If Paul believes that a video contains lies about him, he could sue the creator for defamation and prove actual malice under the Sullivan standard.
    Paul's op-ed directly mentions why he finds that solution lacking: "Yet, the defamatory video still has a life of its own circulated widely on the internet and the damage done is difficult to reverse." (And that's assuming you can find the person to sue in the first place, can afford it, that they're subject to U.S. jurisdiction, etc.) Even if his is a bad solution, I do think you need to actually grapple with why he doesn't think that's an answer, not just ignore it. It's fine to make the argument that the benefit is worth the cost (or that the cost isn't so large), but you do have to actually make it.
    But we cannot and should not dismantle the legal foundation of online speech because it failed to protect one powerful man.
    What happened to that one powerful man could just as easily happen to someone marginalized (indeed, they would find it much harder to actually pay for a defamation suit). It is a bit hypocritical of Paul to only flip once it actually happened to himself, but it is not a situation that is unique to the powerful.

  • Jan 21, 2026 @ 08:32pm

    one might think that’s a big change, and that today’s rules let kids wander freely into social media sites.
    Because they do, in practice? You said so yourself: "Of course, everyone knows many kids under 13 are on these sites anyways."
    This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet
    It's both.
    Parents increasingly filter, supervise, and, usually, decide together with their kids
    The entire reason these discussions exist is because so many don't actually do that. While parents are aware of their kids being on social media, that doesn't imply they're making informed decisions about the risk, or being responsible parents.
    It will also lead to more power concentrated in the hands of the companies Congress claims to distrust.
    That's the cool thing about liability-based incentives: they don't require trust.
    If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone.
    EFF's universal solution to every problem: universal privacy protections! This does nothing to even attempt to fix the actual problem. It's especially funny given the COPPA mention, which manages to leave out that COPPA already has privacy protections for kids under 13 (and the lack of enforcement of them). Wonder why?

  • Jan 21, 2026 @ 08:09pm

    Missed this on the original read-through, but it bears emphasis:

    But the moment the system produces an outcome he doesn’t like—even though it worked exactly as designed and the video came down anyway—
    It's worth mentioning that part of how this system is supposedly designed is that "sites actually have a very strong incentive provided by 230 to moderate." So, not quite exactly.

  • Jan 21, 2026 @ 07:21pm

    essentially as a private citizen.
    It really depends on how he did this. If he's threatening legislation in response (implied or otherwise), it's not really as a private citizen. Legislative threats can be coercive. They're not the same thing, but they are both types of leveraging government coercive power on speech. Mike seems to be assuming that "formally notified" necessarily means via his office, but it isn't clear? It could've just been a generic legal notice as a private citizen. If it was a private notice, and this change in stance on 230 is coming out after the fact (so it's not meant to be coercive during the removal process), it's fair game in terms of 1A. FWIW, I also read "formally notified" as his office sending a notification, but I can't find anything that confirms it.
    Rand’s next step is to subpoena and sue.
    He can't sue YT for it under 230. They're not liable. (That said, the article mentions it was apparently already taken down by the originator.)
    You are straight up lying here. SCOTUS found the states didn’t have standing, essentially refusing to rule on the subject.
    The reason they found a lack of standing is in part a lack of 1A violations (with the possible exception of Hines, whose lack of standing was for future violations). e.g.: "The plaintiffs who have not pointed to any past restrictions likely traceable to the Government defendants (i.e., everyone other than Hines) are ill suited to the task of establishing their standing to seek forward-looking relief. But even Hines, with her superior showing on past harm, has not shown enough to demonstrate likely future harm at the hands of these defendants... The primary weakness in the record of past restrictions is the lack of specific causation findings with respect to any discrete instance of content moderation... But they fail, by and large, to link their past social-media restrictions to the defendants' communications with the platforms." There are some more detailed quotes, e.g.: "There is therefore no evidence to support the States' allegation that Facebook restricted the state representative pursuant to the CDC-influenced policy... This evidence does not support the conclusion that Hoft's past injuries are likely traceable to the FBI or CISA... Of all the plaintiffs, Hines makes the best showing... That said, most of the lines she draws are tenuous, particularly given her burden of proof at the preliminary injunction stage."

  • Jan 21, 2026 @ 05:55pm

    The fact that there are no real comprehensive studies that show the opposite… well…
    There are in fact real studies that find the opposite (including ones that use the cross-lag approach used in e.g. the Cheng paper you're citing). You typically discount them as correlational, and therefore not counting, but they do exist. (And to be clear, there are others that don't find evidence of it. It is by no means one-way.) There are also a few studies like Braghieri 2022, which is also comprehensive, well cited, and causal. And for some reason, it never gets acknowledged either.

  • Jan 21, 2026 @ 03:32pm

    He didn’t “change his mind” on Section 230. He just revealed that he never had a principled position in the first place.
    Rand Paul, a hypocrite? A Republican who flip-flops the moment they're personally affected? I'm shocked.
    He says this as if it’s controversial. It’s not. It’s exactly how editorial discretion works. The company gets to make their own editorial decisions
    I mean, it is controversial. Legality (and liability, 230, etc.) aside, large companies having editorial discretion is itself pretty controversial.
    So what, exactly, is Paul complaining about?!?
    Probably this part: "Yet, the defamatory video still has a life of its own circulated widely on the internet and the damage done is difficult to reverse."

  • Jan 17, 2026 @ 07:57pm

    You seem to have worked up the courage to get over that one
    Ah yes, because that precludes ever trying to make the world slightly better.

  • Jan 17, 2026 @ 04:55am

    (Personally for the latter I would just have a big fat “PLACEHOLDER” folder that all AI-generated materials should go into that should be completely deleted by the time the game goes gold.)
    While that is a start, from what I've heard many companies already do this, and stuff still slips through. All it takes is one contractor, or someone working at home late on a deadline, etc. It's part of why people use standardized stuff like Lorem Ipsum, which you can at least CTRL+F. And even that gets missed sometimes.
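    To make the CTRL+F point concrete, here's a minimal sketch of that kind of pre-gold sweep, assuming hypothetical marker strings and file extensions (a real pipeline would use whatever markers the studio standardized on):

    # Pre-gold placeholder sweep -- a sketch, not any studio's actual tooling.
    import sys
    from pathlib import Path

    # Hypothetical markers; the point is that they're standardized and searchable.
    MARKERS = ("PLACEHOLDER", "lorem ipsum")
    # Hypothetical set of text-bearing asset extensions to scan.
    EXTENSIONS = {".txt", ".json", ".xml", ".csv"}

    def sweep(root: Path) -> int:
        """Print every file under root that contains a marker; return the hit count."""
        hits = 0
        for path in root.rglob("*"):
            if not (path.is_file() and path.suffix.lower() in EXTENSIONS):
                continue
            text = path.read_text(errors="ignore").lower()
            for marker in MARKERS:
                if marker.lower() in text:
                    print(f"{path}: contains '{marker}'")
                    hits += 1
        return hits

    if __name__ == "__main__":
        root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
        sys.exit(1 if sweep(root) else 0)  # nonzero exit can fail a build gate

    (And per the above, a sweep like this only catches markers in searchable text formats; anything baked into a texture, audio file, or binary blob slips right past it, which is exactly how stuff still gets missed.)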

  • Jan 17, 2026 @ 04:45am

    They’re a sort of religious beliefs
    So are the in-between answers. Ultimately they all go back to values of how you balance different priorities. If you value the human expression part above all else, you're going to get a very different answer than if you see it as more of a tool. No view is inherently right; they're fundamentally subjective, but they do involve things like ethics/morality, aesthetics, etc. Same calculus, different weights.
    But he’s also clearly answered “yes” to rhetorical question #2 I posted above.
    Eh, not really? There's a bunch of different ways to get to his position without answering "yes". Even reading through the whole article, he doesn't seem to say anything at all about the industry as a whole, just how he deals with it.
    But rather than trying to figure out how to QC the developers to make sure the end product is clean of AI, since that seems to be what Bender is after, we get a blanket ban on all AI use everywhere, all the time, by the developers.
    Reading the article does not seem to suggest it's just the end product that is the concern here, e.g. the "moral issues surrounding tools trained on plagiarizing other people's work." That said: the experience for a lot of people (AI-enjoyers and haters alike) is that there is fundamentally no way to guarantee QC. The nature of it is that you risk something slipping through, because there are no foolproof AI detection methods post facto. The difference is how willing they are to tolerate inevitable slipups, not whether you tried to find a QC method. Even with a ban (which ends up being a form of QC, mind you), it's not 100%.
    AI will be used in gaming.
    And paintings will be mass-produced by robots. That doesn't mean a painter can't still choose to hone his craft because he appreciates it as an art form for expression. To this day, there are artists who do not use any electronic assistance in their art. Part of the "how it will be used in the industry" conversation is how/when/why some opt out.

  • Jan 15, 2026 @ 08:06pm

    Seeing the "textualism" guy appeal to common law is.. a vibe. Although I'm not sure how much it really matters? It seems like you can drive common law or "reasonable expectations" either way (especially with this Court), what really matters is the attitude when applying them. The Court has shown it has no problem cherrypicking history when it's inconvenient.

    it seems like it could be read to suggest that it may be time for litigants to take another swing at challenging the government’s warrantless electronic surveillance,
    Is there any appetite among the rest of the court? Doesn't really seem like something they've been chomping at the bit to fix, but I can't say I watch the court that closely.

  • Jan 15, 2026 @ 02:51pm

    This is inevitably going to sound snarky despite not intending to be, but weren't you against the UK being able to regulate US companies? Because this is exactly the sort of sovereignty issue that comes up.

    You don’t get to spend years claiming that national security justifies any restriction on platforms and then suddenly discover that “free speech” means other countries can’t enforce their laws.
    Eh, yes you can. Using national security doesn't mean you can't be worried about speech in other contexts. Not every justification is on equal/similar moral footing (even taking for granted that moral authority is the correct lens when it comes to international relations, which isn't always the case). The point is moot because they don't actually care about free speech, but that argument is flawed.
    The UK is investigating potential violations of laws against generating sexualized imagery of minors and non-consenting adults. If the State Department thinks that’s “censorship,” they should explain why the Senate just voted unanimously to let victims sue over exactly that conduct.
    TBF, they're both censorship. Even under the definition TD usually prefers.
    There are no principles here, only sheer abuse of power.
    There is the fundamental conservative principle- there's an in-group and an out-group. They're mad because the out-group is trying to regulate the in-group. Nothing else matters, to them. Not consistency, morality, hypocrisy, or anything else. (As an aside, to anyone still wondering how the Trump/Elon "break up" was going, note who is getting in-group protection here).

  • Jan 13, 2026 @ 07:47pm

    That’s not a story about two bad actors. That’s a story about which democratic system still has functioning antibodies against authoritarian overreach—and which one doesn’t.
    A good reminder that a better world is possible.

  • Jan 12, 2026 @ 04:42pm

    Not a whole lot. They can bring it back to court. Beyond that, it's mostly promising (and following up on) consequences when they're back in the majority. On the extreme end, they could bring enough men with guns to enforce the law, but there doesn't seem to be much appetite for that. And getting enough trustworthy men is tricky. It's a somewhat open secret that most cities don't trust their police departments to follow those sorts of orders.

  • Jan 09, 2026 @ 09:37pm

    It’s still a shit comparison
    It's a shit comparison on some aspects, and a fine comparison on others. For some reason EFF mentions both, and including the latter actively undermines the former. Just do the former; it's better.
    and not equivalent at all,
    Yes, which is why I specifically said it wasn't equivalent.
    Which is too bad, as you can be pretty good 1/10 when not doing the most ridiculous devil’s advocate impression.
    I'm not playing devil's advocate, I'm saying stick to the parts where it's a shit comparison instead of shooting yourself in the foot. When you mention aspects they both share that don't fit EFF's criteria, you're actively going to make people conflate them more and weaken the other parts. That's not helpful. Why do you think this is a good way to do it?

  • Jan 09, 2026 @ 09:20pm

    What kind of scenarios do you think exists were in-person id-check outs someone as belonging to a marginalized group? Buying liquor and porn-mags?
    Yes. (Although it's not just an id-check; it can also be visual inspection, as mentioned by the article. Both come with risks of being outed.)
    With in-person id-checks someone has the choice where they will do business knowing that the transaction is ephemeral, no information is stored anywhere whereas doing the same on the internet means they have no choice than to accept that the information will be stored and processed and perhaps later sold to a 3rd party or even vacuumed up by the government.
    That doesn't make things like compromising rights/anonymity not overlapping. That's just reiterating what I said earlier: some aspects (whether it's ephemeral) are indeed different. That doesn't prevent other aspects from overlapping. It's still a compromising of rights and a removal of anonymity, even if it's also ephemeral. You might be ok with compromising rights/removing anonymity if it's ephemeral, because that changes the overall risk profile, but they're still happening. And that matters if the core argument is that they should never be compromised, as EFF is arguing, rather than that one risk is acceptable and one isn't.
