Arianity's Techdirt Profile

Arianity

About Arianity

Arianity's Comments

  • Mar 23, 2026 @ 05:35pm

    Even more disappointing, House Judiciary Chair Jim Jordan, who has previously been a champion of both the warrant requirement and closing the data broker loophole,
    I have a bridge to sell anyone who actually believed this. It was always pretty clear this was just pretextual for him.

  • Mar 23, 2026 @ 03:37pm

    Streaming’s a bit different because users still have options and agency.
    Maybe someday they'll actually use it.

  • Mar 23, 2026 @ 03:21pm

    but it’s still better than any of the options she has seen. To understand why Daphne’s right, let’s think about what Afroman’s case might have looked like without Section 230.
    You can't explain why something is the best option by cherry-picking exactly one alternative. At best that just tells you it's better than that one option, and at worst it's an intentional strawman.
    The upshot is: Section 230 created the conditions that allowed us to hear Afroman’s songs, and allowed platforms to recommend them, even while their status was in legal limbo.
    It would also allow us to hear them even if he had definitively lost, as you mentioned. Left unsaid is why this needs to be the case even after the case was ruled on.
    And to be clear, we have evidence that this is how they would react: That’s the incentive structure currently in place under the Digital Millennium Copyright Act (DMCA).
    The DMCA obligates them to take it down in order to maintain safe harbor. That's not the same incentive structure.
    But it’s the best and most effective protection for free expression online we have, allowing online services to simply let their users speak
    I mean, the best protection for speech would be to simply get rid of defamation law entirely, if freespeechmaxxing is what you want.

  • Mar 23, 2026 @ 02:26pm

    Regulating social media ad targeting is a different problem entirely, since they don’t “sell” data the way data brokers do (they sell access to users based on profiles). Regulating AI training is something else again. And conflating all three is how you end up with rules that address none of them.
    Eh, if your goal is comprehensive privacy laws and you just end up banning it across the board, you don't really need to sort out the differences as much. And I don't think TD usually maintains this distinction when talking about privacy laws, either. The "comprehensive privacy laws" that come up when talking about e.g. Facebook usually don't take the time to distinguish data brokers from the platforms themselves. I think this is fine, in a sort of broad overview?
    I’m presenting mine as proof that the entire exercise is a deeply silly waste of time.
    Whether it's a waste of time depends on whether Sanders realized what he was doing, I think. It is a bit awkward, but part of getting regulations about this passed is marketing, and this was a great marketing stunt. Where it becomes a problem is if you use the marketing as a foundation for policy evidence. It would be better if everything was properly nuanced, but I'm not sure that's the world we live in. And I have to admit that this ad broke through. You're assuming it was unintentional (and it probably was), but I do wonder if it was.

  • Mar 22, 2026 @ 02:33am

    Fun fact. Every single supporter of these laws will argue that 40,000 kids in the US deserve to die from guns.
    That's the thing, though: not every single one will. For every Marsha Blackburn, there's an Amy Klobuchar who will vote for this sort of bill, and then turn around and vote for gun control/healthcare etc. as well (just picking Klobuchar somewhat randomly as an example here). The bill in the OP is in blue Minnesota. There are multiple legislators there who have voted for those things. There are plenty of blue states passing these sorts of laws, and they aren't happening on party-line votes. If you look at the Venn diagram between people who support these bills and the GOP, there's significant overlap, but it's not a circle. The religious nuts and such are just one faction. That doesn't mean the Blackburns don't matter or aren't a concern, but it does mean not every Klobuchar is a Blackburn. And that's just in the U.S. itself. Places like Australia have the gun control, and the healthcare, and are still passing these types of laws, too.

  • Mar 21, 2026 @ 12:28am

    I’d argue that there’s a third result that made things worse in that the kids impacted were just shown that those in charge of the government not only don’t know what the hell they’re doing but cannot be trusted to act in good faith at least when it comes to them.
    That's arguably a plus, in this day and age...

  • Mar 20, 2026 @ 04:15pm

    We will continue to fight against all online age restrictions,
    Kind of gives the game away here, doesn't it?
    It’s about whether “protecting children” becomes a legal pretext for embedding government control over the internet to enforce specific moral and religious judgments—judgments that deny marginalized people access to speech, community, history, and truth—into law.
    You just admitted it was much broader than that!
    Because that is not controversial: everyone wants kids to be safe.
    It apparently is. Keeping kids safe is by its very nature going to restrict kids from things, to some degree. And that's going to require some type of moral judgement about what is, and is not, safe.
    After all, if the race to age-gate the internet was purely about child safety, we would expect its strongest supporters to be child-development experts or privacy advocates.
    You wouldn't expect that from privacy advocates. Privacy advocates would be more worried about privacy; child safety and privacy are in direct tension with each other, even in the best of times. And that's exactly what you see from EFF. You do, however, see plenty of child-development experts speaking out in favor of it. While anti-LGBTQ pretexts are a thing, that's not the only constituency for these laws, and it's disingenuous to pretend it is. There's a reason anti-LGBTQ+ groups use this as a wedge issue: the thing that makes it effective is precisely that it's not entirely pretextual.

  • Mar 20, 2026 @ 02:20pm

    The ban creates a fiction — kids are off social media — that every politician and regulator has an incentive to maintain, even though the data says the fiction is exactly that.
    That's literally not what's happening, though, and your quote shows it. If they're "doubling down on blaming companies", that's not maintaining the fiction. That's acknowledging flaws that blow up the fiction.
    Why have conversations with kids about healthy usage of something they’re not supposed to be using?
    The same reasons sex ed and drug education exist. It's standard to have conversations with kids about things they're not supposed to be doing. Even if we assume these policies work perfectly (which they don't), you still have to teach kids to eventually be functioning adults. They still get worked on. (Plus in this case, many platforms aren't banned.) That said:
    When you pass a ban and declare the problem solved, you eliminate the political pressure to do the things that would actually help
    You have this dangerously backwards and are underestimating your opponents, I think. The goal of this is to normalize age verification. Paradoxically, weak enforcement is actually somewhat of a plus, because it means weaker initial blowback. And it's working: age verification went from nowhere to everywhere. Once it's normalized, the next steps are more coercive: fines on the platforms, and then on the users, just like alcohol laws. You're getting frog-boiled.

  • Mar 20, 2026 @ 05:27am

    On the other hand, it takes a break in the middle to argue that “information discovery” is somehow distinct from “insider trading”. It isn’t, and never could have been. The entire basis of prediction markets as “information discovery” is that insiders will use them to place bets, and in doing so spread that information to non-insiders who are observing the betting.
    That's not actually what the article says: "Because people with insider information will bet, they believe that the markets will provide the public better information. But it also creates ridiculously perverse incentives for extraordinarily bad behavior." It's saying it creates bad incentives, not that it's not information discovery. That said, they are distinct, and the reason why is in the name: "insiders". Insider trading is a form of information discovery; information discovery is not a form of insider trading. A square is a rectangle, a rectangle is not a square. We ban insider trading in financial markets for a reason. Generally speaking, it's considered illegal for an employee at a company to buy/sell stock because e.g. they have information about an upcoming merger. It's not illegal for an outside analyst to figure out a merger is likely and trade on that. An outside analyst is not considered an "insider" in this context, despite having the same information. We set explicit limits because, while information discovery is important, knowing you can get scooped by e.g. an employee ruins the whole thing. There's no point trading if you know it's rigged. Or worse, if the CEO can base their decision to accept a merger on their own bets, that's not actually conveying information. (There are some caveats, particularly in the U.S.: insider trading law tends to be formulated around not trading on information you don't own, e.g. the employee has a duty to their employer not to trade on inside info. It's not legally based on fairness, even though that's how everyone treats it. But that's more or less the end result. Also, companies in some limited cases can 'inside trade' on e.g. commodities markets when hedging, etc.) We could allow CEOs to insider trade, instead of requiring things like 10b5-1 plans. We don't, for very obvious reasons.

  • Mar 19, 2026 @ 03:45am

    RAM is effectively a commodity with multiple providers. You can bet that they are ramping up production. We’ve seen temporary supply shocks in the past on components and they tend not to last.
    This is true, but it's not quite as rosy as it sounds. While it is a commodity, there are only a few big suppliers: Micron, SK Hynix and Samsung. The big 3 have 90%+ marketshare. The other brands you see buy from those suppliers; e.g. Kingston/Corsair buy their DRAM chips from Micron/SK Hynix, but make their own modules. iirc both Samsung and SK Hynix are sold out of production through 2026. Micron's sold out of HBM, but not sure about regular DRAM. The big 3 have also been busted in the past for price fixing. But even if that's not a concern, spinning up new fabs takes years, and even assuming they want to, there's nontrivial risk in building new fabs if demand falls. They can ramp up production to a degree by retrofitting existing fabs/lines, but that only goes so far. It won't be forever, but it's gonna be rocky for longer than it might seem at first blush for a commodity. There are some potential wildcards with stuff like Chinese manufacturers, though.

  • Mar 18, 2026 @ 05:50pm

    Democrats historically suck on media policy and reform (even the progressive wing of the party is fairly incompetent on the subject), so you can’t expect much help there.
    Even if they were inclined to help, there'd be much handwringing over the First Amendment.
    And the public still has agency. Larry Ellison can buy TikTok and Elon Musk can buy Twitter, but they can’t control the flow of the public as they flee to other, less white supremacist, right wing friendly alternatives. It’s sheer hubris to think they can maintain information control in a country this massive and diverse
    It may not be full control, but the likes of Fox and Twitter have done plenty of damage in the meantime.

  • Mar 17, 2026 @ 07:15pm

    Although this legal authority has lapsed, it has always been our fear that it will not sit dormant forever and could be reauthorized at any time.
    Does this matter? If you have the votes to reauthorize, presumably you have the votes to just pass a new bill? I guess there are some stumbling blocks with the filibuster.

  • Mar 12, 2026 @ 06:05pm

    but “Koch-funded organization opposes government regulation” isn’t some kind of shocking revelation.
    The revelation is that it's Koch-funded in the first place, since the only label it has is "nonpartisan". It's not surprising if you already knew it was Koch-funded, but it's conveniently not mentioned.

  • Mar 12, 2026 @ 03:23pm

    Sexually explicit outputs to minors are likely unprotected speech, but the bills go much further by blocking all youth access to chatbots.
    The inability to control LLMs seems like a significant confounding factor (to say nothing of the intentionally "spicy" LLMs aimed at kids from companies like Meta).
    Lawmakers could, for example, require AI companies to provide parental controls or strict safeguards preventing their models from engaging in sexually explicit conversations with young users. In fact, AI companies already have policies and features to protect minor users.
    The problem is that all of these have failed. The ability to bypass safeguards seems baked into how they work, at least right now. To say nothing of e.g. privacy issues if they're monitored.
    Because these bills are content-based, the court would apply strict scrutiny.
    This doesn't seem consistent with Paxton (which, mind you, should've been strict scrutiny... but wasn't). AI is going to be an invaluable and necessary tool for kids. But the way AI companies are handling it now seems pretty irresponsible, and it seems difficult to put the genie back in the bottle even if they were. Literally the reason we're where we are is because companies like OpenAI wanted to release stuff faster and break things while slower companies like Google were working out the safety implications ahead of release.

  • Mar 12, 2026 @ 03:12pm

    I don't trust Josh Hawley, but I also don't trust a Koch-backed "nonprofit" talking its book to tell me what's in the bill either. Kind of a "two snakes don't make a right" situation. Especially when it comes to Hawley, it's important to get it right. He has a history of proposing "good" (or at least populist) bills he knows will die, just to get positive press.

  • Mar 11, 2026 @ 06:15pm

    Skimming the case, this seems to be more than embedding links in good faith. Newsbreak seems to be using it to show actual content in bad faith; exactly the sort of situation secondary liability is meant for?

    But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing
    This feels like it depends heavily on how exactly it's embedded. One can certainly embed content that is functionally displaying it.

  • Mar 11, 2026 @ 03:11pm

    direct exfiltration of data in a manner known to break the law, but zero concern over that fact, because of the assurances of a Trump pardon if caught.
    It needs to be a priority of the next administration to make sure every instance of this is very publicly punished, pardons or not.

  • Mar 10, 2026 @ 07:22pm

    Yet these proposed interventions rest on the assumption that technology is the primary culprit, even though research increasingly shows that, in the right contexts, technology can actually help those in crisis.
    It can be both, for different people in different contexts. Perhaps more fundamentally, though: we do in fact regulate things even if they are the secondary culprit. Even good regulations are often ultimately about addressing symptoms that go back to an underlying human/societal problem.
    We don’t and can’t know for sure why Setzer or anyone else died by suicide
    You sure seem confident that you do: "In short, technology doesn’t cause suicide." And this isn't supported anywhere in any of the evidence cited. This article is pretty disappointing. It's presented as a sober, nuanced, evidence-based argument, but ends up being just blind apologia. While it acknowledges the often-neglected human aspect (great!), it's just as myopic in downplaying how technology can interact with or amplify those problems, or even in discussing whether any parts are reasonably addressable or not. It doesn't even acknowledge the reason companies like Character AI and OpenAI are being targeted, which is in part due to specific design decisions they've made that made this result more likely. Decisions other, more responsible companies didn't make. The most we ever get is a throwaway line: "True, technology companies can—and should—consider how to help mitigate real-world harms."
    The better question, then, isn’t whether technology causes harm, but whether it deepens an already broken baseline—or simply reflects it.
    It would be nice if this article actually spent some time trying to answer that question, instead of asserting innocence. But even this framing misses something important: even reflections can sometimes be worthy of regulation. Funnily enough, while this article paints pushing for regulation as a moral panic, many of the same people who want to regulate technology are also simultaneously the ones asking those uncomfortable questions about how underlying society contributes to suicide. It's not an either/or.
    But we may uncover that teen suicide isn’t random at all. It may stem from something we’ve unwittingly ignored
    You don't say.

  • Mar 10, 2026 @ 07:00pm

    Kind of surreal to see this on Techdirt, of all places. This nails exactly why the pro-AI articles get so much pushback, and it'd go a long way if other writers incorporated it into their writing beyond just a throwaway sentence. Really appreciate your writing on this, Karl.

  • Mar 10, 2026 @ 03:24pm

    If Alice stabs Bob to death with a knife, should we hold the knife manufacturer accountable for not implementing the necessary safeguards to protect against murder? Fuck off.
    You joke, but we do in fact regulate things like this. Not knives specifically, but other products. Generally, it comes down to two questions: a) Could the knife company have reasonably done anything better? b) Do the benefits of knives outweigh the harms enough that we're willing to bear the cost when they're misused?

More comments from Arianity >>