Plaintiffs argue that features like infinite scroll or “like” buttons create harm independent of users’ personal content. It is a creative argument. It is also a slippery slope with no clear limiting principle.
I mean, the limiting principle is whether it actually causes harm or not.
Newspapers, magazines and even packaged goods design headlines with catchy taglines to capture attention. Platforms do the same with feeds, to deliver value to their users.
Well, that's ultimately the question: is it actually the same? Not all methods of capturing attention are equally harmful, or equally harmless. There's a reason no one is suing any of those industries, despite it being free money for trial lawyers.
A national privacy law that protects personal data, including children’s information, would provide real safeguards while giving companies a consistent set of rules
What safeguards would this provide for this issue? Privacy is good on its own merits, but this literally does nothing to address the concern.
Similarly for the other two suggestions (tools/parents): those already exist, and they have not been terribly effective to date.
None of this means concerns about children’s online experiences should be dismissed.
If you're going to suggest things like privacy laws as a solution to a completely different (potential) problem, you are effectively dismissing it. Why should this be treated as anything other than a dismissal? You didn't even attempt to give an actual argument for how they're more effective.
The revised bill also sharply increases penalties. Instead of $100,000 per violation, companies—including small developers—can face fines of up to $250,000 per violation, enforced by both federal and state officials.
That kind of liability creates incentives to over-restrict access,
I don't really see the issue with the amount, just with how the offense is defined. If it's targeted properly, it should be punitive.
But the right answer to that is targeted enforcement against bad actors
It would be nice if this spelled out what that's actually supposed to mean. It's not clear how targeted enforcement is possible while remaining compatible with EFF's core speech, privacy, and security concerns; it seems like any type of enforcement would run up against them.
So, if you need a good headline to claim that you’re “protecting children” and doing so in a way where the law will have little direct impact on your business
That's not really regulatory capture, nor much of a moat. But yeah, it seems pretty transparently like political horse trading. Then again, isn't it always with companies? Easy trade, if you're an amoral company that only cares about profits.
That is a real problem, but it requires a troll to actually submit a takedown notice. It doesn't seem like anyone did, in this case.
It's a problem but not relevant to this particular situation.
The decision does not do what you say it does. It just says the lower courts need to review the law in question under a different test than they did the first time.
Not yet. But I don't see how the follow-up doesn't end up there eventually, given your preferred 1A interpretation. The same logic that gave us stuff like 303 Creative leads there, and you're not exactly a fan of strict scrutiny.
The whole point of defending the civil rights of people you don’t think deserve them is to ensure that those rights apply to you if, say, you piss off the government.
I recognize that this SCOTUS doesn’t do that, but I hold those principles regardless of SCOTUS because it’s more ethical to believe in those principles than it is to abandon them because “the worst people” are,
It's not really the principles that are the issue, I think, but specifically the claim that defending them makes protecting that right for everyone else that much easier. That part doesn't hold regardless of what SCOTUS does; whether it's actually easier or not is pretty tied to SCOTUS.
The whole theory for why it's supposed to be easier is that "the worst people" are supposed to be restrained from selectively abandoning those principles.
How can websites possibly comply with the law without determining whether a given user is physically located in the state?
I mean, they can't. But I think they were just as screwed with the old version? It was also broken; I don't think they were off the hook if someone used a VPN with the old bill.
I'd call it a "clarification" rather than a new requirement, but you can't really clarify something that's impossible to comply with.
the amended version says service providers have to determine whether the person is physically located in Utah
An individual is considered to be accessing the website from this state if the individual is actually located in the state, regardless of whether the individual is using a virtual private network, proxy server, or other means to disguise or misrepresent the individual’s geographic location to make it appear that the individual is accessing a website from a location outside this state.
The quoted section doesn't actually say anything about having to determine whether they're physically in Utah? Outside of the issues with age verification in general, this new wording doesn't seem notably worse than the old one.
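To make that concrete, here's a rough sketch (mine, not anything from the article or the bill) of what "determining location" actually looks like from the server side. It's Python using MaxMind's geoip2 library with a GeoLite2 database; the database path, the helper name, and the Oregon example are all placeholders. The only signal a site ever gets is the connecting IP address, and for a VPN user that's the exit node's address, not the person's.

import geoip2.database

def appears_to_be_in_utah(client_ip: str) -> bool:
    # Best-effort guess from the connecting IP; it says nothing about where
    # the human is physically sitting.
    with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:  # placeholder DB path
        response = reader.city(client_ip)
        region = response.subdivisions.most_specific.iso_code  # e.g. "UT"
        return response.country.iso_code == "US" and region == "UT"

# A Utah resident routed through a VPN exit node in Oregon hands the site an
# Oregon IP, so this returns False. The "actually located in the state"
# standard asks for information the server never receives.

Which is the point: both the old and the new wording demand something the technology can't deliver.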
But they shouldn’t be liable for results of the product based on how people misuse the tool.
If that's what you were trying to argue in the main article, that is not at all the point I took from it.
That said, I think this depends entirely on what measures they take to ensure people don't misuse the tool. Products have all kinds of safety measures and warnings to try to minimize misuse. Now, if it gets misused anyway, that's not really their fault. But we do hold them liable if they don't do those things to minimize it. And with medical decisions in particular, we often go a step further and require it to be tested in advance, or dispensed by an expert.
It simply being misused isn't by itself sufficient to avoid liability.
Liability discourages activity, sure, but in this context, when you have non-determinative outcomes in a field where even experts do not know what leads to successful and unsuccessful outcomes, adding liability based on those outcomes doesn’t actually discourage “negligence” or the bad outcome.
I don't think liability should be based on outcomes; it should be based on actions. But that doesn't sound like what you're advocating for, unless I'm misunderstanding.
This isn't any different from how I'd expect a mental health professional to be treated. If a suicidal person goes to a therapist and later kills themselves, the therapist isn't at fault just because of that outcome. And they're not going to get nitpicked because they went with approved treatment strategy A instead of B. However, if the therapist was freestyling untested treatments without IRB approval, that's another story. They should be liable.
The standard for liability should not be "did someone die"; it's "did you fuck up, and did you fuck up in a way that was predictable or reckless". You can't easily tell whether something causally pushed someone over the edge. What you can tell is that releasing a half-baked model like GPT-4o to the public was a bad idea. Or that OpenAI ignored its own internal flags being tripped in the Adam Raine case. The thing that should trigger liability is ignoring the flags, not the fact that Raine actually acted.
I think you have a very keen eye for how incentives can limit experimentation by good actors. But I think you underweight how those incentives work for bad or reckless actors.
It discourages even experimenting to figure out how to make better outcomes.
I mean, that depends on the form of liability, right? There's room for medical professionals to test new types of treatments. But it happens within very careful guardrails on what's allowed, even if that discourages some experimentation. Things like IRB panels exist for a reason.
I'm not against allowing some experimentation to figure out better outcomes. But I'm wary of liability shields, especially one being compared to 230. Free-market experimentation is not the type of experimentation we use for medical interventions.
There's an old aspect, and two new aspects. The disruption part is not new, just unsolved. In all those past examples of obsolete jobs, people's lives got absolutely ruined in the transition. We as a society just didn't care. That doesn't mean we shouldn't mass-produce shirts, but we should have some plan for tailors, one that allows them to live decent lives as we transition and isn't just "well, sucks for you, good luck". It was bad then, and it's bad now.
There are two new aspects. One is the type of labor it applies to. See this post from economist Brad DeLong: labor is actually six dimensions: (1) backs, (2) fingers, (3) brains as routine cybernetic controls for mechanisms, (4) brains as routine cybernetic mechanisms for accounting operations, (5) smiles, and (6) creative ideas. This is genuinely new in terms of what types of jobs it applies to. The second is how broadly applicable it is, and how quickly it's evolving.
If we’re having people do “bullshit jobs” just to prop up our economic system, that’s a problem with the system;
Yes. This would be much less of a problem if we had a system like UBI. We don't. The question is how to get there.
and their central argument may seem counterintuitive to many: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards.
Because it is: when we want to encourage certain behaviors and discourage others, we generally apply liability. If we want to encourage doctors to take patient care seriously, do we make them immune? No. If we want there to be more access to mental health care professionals, do we just drop requirements? Also no.
It's particularly galling because we see these companies doing things that ruin any trust, despite potential liability. To say nothing of hallucinations. You mention a few "tragic" stories, but the part you're leaving out is how badly companies like OpenAI screwed this up. Maybe not in a way that is provably causal, but negligent? Absolutely.
is perfectly compatible with a liability shield for thoughtful, helpful mental health support. The point is to stop punishing the specific behavior we want more of:
If you want something modeled on 230, the problem is that 230 doesn't condition its protection on particular behavior. It is unconditional. You're complaining about comments saying it shouldn't be universal liability protection, but you're explicitly comparing it to something that is unconditional? I don't get why you made this comparison. Even Ron Wyden has said that companies have not lived up to the liability protection they've gotten.
If you want to avoid backlash, I think you would benefit from spelling out more concretely what should lead to liability. That part is left pretty nebulous, and filling it in would go a long way.
we need a liability environment that doesn’t punish the attempt.
You want a liability environment that doesn't punish good-faith attempts, but does punish bad-faith or negligent ones. A 230-like framework explicitly does not do that. In context, you can argue that for 230, on First Amendment expression grounds, "bad" stuff deserves protection. That doesn't apply to professional medical care.
Contrary to the widespread belief among the media and politicians, Section 230 didn’t eliminate accountability
How can you hold a site that has actively bad moderation legally accountable, again? You can't. It's all market accountability, and that's not something we consider acceptable for medical practitioners. And looking at companies like Facebook/Twitter, it should be clear why.
The DNC has nowhere similar to go. For one thing, Democrats are not in power right now
For what it's worth, I don't think it's useless to signal what you're going to do when you are in power. This is a repeated game; CBS still needs to worry about what, say, 2029 looks like.
For another, its voters expect better from them while MAGA voters are happy the more corrupt “their” administration is.
There isn't great official polling on it, but there seems to be a very high appetite among the base to actually enforce stuff. I'm not sure this would go over poorly, as long as it's clear what it was.
Is it that complicated for you to understand the difference between what is Constitutionally not allowed and the idea that there are other ways to deal with the consequences of that?
I see the distinction, but I wouldn't call it disagreement with the ruling. "Citizens United was wrongly decided", "Citizens United was correctly decided on the Constitutional limits but causes problems that can be addressed in other ways", "Citizens United was correctly decided for corrupt reasons", and "Citizens United was correctly decided" are four different opinions. Disagreeing would be the first one; the other three aren't disagreeing with the ruling itself. You're saying the second, and that's not a disagreement with the ruling, which is why you're saying it was correctly decided and you don't think it should go.
There are two ways to agree with a ruling. You can agree it was correct on the merits but has bad secondary effects, or you can just agree with it. There's an (important!) distinction; those are not the same thing, but they are ultimately both forms of agreement. I'm not saying to ignore the distinction. But at the end of the day, you are still arguing to keep it.
And, more importantly, it is a strong First Amendment win, with language that will be useful in later cases, including ones where more liberal positions have been impacted by government overreach
I can accept rulings I disagree with, where I can see and understand the Constitutional logic behind them. For example, while I agree that the post-Citizens United change in campaign finance has been disastrous and needs to be fixed, I think the actual ruling in that case is not just defensible, but correct on the law (i.e. I think the fixes to campaign finance should come from elsewhere, not from getting rid of that ruling).
Similarly, while the underlying hatred and bigotry animating the decisions in 303 Creative and Chiles v. Salazar are deeply problematic, the actual rulings make some level of Constitutional sense on First Amendment grounds.
If you don't think they should be overturned, doesn't that mean you agree with those rulings? You just acknowledge the consequences.
That is, at the time of this writing, the most that Sony has said about whatever the hell is going on here
There was apparently a statement yesterday: https://www.gamespot.com/articles/playstation-users-report-new-online-license-checks-for-digital-games/1100-6539651/
an SIE spokesperson telling GameSpot the following: "Players can continue to access and play their purchased games as usual. A one-time online check is required to confirm the game's license, after which no further check-ins are required."
And, again, that makes all of this very unhelpful if you want to get into some real discussions about where this technology should be used and where it shouldn’t.
Getting people to actually consider the problem before it inevitably happens seems rather helpful. It is a real discussion: people like Cain genuinely want to use it that way, and they don't think they even need to hide it.
AI needs to be a tool on the perimeter, not the creative force itself. I don’t want the pen telling me the story of Odysseus; I want the writer to use the pen to do so.
If someone's response to that was ultimately "this technology simply isn't going away; you can rage against this literal machine all you like, it will be in use," it would probably feel pretty bad, eh?
If you don't want this to happen, it's worth discussing how to disincentivize it from happening. Because people like Cain seem confident the market isn't going to do it on its own; quite the opposite.