Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For

from the bad-defendants-make-bad-law dept

First things first: Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust & safety, all while prioritizing growth metrics over user safety. If you’ve been reading Techdirt for any length of time, you know we’ve been critical of the company for years. Mark Zuckerberg deserves zero benefit of the doubt.

So when a New Mexico jury on Tuesday ordered Meta to pay $375 million for “enabling child exploitation” on its platforms, and a California jury on Wednesday found Meta and YouTube liable for designing addictive products that supposedly harmed a young user, awarding $6 million in total damages, the reaction from a lot of people was essentially: good, screw ’em, they deserve it.

And on a visceral, emotional level? Sure. Meta deserves to feel bad. Zuckerberg deserves to feel bad.

But if you care about the internet — if you care about free speech online, about small platforms, about privacy, about the ability for anyone other than a handful of tech giants to operate a website where users can post things — these two verdicts should scare the hell out of you. Because the legal theories that were used to nail Meta this week don’t stay neatly confined to companies you don’t like. They will be weaponized against everyone. And they will functionally destroy Section 230 as a meaningful protection, not by repealing it, but by making it irrelevant.

Let me explain.

The “Design” Theory That Ate Section 230

For years, Section 230 has served as the legal backbone of the internet. If you’re a regular Techdirt reader, you know this. But in case you’re not familiar, here’s the short version: it says that if a user posts something on a website, the website can’t be sued for that user’s content. The person who created the content is liable for it, not the platform that hosted it. That’s it. That’s the core of it. It serves one key purpose: putting liability on the party who actually commits the violative act. It applies to every website and every user of every website, from Meta down to the smallest forum, the blog with a comments section, and the person who retweets a post or forwards an email.

Plaintiffs’ lawyers have been trying to get around Section 230 for years, and these two cases represent them finally finding a formula that works: don’t sue over the content on the platform. Sue over the design of the platform itself. Argue that features like infinite scroll, autoplay, algorithmic recommendations, and notification systems are “product design” choices that are addictive and harmful, separate and apart from whatever content flows through them.

The trial judge in the California case bought this argument, ruling that because the claims were about “product design and other non-speech issues,” Section 230 didn’t apply. The New Mexico court reached a similar conclusion. Both cases then went to trial.

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

As Eric Goldman pointed out in his response to the verdicts:

The lower court rejected Section 230’s application to large parts of the plaintiffs’ case, holding that the claims sought to impose liability on how social media services configured their offerings and not third-party content. But social media’s offerings consist of third-party content, and the configurations were publishers’ editorial decisions about how to present it. So the line between first-party “design” choices and publication decisions about third-party content seems illusory to me.

If every editorial decision about how to present third-party content is now a “design choice” subject to product liability, Section 230 protects effectively nothing. Every website makes decisions about how to display user content. Every search engine ranks results. Every email provider filters spam. Every forum has a sorting algorithm, even if it’s just “newest first.” All of those are “design choices” that could, theoretically, be blamed for some downstream harm.

The whole point of Section 230 was to keep platforms from being held liable for harms that flow from user-generated content. The “design” theory accomplishes exactly what 230 was meant to prevent — it just uses different words to get there.

Bad defendants make bad law. Meta is unsympathetic. It’s understandable why they get so much hate. It’s understandable why people (including those on juries) are willing to accept legal theories against them that would be obviously problematic if applied to anyone else. But legal precedent doesn’t care about your feelings toward the defendant. What works against Meta works against everyone.

The Return Of Stratton Oakmont

If this all sounds familiar, it should. This is almost exactly the legal landscape that existed before Section 230 was passed in 1996, and the reason Congress felt it needed to act.

In the early 1990s, Prodigy ran an online service with message boards and made the decision to moderate them to create a more “family-friendly” environment. In the resulting lawsuit, Stratton Oakmont v. Prodigy, the court ruled that because Prodigy had made editorial choices about what to allow, it was acting as a publisher and could therefore be held liable for everything users posted that it failed to catch.

The perverse incentive was obvious: moderate, and you’re on the hook for everything you miss. Don’t moderate at all, and you’re safer. Congress recognized that this was insane — it punished companies for trying to do the right thing — and passed Section 230 to fix it. The law explicitly said that platforms could moderate content without being treated as the publisher or speaker of that content. And, as multiple courts rightly decided, this was designed to apply to all publisher activity of a platform — every editorial decision, every way to display content. The whole point was to allow online services and users to feel free to make decisions regarding other people’s content, including how to display it, without facing liability for that content.

And a critical but often overlooked function of Section 230 is that it provides a procedural shield: it lets platforms get baseless lawsuits dismissed early, before the ruinous costs of discovery and trial.

These two verdicts effectively bring us back to Stratton Oakmont territory through the back door. By recharacterizing platform liability as “product design” liability rather than content liability, plaintiffs’ lawyers have found a way to nullify Section 230 without anyone having to vote to repeal it. Every design decision — moderation algorithms, recommendation systems, notification settings, even the order in which posts appear — can now be characterized by some lawyer as a “defective product” rather than an editorial choice about third-party content.

Except this time, instead of people being horrified by the implications, they’re cheering.

The Trial Is the Punishment

The dollar amounts in these cases tell an interesting story if you pay attention. The California jury awarded $6 million total — $4.2 million from Meta, $1.8 million from YouTube. For companies that bring in tens of billions in quarterly revenue, that’s effectively nothing. It’s not even a slap on the wrist. Meta will barely notice.

But that’s exactly the problem. The real cost here is the process. The California trial lasted six weeks. The New Mexico trial lasted nearly seven. Both involved extensive discovery, depositions of top executives including Zuckerberg himself, production of enormous volumes of internal documents, and armies of lawyers on both sides.

Meta can afford that. Google can afford that. You know who can’t? Basically everyone else who runs a platform where users post things.

And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.

The California case is just the first of multiple “bellwether” trials scheduled in the near future. Hundreds of federal cases are lined up behind those. There are over 1,600 plaintiffs in the consolidated California litigation alone. As Goldman noted:

Together, these rulings indicate that juries are willing to impose major liability on social media providers based on claims of social media addiction. That liability exposure jeopardizes the entire social media industry. There are thousands of other plaintiffs with pending claims; and with potentially millions of dollars at stake for each victim, many more will emerge. The total amount of damages at issue could be many tens of billions of dollars.

This is the Stratton Oakmont problem all over again, but worse. At least in 1995, only companies that moderated faced liability. Now, any company that makes any “design choice” about how to present user content — which is to say, literally every platform on the internet — is potentially on the hook for any harm that comes to any user, so long as some lawyer can claim it happened because they used that service. The lawsuit becomes a weapon regardless of outcome, because the cost of defending yourself is ruinous for anyone who isn’t a trillion-dollar company.

The Encryption Problem: Where “Design Liability” Leads

If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.

But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors — choices made by people, not by the platform’s design.

The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?

And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.

The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.

The Causation Problem

We also need to talk about the actual evidence of harm in these cases, because it’s thinner than most people realize.

The California plaintiff, known as KGM, testified that she began using YouTube at age 6 and Instagram at age 9, and that her social media use caused depression, self-harm, body dysmorphic disorder, and social phobia. Those are real and serious harms that genuinely happened to a real person, and no one should minimize her suffering.

But as Goldman noted:

KGM’s life was full of trauma. The social media defendants argued that the harms she suffered were due to that trauma and not her social media usage. (Indeed, there was some evidence that social media helped KGM cope with her trauma). It is highly likely that most or all of the other plaintiffs in the social media addiction cases have sources of trauma in their lives that might negate the responsibility of social media.

The jury was asked whether the companies’ negligence was “a substantial factor” in causing harm. Not the factor. Not the primary factor. A substantial factor.

This standard is doing enormous work here, and nobody in the coverage seems to be paying attention to it. In most product liability cases, causation is relatively straightforward: the car’s brakes failed, the car crashed, the plaintiff was injured. You can trace a mechanical chain of events. There needs to be a clear causal chain between the product and the harm.

But what’s the equivalent chain here? The plaintiff scrolled Instagram, saw content that made her feel bad about her body, developed body dysmorphic disorder? Which content? Which scroll session? How do you isolate the “design” from the specific posts she saw, the comments she read, the accounts she followed?

With a standard that loose, applied to a teenager with multiple documented sources of trauma in her life, how do you disentangle what was caused by social media and what was caused by everything else? The honest answer is: you can’t. And neither could the jury, not with any scientific rigor. They made a judgment call based on vibes and sympathy — which is what juries do, but it’s a terrifying foundation for reshaping internet law.

The evidence for a causal relationship between social media and teen mental health problems is incredibly weak. Over and over and over again researchers have tried to find a causal link. And failed. Every time.

Lots of people (including people involved in both of these cases) keep comparing social media to things like cigarettes or lead paint. But, as we’ve discussed, that’s a horrible comparison. Cigarettes cause cancer regardless of what else is happening in a smoker’s life. Lead paint causes neurological damage regardless of a child’s home environment. Social media is not like that. The relationship between social media use and mental health outcomes is complex, highly individual, and mediated by dozens of confounding factors that researchers are still trying to untangle.

And, also, neither cigarettes nor lead paint are speech. The issues involving social media are all about speech. And yes, speech can be powerful. It can both delight and offend. It can make people feel wonderful or horrible. But we protect speech, in part, because it’s so powerful.

But a jury doesn’t need to untangle those factors. A jury just needs to feel that a sympathetic plaintiff was harmed and that a deeply unsympathetic defendant probably had something to do with it. And when the defendant is Mark Zuckerberg, that’s a very easy emotional call to make. Which is exactly why this is so dangerous as precedent. If “a substantial factor” is the standard, and the defendant’s internal documents showing employees discussing concerns about safety count as proof of wrongdoing, then essentially any plaintiff who used social media and experienced mental health difficulties has a viable lawsuit. Multiply that by every teenager in America and you start to see the scale of the problem.

Then recognize that this applies to everything on the internet, not just the companies you hate. A Discord server for a gaming community uses a bot to surface active conversations — design choice. A small forum for chronic illness patients sends email notifications when someone replies to your post — design choice. A blog lets readers comment on articles and notifies writers when they do — design choice. A local news site has a comments section that displays newest-first — design choice. Every one of these could theoretically be characterized as “features that increase engagement” and therefore potential vectors of liability.

And the claims of “addiction” are even worse. As we’ve discussed, studies show very little support for the idea that “social media addiction” is a real thing, but many people believe it is. But it’s not difficult for a lawyer to turn anything that makes people want to use a service more into a claimed “addictive” feature. Oh, that forum has added gifs? That makes people use it more! Sue!

Yes, some of these may sound crazy, but lawyers are going to start suing everyone, and the sites you like are going to be doing everything they can to appease them, which will involve making services way worse.

Who’s Not in the Room

There’s also something that got zero attention in either trial: the people for whom social media is genuinely, meaningfully beneficial.

Goldman’s observation on this deserves to be read carefully:

Due to the legal pressure from the jury verdicts and the enacted and pending legislation, the social media industry faces existential legal liability and inevitably will need to reconfigure their core offerings if they can’t get broad-based relief on appeal. While any reconfiguration of social media offerings may help some victims, the changes will almost certainly harm many other communities that rely upon and derive important benefits from social media today. Those other communities didn’t have any voice in the trial; and their voices are at risk of being silenced on social media as well.

LGBTQ+ teenagers in hostile communities who find support and connection online. People with rare diseases who find communities of fellow patients. Activists in authoritarian countries who use social media to organize. Artists and creators who built careers on these platforms. People with disabilities who rely on social media as their primary social outlet. None of them were in that courtroom. None of them had a voice in the proceedings that will reshape the platforms they depend on.

When platforms are forced to “reconfigure their core offerings” to reduce liability — which could mean anything from removing algorithmic recommendations to eliminating features that enable connection and discovery — the costs won’t fall evenly. Meta and Google will survive. They’ll make their products blander, less useful, and more locked down. It’s the users who relied on those features who will pay the price.

Bad Defendants Make Bad Law

Both Meta and YouTube have said they will appeal, and they have plausible grounds. The product liability theory applied to what are fundamentally speech platforms raises serious First Amendment questions. The Section 230 issue — whether “design choices” about presenting third-party content are really just editorial decisions that 230 was designed to protect — will almost certainly get a serious look from appellate courts. The causation questions are genuinely unresolved.

But appeals take years. In the meantime, every plaintiffs’ attorney in America now has a proven template for suing any social media platform. The bellwether structure means more trials are already scheduled — the next California state court one is in July, with a similar federal case starting in June. The litigation flood has started, and 230’s procedural protection — the ability to get these cases dismissed before they become multi-million-dollar ordeals — has already been neutralized.

Goldman is right to frame this as existential:

There are thousands of other plaintiffs with pending claims; and with potentially millions of dollars at stake for each victim, many more will emerge. The total amount of damages at issue could be many tens of billions of dollars.

None of this means the harms kids face don’t deserve serious attention. They do. There are ways to address legitimate concerns about teen mental health that don’t require treating every editorial decision about third-party content as a defective product — but they involve hard, unglamorous work, like actually funding mental health care for young people.

But suing Meta is more fun!

Meta can absorb tens of billions. But this legal theory doesn’t apply only to Meta. It applies to every platform that makes “design choices” about how to present content — which again, is every platform. The next wave of lawsuits won’t just target trillion-dollar companies. They’ll target anyone with a recommendation algorithm, a notification system, or an infinite scroll feature, which in 2025 is basically everyone.

We got Section 230 because Congress looked at the Stratton Oakmont decision and realized the legal system had created a set of incentives that would destroy the open internet. The incentive now is arguably worse: not just “don’t moderate” but “don’t build anything that makes user-generated content engaging, discoverable, or easy to access, because if someone is harmed by that content, the way you presented it makes you liable.”

I get why people are cheering. Meta is a bad company that has made bad choices and treated its users badly. Zuckerberg has earned most of the contempt coming his way. Kids have been genuinely harmed, and the instinct to want someone powerful to be held accountable is about as human as it gets.

But bad defendants make bad law. And the law being made here — that platforms are liable for the “design” of how they present the third-party content that is their entire reason for existing — will not stay confined to companies you don’t like. It will be used against every website, every app, every platform, every small operator who ever made a choice about how to display user-generated content. It will make Section 230 a dead letter without anyone having to vote to repeal it. It will create a legal environment where only the largest companies can afford to operate, because only they can absorb the cost of endless litigation.

What you won’t get out of this is anything approaching “accountability.” You’ll get overly lawyered-up systems that prevent you from doing useful things online, and eventually the end of the open internet — cheered on by people who think they’re punishing a bully but are actually handing the bully’s biggest competitors a death sentence.

Companies: google, meta, youtube


Comments on “Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For”

104 Comments
This comment has been deemed insightful by the community.
Anonymous Coward says:

First: “That liability exposure jeopardizes the entire social media industry.” (per Goldman)

Good. The entire social media “industry” should be burned to the ground. The tiny amount of good that it does in the world can be and will be replaced by other mechanisms if that happens; the enormous amount of harm that it does will be (at least temporarily) eliminated.

Second: I’m not in the least bit concerned about the implications of this verdict. Sensible people did just fine before Section 230 came along and we’ll do just fine if it goes away. The reckless and careless won’t, of course, but that’s on them. I am concerned about the erosion of end-to-end encryption because it’s one of the rather slender threads still providing some modicum of security and privacy. And if we lose that, it will be the fault of the social media “industry” for pissing in the pool — yet another example of the damage they’ve done.

Anonymous Coward says:

Re:

That’s weird, because I’m also not in the least bit concerned about the implications of this verdict. Because sensible people did just fine before end-to-end encryption came along and we’ll do just fine if it goes away. The reckless and careless won’t, of course, but that’s on them.

Just sounds like you have something to hide. /s

Anonymous Coward says:

Re: Re:

In the ’80s, people did not sue because they read something that someone else said on a web page; they did not sue because you could keep on scrolling.

In Australia, within 24 hours the news sites were saying the ban should apply to all apps that do things related to the language used in the lawsuit.

Adam Gordon says:

Re: Re: I'm not worried either...

There are a couple of really bad arguments in Masnick’s post. First, about the paint-drying example. Flip that around. Imagine a social media site that was nothing but lowest-common-denominator slop (say ExTwitter). Every post deliberately engineered to enrage and serve as clickbait (or in X’s case, masturbatory fodder). But between every post, you have to click Next, click through 5 unskippable pages of ads, AND two captchas, to see the next slop post–every old-school bad blog design in the book. That wouldn’t work either. It’s the COMBINATION of product design AND allowing garbage hate posts & other addictive content (that shouldn’t be allowed) that is being found legally liable. That judgement is correct.

Second, victim blaming is a bad, desperate look for the intellectual side losing the argument among the American people.

I have to credit Matt Stoller for his take: “But legal elites have a reverence for a certain corporate-friendly version of the mid-20th century First Amendment. They fear the government, and only the government. To them, it’s always 1971, and the New York Times is always facing Nixon over the Pentagon Papers. The result is many free speech advocates have adopted a deeply immoral and corporatized vision of speech…Since then, corporations have used that provision of the Constitution to block labor law, election rules, and regulation of pornography…This strange view of free speech has created a disconnect of libertarian legal elites towards basic morality.”

But to reiterate the point above, I remember a world where not everyone with an opinion could post it in a public forum, to humanity’s detriment. The world was just fine before the Internet & Section 230. Unfettered, user-generated content on the Internet will go down as one of mankind’s greatest mistakes.

Dang says:

Re: Re: Re:

Doesn’t the “design” versus “content” argument break down when money is involved?
What I mean is that if a product is designed to make money off of third-party generated content AND, by design, the most money is made by monetizing harmful material to the greatest extent, then from a social, political, and legal perspective doesn’t it make sense for the design, and by extension the owner/operator of that design, to be held accountable for the harm that the design causes?

Anonymous Coward says:

Re: Re: Re:2

I think this is a hugely important point. The websites simply are not putting everything out there that someone wants to post and avoiding moderation. They are promoting posts known to be harmful. The idea that we can’t regulate or punish that because it will be difficult or could lead to overreach is just slippery slope slop.

Anonymous Coward says:

Re: Re: Re:2

You get two choices:

Deal with bad faith actors and gaslighting

OR

You lose freedom of speech worldwide.

If you don’t see a lot of retaliation against rich people (likely sidestepping the rule of law) for these types of product design (and numerous other ‘gaming the system’ behaviors in the financial, agricultural, medical, and real estate sectors, among others), then 1A and user-generated content are going to be blown the fuck up.

If and when that happens, distribution of the most engaging content will just stop using the internet, and tech will roll back to the era of USB drives, CDs/DVDs, and so on.

If you want the First Amendment to remain there must be a way to cause catastrophic levels of financial harm to trillionaire individuals and businesses to force behavioral changes.

Daniel James says:

Re: Re: Re:

That you have to reference the fact that platforms allow garbage content is why this theory should not be legally viable. Displaying garbage content is an editorial choice. Platforms have First Amendment rights to publish garbage content as long as it’s not obscene or illegal—that has been black letter law for decades. And they are not liable for the editorial choice to display garbage content created by others under section 230. If you can’t disentangle the design choice from the content, it should not be the basis of a tort claim.

I also urge you to remember that garbage content is subjective. I think X is a cesspool radicalization machine, and I wish they would do better at removing all the bots spreading Nazi propaganda. But the same legal theory that would let me sue X for getting people addicted to racist ragebaiting would also allow someone to sue them for getting kids addicted to “trans propaganda.” We protect speech we find unpalatable because it is what ensures our speech is protected when it’s unpopular. It doesn’t always feel good, but it’s a necessary part of democracy.

Bad arguments says:

Re: Re:

You understand that you are people, correct? Like smarmy folks who wear snide t-shirts rating people as “1-star.” Do they suffer from sado-masochism or low self-esteem? It’s an odd and self-centered take to dismiss everyone else except one’s self as “shit.” Clearly you despise social media, but that doesn’t make your opinion that all social media is bad a fact. Moreover, this decision is NOT going to end social media; in point of fact, it will just buttress the current social media monopolies that you decry as ruining the world.

realitymonster (profile) says:

Re: Re: Re:

Your mindset is ‘harm never matters, we should never try to make things better if speech is adjacent to the topic, and by the way, I don’t know what an algorithm is.’

The relentless refusal to acknowledge that real harm is done to actual people by companies like Facebook is baffling.

Here’s something to think about: the first amendment isn’t the only way in the world to protect free speech. Lots of countries do it. Section 230 isn’t the only way in the world to write a law that protects online speech, so maybe figure that one out too. If your government sucks (it does, we know) maybe fix that first. Absolutely myopic, inflexible tunnel vision. Do better.

This comment has been deemed insightful by the community.
Dister (profile) says:

“But that means no company is going to allow anyone to raise concerns ever again.”

The catastrophizing of this is a bit much. I get that, in some sense, this makes a platform liable for the content it hosts, but this is not suddenly a world where every single design decision will result in a lawsuit. That was not true for car design after Ford Pintos started blowing up, and it won’t be true here. Even with this decision, a platform is not liable for a person posting something defamatory or otherwise illegal. What they are liable for is, given the existence of unsafe material, if they design their product in a way that causes harm, then they are potentially liable. That is a very different legal assertion.

I also am unconvinced by the “procedural safeguard” nonsense. A lawsuit is a lawsuit, regardless of the grounds and the defenses. Bringing a suit against a corporation that loses because of 230 is STILL BRINGING A LAWSUIT. Having an additional defense is not “procedural” it is legal. You still need to prosecute the suit. There are actual procedural mechanisms to end a suit early and throw out frivolous cases (a Rule 12(b)(6) motion to dismiss and state equivalents, and summary judgment, to name two primary ones). But a LEGAL defense is not a PROCEDURAL device; the legal defense still needs to be asserted and litigated.

I dunno man, I am sympathetic to the “26 words that created the internet” thing, but these absolutes about how 230 is now effectively dead, and no one will ever moderate ever again, and reddit and bluesky and mastodon and snap and tik tok are now going to die because they can’t moderate, are a little dramatic. I think the effects here are much narrower than you are portraying them.

Rico R. (profile) says:

Re:

I get that, in some sense, this makes a platform liable for the content it hosts, but this is not suddenly a world where every single design decision will result in a lawsuit.

Sure, not every decision made will result in a lawsuit, but if discussion on weighing the risks and benefits of making one decision over another can be recontextualized as, “See? They knew there could be a risk, and they did it anyway,” that’s going to chill any future discussion. Any such discussion carries the risk that if any affected party decides to sue, that conversation can be brought up as evidence, regardless of whether they actually acted with malicious intent.

…a platform is not liable for a person posting something defamatory or otherwise illegal. What they are liable for is, given the existence of unsafe material, if they design their product in a way that causes harm, then they are potentially liable.

I think this underestimates the effects of using any sort of algorithm to sort content. I guarantee you your Facebook feed is different from mine, and it’s not just because we know different people. We have different interests and engage with different kinds of content. And when you engage with certain kinds of content, the algorithm picks that up and shows you more of that content in the future. Stop interacting with that kind of content, and you’ll slowly but surely see less and then nothing at all.

Also, remember that what could be considered “harmful” to you is not necessarily harmful to me, and vice versa. It’s not as simple as saying, “The algorithm was designed in a way that harmed this user, therefore you’re at fault for harming them.” As Mike pointed out, the algorithm, design features, etc., are dependent on both the content presented and how the person seeing it interacts with it.

A lawsuit is a lawsuit, regardless of the grounds and the defenses. Bringing a suit against a corporation that loses because of 230 is STILL BRINGING A LAWSUIT. Having an additional defense is not “procedural” it is legal. You still need to prosecute the suit.

A lawsuit is just a lawsuit when it’s against giants like Meta, YouTube, etc. But what about the little guys? What about new startups? Heck, what about Techdirt’s comment section? Lawsuits can devastate new and small platforms. Section 230 can be pointed to as a law that says, “We can’t be held liable for another user’s content posted to our site”, and the lawsuit is dismissed at the motion to dismiss. Almost any other grounds to win the case (First Amendment, lack of causal connection to the case, etc.) can only be dispositive at a later stage in litigation.

That means you have to go through discovery, and legal fees quickly skyrocket. And even if they win, there’s no guarantee that they’ll survive. We’re talking about the difference between legal fees in the hundreds of dollars and legal fees in the tens of thousands of dollars (if not more). And that’s not even accounting for the cost of appeals.

You don’t even have to take my word for it. Granted, the circumstances are different, but consider Veoh, a video-sharing site that could have easily rivaled YouTube. Veoh was sued for copyright infringement on their platform. Section 230 doesn’t apply to IP laws, but the DMCA does. Crucially, though, it’s not as clear-cut as “We can’t be held liable” like Section 230. It requires going through discovery. YouTube went through a similar legal case, but they had been bought by Google at that point, so the financial burden wasn’t as great as it was for Veoh, which remained independent. YouTube and Veoh both won their lawsuits, but the cost of litigation proved too much for Veoh, and so it had to shut down. YouTube is still going.

So, whether you categorize it as just procedural or a legal defense, the difference is the cost involved. For tech giants like YouTube and Meta, it’s pocket change. For new startups and small operations, the difference is whether they can afford to continue operating after the litigation has concluded, assuming they’re victorious in court.

Dister (profile) says:

Re: Re:

“Sure, not every decision made will result in a lawsuit, but if discussion on weighing the risks and benefits of making one decision over another can be recontextualized as, “See? They knew there could be a risk, and they did it anyway,” that’s going to chill any future discussion.”

Not super convinced by this. Most companies, even small ones, have legal counsel and some understanding of how to document their design decisions to reduce liability. And even with a case like this, the outcome was not simply that someone raised a comment that the platform might be unsafe or that safety could be better, but rather, from my understanding, that the way it was discussed and the decisions were made showed that the issues were ignored or even preferred in order to serve business interests. Again, there are ways to make these decisions and to document them to show reasonable efforts to mitigate safety issues.

“As Mike pointed out, the algorithm, design features, etc., are dependent on both the content presented and how the person seeing it interacts with it.”

From my view, it is yes and no. Mike seems to be saying that the design features (infinite scroll, notifications, algorithm, etc.) alone cannot be blamed for harm unless they are serving content, in which case the content is to blame. It is true that content sufficient to cause the harm is needed in order for the design features to play a role in causing that harm. However, I would say that it is a bit silly to say that the design features are implemented in a content-agnostic way, and that it is solely via the content that the features become harmful. We need to live in the real world where, without blaming any particular piece of content (remember, the judge said all claims about the content itself could not proceed in the case), harmful content does exist. It just does. On Instagram. On Facebook. On YouTube. On X. If the design features are constructed in a way to “weaponize” that harmful content, then I think you would be reasonable in saying that those design features are bad, regardless of any particular item of content. I dunno, this argument just feels like saying “well the conspirator didn’t rob the bank, he just sent a volunteer to do it for him, so he can’t be guilty of any crime.”

“So, whether you categorize it as just procedural or a legal defense, the difference is the cost involved. For tech giants like YouTube and Meta, it’s pocket change. For new startups and small operations, the difference is whether they can afford to continue operating after the litigation has concluded, assuming they’re victorious in court.”

True. I was being unnecessarily pedantic. But I will say, people always seem to look at only one side of this equation. Yes, litigation is costly for defendants. But it is also costly for plaintiffs. It takes up a ton of time and a ton of money to bring a suit. Sure, you could bring a frivolous case, and maybe it survives a motion to dismiss, but each submission to the court will be tens of thousands of dollars, even for the plaintiff. Discovery is also expensive for the plaintiff. You could say that maybe these plaintiffs will get lawyers on contingency fees so they only pay if they win, but the lawyers will then only take the case if they are likely to succeed, otherwise they burn a bunch of money.

This is why I say the catastrophizing of this is a bit over the top. Is there increased exposure? Yes. Is every tom, dick and harry going to come out of the woodwork to file a lawsuit? Probably only the ones that have some reasonable expectation of success.

Last point is that Meta and Youtube will probably face a deluge of these cases now that they have already been found liable, making the case much easier. But that does not extend to other platforms like Bluesky or Mastodon or Reddit or whoever else.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

The “procedural shield” is against financially ruinous but legally groundless lawsuits, because motions to dismiss on Section 230 grounds are decided before discovery and trial — the most expensive parts of a lawsuit. Section 230, in other words, protects platforms from being bankrupted by well-funded opponents who can afford to launch meritless lawsuits just to force the platform to bear the cost of a legal defense.

Dister (profile) says:

Re: Re:

Yeah, I mentioned above, I was probably being a bit silly in being pedantic on this point. I get the idea. I just think it’s important to remember that the same motion to dismiss exists for all civil causes of action. Section 230 provides a relatively bright-line rule that makes it easier to get a case thrown out at that stage. But the great thing about common law is that the courts develop tests and standards that can help define a good versus bad case.

This comment has been deemed insightful by the community.
Commenter #5759 (profile) says:

Re:

I get that, in some sense, this makes a platform liable for the content it hosts, but this is not suddenly a world where every single design decision will result in a lawsuit. That was not true for car design after Ford Pintos started blowing up, and it won’t be true here.

You have a point, but perhaps not as strong a point as you’re making it out to be.

In the case of the Pinto, both the nature of the harm and the cause of the harm were well-defined. We knew that Pintos tended to explode when rear-ended, and it was pretty easy to figure out that that happened because of where the Pinto’s gas tank was positioned, among other factors.

But when the harm is “I spend too many hours on this website and I feel terrible afterwards”, everything is much more muddled.

  • How do you prove that the website caused the harm? Maybe the plaintiff would have felt just as bad without the website.
  • If the website did cause harm, was the harm due to the website’s design, or to its content?
  • If the harm was due to design, which features of the design contributed to the harm, and to what degree?
  • What if design features that have negative effects for some users have positive effects for others? How should those be balanced?

None of those questions are easy to answer.

If these cases hold, I personally expect a lot of lawsuits. Anyone who uses a black-box algorithm to recommend or rank content, outside of traditional search engines, will be accused of being “addictive”. We’ll see “binge-watching” lawsuits against streaming services. It’ll be nuts.

(As an aside: now that I’m reading the Wikipedia article, and this cited source from the article, even the case of the Pinto wasn’t quite that clear-cut. Yes, it did have more of a problem with rear-end collision fires than other models. But the position of the gas tank was fairly standard for American cars of that era, and the Pinto’s overall fatality rate was entirely unremarkable for a subcompact. Few people today think of the ’70s VW Beetle as a “deathtrap”, but when you look at the number of deaths per million cars on the road, it was worse than a Pinto.)

Dister (profile) says:

Re: Re:

I think there will be a lot of cases against Meta and Youtube in the near future (and probably some others like X). But my general feeling about all of this is threefold:
1) I think there is a world where a platform could deliberately or negligently use its product design to cause harm. An example would be if Elon Musk decided to adjust the X algorithm to inundate certain individuals or groups with harmful and egregious posts. This feels to me like the kind of example where there is a bad act likely to cause harm that is not dependent on any particular item of content, and I think it is reasonable to say this kind of action should be subject to legal accountability.
2) Drawing a line from an act to a harm is the entire purpose of civil law. Indeed, courts, at a fundamental level, exist for primarily two reasons – criminal accountability, and resolving civil disputes regarding whether an act caused an injury. There are many other circumstances where showing causation is also hard, and yet we still endeavor to find the truth of it. “Hard,” I think, is a bad excuse to avoid answering the question.
3) Litigation against platforms is not inherently bad. It isn’t inherently good either, but the tone seems to be “heavens to betsy, however will we survive if a company is sued.” I said this above in another response, but tons of industries are heavily regulated and thus subject to lawsuits. They adjust. Everyone survives. In some cases, it even creates a healthier market, because users can trust the product, know that if something goes wrong they aren’t hung out to dry, and because overall safety increases. You could make the same argument about the FDA and pharmaceuticals. FDA clearance takes years and costs millions of dollars. Biotech and pharma seem to make it work, and we can all be assured (at least in theory) that there is some degree of safety in an FDA-cleared product.

Overall, my point isn’t really that this particular case was correct, or that some of what Mike says won’t be true. I think it will be tough for online platforms, at least for a while. But does that mean “no company is going to allow anyone to raise concerns ever again” and “You’ll get overly lawyered-up systems that prevent you from doing useful things online, and eventually the end of the open internet”? Maybe the facts of this case were bad and the jury should have gone the other way, or maybe the judge needed to better delineate between design and content such that this case should have been dismissed. Those are questions I guess we will need to wait for the appeal to answer. But the idea that the only possible result is no less than the death of the open internet seems like a bit much. Algorithmic feeds exist in other countries that don’t have Section 230. Why are we special?

MG says:

Re: Re: Re:

We don’t have 230 in Europe and the country I live in, but everyone is watching, and speech against social media/video games/smartphones/screentime is more and more popular. More and more people are for restrictions, and nobody seems to be taking censorship into consideration. Most of the mainstream media in my country are reposting what the US media are saying, translated, with no research and no second thoughts. So no, 230 we have not, but governments and politicians dreaming of a controlled internet? That, we have.

Anonymous Coward says:

Re: Bad argument

“…but this is not suddenly a world where every single design decision will result in a lawsuit. … Even with this decision, a platform is not liable for a person posting something defamatory or otherwise illegal. What they are liable for is, given the existence of unsafe material, if they design their product in a way that causes harm, then they are potentially liable.”

Which means that the order in which replies are shown on this site was a design choice which made me see your bad argument, which caused me mental anguish.

This is effectively a one strike and you’re out standard. If anyone sees one piece of harmful content, there is now legal precedent to blame that on the site’s “design” as if the site had to know what would harm that specific plaintiff and protect them.

Anonymous Coward says:

Sadly, I feel this was bound to happen.
If Meta & Google had been good actors, and hadn’t done stuff like this, this wouldn’t have happened.

I also can’t blame them. Why?

Unfortunately, while I feel there should be a way to make it work without Section 230, I feel the plaintiffs either didn’t want to get rid of 230 (most likely) or didn’t think this through.

Ninja (profile) says:

I’m not sure I agree here. I mean, there are measures that Meta and other companies can adopt to make their products more or less addicting (ie: infinite scroll). They can demonetize content involving child exploitation, they can even use automation successfully here while providing objective and useful ways for those who are wrongfully filtered out. There are plenty of ways platforms can make things better and they actively choose to go the other way.

So I don’t think it’s that much of a catastrophe. Sure we need to be very careful with regulations and legal tools but the platforms have shown many, many times they cannot be trusted to generally abide by good practices. So we have to start somewhere. I’m willing to see where this will lead. Because inaction has already produced very bad results.

This comment has been deemed insightful by the community.
Des says:

Re:

What happens when suddenly just having queer content, or any content not aligned with the government, is seen as “harmful”?

What happens when hundreds of religious zealots use “Think of the children” to attack sites that host secular educational information?

They don’t have to win, they just have to make the company lose so much money that it’s discouraged from even existing.

“But they aren’t talking about speech?”

They can just say that the algorithm is discriminatory and promotes lifestyles against their faith. Boom, now they have grounds to sue.

See the problem here?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re:

I’m not sure I agree here. I mean, there are measures that Meta and other companies can adopt to make their products more or less addicting (ie: infinite scroll)

Oh absolutely, there’s just one tiny problem…

And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

When internal discussions by staff about problems, solutions and trade-offs are used against the company then the incentive is to not have those discussions, and if you can’t even talk about problems solving them becomes effectively a non-starter.

Adrian Lopez (profile) says:

Re:

What is it about “infinite scroll” that makes it so dangerous? Is having people click on a “load more” button less “addictive?” Is a “load more” button more harmful than clicking on page numbers? Should a feed cut off after a certain point, even if users would like to see older content? I don’t understand what’s so evil about “infinite” scrolling.

Also, was Meta really monetizing content involving child exploitation, like doing that especially? I find it hard to believe.

Dister (profile) says:

Re: Re:

From my perspective, people seem to be conflating necessary and sufficient conditions. My understanding of this case is not that “infinite scroll is addictive and causes anxiety.” It is that infinite scroll is one of many features that contributed to the addictive qualities, were known to contribute to the addictive qualities, and were implemented to drive engagement (i.e. to take advantage of those qualities). It alone would not be sufficient, but it is, or at least can be, necessary in combination with other features. Whether you agree or disagree with the outcome, I think it is important to understand what the case actually stands for, which is not that infinite scroll, on its own, is evil.

That One Guy (profile) says:

'Not the briar patch, anything but that...'

The kicker about people cheering this on because fuck Meta/Google/TikTok is that they’re cheering on those companies winning.

Sure, they lost the trials, but as the article rightly notes several times, if this is allowed to stand Meta and Google can afford to pay the resulting legal fees. Their competitors, whether current or potential, cannot, which means if anything these rulings just cemented those companies’ positions in the marketplace by ensuring that no one can possibly unseat them, on top of threatening encryption and heavily incentivizing companies against even talking about potential problems.

This comment has been deemed insightful by the community.
ruraldoc (profile) says:

False advertising?

Although I’m not sure this case really shows it, it does seem plausible that a company-by-company (or at least algorithm-by-algorithm) case can be made for harm based on intentional “design choices”. You seem to conflate every algorithmic decision with window dressing akin to the size of windows or endless scrolls, etc. That might be correct for the decisions made by front-end developers (humans), but I think less so for decisions made by algorithms (sorry for the anthropomorphizing, but that’s where we are with these things, lately).

But what if the argument from these plaintiffs was not “I was a depressed teen and saw things that made it worse” and rather “the algorithm learned that I was depressed and was specifically designed to show me things calculated to make that worse”. I know that’s not being alleged here (and would be really hard to allege since algorithm details are usually sacred somehow), but it’s entirely plausible to be true.

Your example of the “paint drying” videos is good, but is of course modeled to leave out the fault of the curator. Does this depiction change if we view this as a museum, for example, that has decided to curate a room of violent BDSM gay porn? Legal, surely, at least amongst consenting adults. Also something many adults wouldn’t want shown to their kids. Now suppose that room was in an enclosed space in the museum but there was a series of primary-color signs, with smiling cartoon animals and balloons, inviting children to come visit this space and to see the new special Kids Exhibit! and maybe some imagery to make it extra appealing to Christian parents.

The content itself is still the content. Legal, appropriate in the right setting, etc. But the “platform” decisions about to whom to serve it could certainly be misleading? inappropriate? illegal? dangerous? Not sure which adjective is right, but I think it IS possible for platforms to make intentionally-bad choices regarding curation. And I think algorithmic decisions that build an FYP are a bit more like “curation” than “moderation”.

Eurydice (profile) says:

Re:

Now imagine that gay bdsm porn museum (weird example, but ok) is 18+, but parents keep bringing their under-18 children and then complaining they saw gay bdsm porn. Because all websites require you to be at least 13 to use them. There is space for family accounts in the social media case, but I think we need a little more parental responsibility here. “But her life was traumatic and her parents didn’t care enough to monitor her!” you might say. Okay… how is that social media’s fault?

Again, if our country is this concerned with teenage girls’ well-being (and as a former teenaged girl I assure you they are not), why are we not going after magazines for promoting super-thin bodies and Ozempic and eating disorders? Why not go after food manufacturers for adding sugar to stuff that doesn’t need it to get everyone addicted?

There is no instinctual drive to be on Facebook, but our bodies make us enjoy sugar as a survival mechanism. Which is more exploitative?

Anonymous Coward says:

Re: Curation

I have to agree here. Not all algorithms are equal. Facebook both highlights and obscures posts in the name of engagement. A newspaper has editorial control over its front page, and the same basic commerce drivers, but it also has the responsibilities of a publisher. By using an algorithm that does more than just display what’s available, one that chooses and pushes one thing over another to an audience of potentially billions (in FB’s case), it seems disingenuous to say that there isn’t an equivalent responsibility over what gets displayed. Nobody’s saying they shouldn’t curate if they feel that elevates their value (that would be a denial of their speech), but they have to take responsibility for that speech.

The threat of internal email discussions about possible flaws becoming liabilities seems a tad overblown as well. In a trial situation, nobody is going to pay much attention to such an email if the responses to it are “hey – that’s right, we’d better do something about it” or even “no – we have a body of independent research that suggests this is not actually a problem.” The excerpt makes a good media headline, but the context is important. It’s the sweeping under the rug that will get you the conviction.

This comment has been deemed insightful by the community.
Azuaron says:

Hold up

I don’t wholly agree with this ruling or its implications–The Encryption Problem, in particular, is a terrible argument that has to die–but I really have to address this section because it’s not accurate:

The trial judge in the California case bought this argument, ruling that because the claims were about “product design and other non-speech issues,” Section 230 didn’t apply. The New Mexico court reached a similar conclusion. Both cases then went to trial.

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

Instagram has, I’m sure, thousands of videos of paint drying that, I’m also sure, have very few views. Those videos have very few views because part of Instagram’s algorithmic recommendation system is to not serve videos of paint drying to people, because the design goal of Instagram is maximum addiction and use, which would not happen if their algorithm only recommended videos of paint drying.

The scenario of “Instagram, but with videos of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems,” is the scenario we’re in now where we do have people addicted, we do have people harmed, and people are suing. Constraining Instagram to have “only” videos of paint drying is a straw man because it nearly eliminates all the design decisions that caused the harm. So, yeah, if you eliminate all that design that causes harm, the harm isn’t caused, but that’s not what anyone’s talking about.

First, however, let’s start with what Section 230 actually says:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

There’s more that I believe isn’t currently relevant, but by all means look and correct me.

In everyday language, what does 230 say? It’s a narrow carve-out for responsibility based only on “providers are not necessarily publishers” and “providers can choose what content appears, or does not”.

Now, what are these lawsuits claiming? They claim (I’m going to speak to just Instagram here, but this applies to all the others as well):

  • That Instagram, as a system, has been specifically designed to be addictive
  • That Instagram, as a system, has been specifically designed to worsen the mental health of its users
  • That Instagram, as a system, has been specifically designed to maximize user engagement at the expense of that user
  • That children deserve additional protection–just like children get additional protection from advertisement–from hostile systems because their brains are still developing and they’re particularly vulnerable to it

None of those are content arguments, and saying, “But what if the content was paint drying?” is not relevant or helpful. People aren’t addicted to “a single Instagram video” or even “a single Instagram channel” (you can probably tell I’m not on Instagram; I’m sure they’re not called “channels”). People are addicted to the system of Instagram that feeds them content specifically tailored to maximize addiction and use, and feeds them content in a way that maximizes addiction and use. For some people that’s makeup videos, for some people that’s movie clips; the specific content is not the point. Hell, there’s probably one guy in Minnesota who’s hopelessly addicted to paint drying videos.

The problem, as with practically everything we’re dealing with in the world, is not single bad actors or individual responsibility. The problem is the system, and the system has, in fact as documented in court, been specifically designed to be addictive, to ruin people’s mental health, and to cause harm. The only way we’re going to be able to address this is by focusing on the system.

Finally, we’ve got to address this statement as well:

If every editorial decision about how to present third-party content is now a “design choice” subject to product liability, Section 230 protects effectively nothing. Every website makes decisions about how to display user content. Every search engine ranks results. Every email provider filters spam. Every forum has a sorting algorithm, even if it’s just “newest first.” All of those are “design choices” that could, theoretically, be blamed for some downstream harm.

Instagram’s targeted recommendation and addiction algorithm dark patterns are not the same thing as “newest first”. This is a slippery slope argument with no evidence that such a slope exists. If “newest first” was equally addictive and harmful, Meta would not have spent probably billions creating its various “engagement” systems. This is like saying a lawsuit against a restaurant that poisoned someone with puffer fish will lead to lawsuits against restaurants for selling salmon because they’re both fish.

Another example: we didn’t ban normal darts after we banned lawn darts, despite their similar design decisions, because of the key differences in their design decisions that resulted in clear and obvious differences in their harmful outcomes. No one’s going to get sued for “newest first” specifically because of how it’s different to the engagement algorithms.

The people and companies who make products have always been responsible for the designs of their products when those designs cause harm, from the lawn dart to the Pinto. And, we have long recognized that mental harms are harms: “Intentional infliction of emotional distress”, for instance, has been a recognized tort for decades. That we now have products that cause mental harm is new simply because we didn’t used to have the technology to create those products. But, “products have designs that cause harm” is not a new concept, and neither is “mental harms are tortable harms”.

Furthermore, “every editorial decision” is not now a “design choice”; just the design choices. Providers are–still!–not publishers or speakers of third-party content, and–still!–are not liable for moderation. Nothing in these lawsuits can be reasonably construed to impact decisions to publish–or not–specific content, which is all 230 protects. These lawsuits are, fully, not about the content, any more than California’s ban on Amazon’s dark patterns is a ban on having a web store. These lawsuits are fundamentally not about speech, because the problem is not the speech, but the system around the speech.

That some people might benefit from social media doesn’t negate the harm done to other people, nor make the company not liable for the harm it causes. No matter how many people found joy and friendship playing lawn darts with their friends, that doesn’t resurrect the kids who died, or replace the eyes that were lost. “Someone who was not harmed by lawn darts” would never be invited to a lawsuit about someone who was harmed by lawn darts; that just doesn’t make sense.

I’ve come down pretty hard, here, like I’m fully in favor of these lawsuits. While I definitely believe the nature of these social media sites is specifically designed to be harmful, and we do need a way to address that, ehhhhh, the plaintiffs in these cases made some pretty bad arguments. “Encryption is harmful”, well, guess what, lack of encryption is more harmful! We absolutely can’t be saying that companies are damned if they do, damned if they don’t, and we definitely don’t want to be restricting encryption. As rightly pointed out by the author, mental harms are complex, multifaceted, and it’s difficult to determine a reliable causality; I don’t know enough about the people in question to speak on the analysis that happened here, but it probably wasn’t sufficient. But, that doesn’t mean that such an analysis is impossible, and being on social media for 16 hours a day is certainly a compelling starting point.

So, more broadly speaking, what should we do about it? I don’t know! There’s a needle that needs to be threaded, and I’m not the one to thread it. The big algorithmic social media sites are really bad and I love every cut that someone gets against them, but there were certainly arguments being made on the plaintiff’s side (encryption? Come on!) that were pure BS and bad for everyone.

All that being said, one thing we absolutely must not do is misrepresent the actual harm and problems caused by the systems these companies created, and we need some kind of law or regulation to end it and make them liable for it. Hell, a basic goddamn privacy law would probably get us most of the way there on its own just by cutting down on the fodder that goes into their algorithms. Good luck to us all on that.

realitymonster (profile) says:

Re:

Furthermore, “every editorial decision” is not now a “design choice”; just the design choices. Providers are–still!–not publishers or speakers of third-party content, and–still!–are not liable for moderation. Nothing in these lawsuits can be reasonably construed to impact decisions to publish–or not–specific content, which is all 230 protects. These lawsuits are, fully, not about the content, any more than California’s ban on Amazon’s dark patterns is a ban on having a web store. These lawsuits are fundamentally not about speech, because the problem is not the speech, but the system around the speech.

Thanks for this–this was pretty much exactly what I was thinking when I saw this article.

Facebook et al are not necessarily taking a neutral stance on how content gets to you. Mastodon has no algorithm, for instance. You follow what you follow. If you search for something, you might find it. It offers you nothing without a specific request.

The algorithmically driven systems proffer content to you and measure how long you linger on it, not just whether you explicitly like it or not. The fact that you initially made the choice to BE on the site might seem like it should be more of a factor, but a) the sites have changed dramatically in the last few years; and b) a lot of the people being harmed arguably could not have made any sort of informed decision. Instagram used to be a photo sharing site where you and your friends follow each other. That functionality still technically exists, but it is mostly not that at all anymore.

In any case, I’m not a fan of this take; it tries to pretend that Facebook was just sitting around with a bunch of content in a back room and people were seeking it out, rather than Facebook making specific decisions to show certain things to certain people under certain circumstances. That’s not a neutral publisher.

This comment has been deemed insightful by the community.
Arianity (profile) says:

Re:

Instagram’s targeted recommendation and addiction algorithm dark patterns are not the same thing as “newest first”.

TD has a bad habit of word games around the term algorithm, using the fact that they’re both technically algorithms to imply they can’t be distinguished. It’s a word game meant to take advantage of the fact that people use “algorithm” as colloquial shorthand for certain types of content curation, even though technically stuff like chronological sorting is an ‘algorithm’.

realitymonster (profile) says:

Re: Re: Re:2

This is still a consistent problem here. An algorithm is, of course, just a series of steps that you take to make a decision.

But to say that all algorithms are the same is like trying to make the claim that all mammals are the same, or all vertebrates are the same, or all LIFE is the same.

Depending on the context, it’s absurd and misleading to make statements where ‘newest first’ and ‘deep knowledge about a user, showing content that makes them mentally ill’ are morally or technically equivalent. They’re not.

TD regularly takes the effective position that all algorithms are fundamentally the same, so there’s no point in trying to disambiguate them or govern their outcomes. It’s absurd on its face.

Arianity (profile) says:

Re: Re: Re:

It’s showing an example of the difference between how lawyers think and how the general public thinks.

The part I find frustrating is that it usually doesn’t actually show an actual difference. People use it as a shorthand, but if you stop them and ask, they can (~more or less) explain the difference. There’s a shared understanding of what people mean by it, it’s just implicit. A way you can tell is that when people talk about e.g. Twitter’s “algorithm”, no one ever gets confused thinking it’s about chronological.

Now, if it’s specifically a discussion where the distinction matters, like legislative language or whatever, that’s one thing.

It’s doubly frustrating, because there really are “common sense” misconceptions laypeople have that need to be called out, but more often than not it’s just a cheap gotcha to dismiss someone’s substantive points.

Nemo_bis (profile) says:

Re: Can we find content-neutral design choices?

Thanks Mike for spelling out the implications. I’m a regular reader of Eric Goldman’s blog, but still I had not fully grasped some of his points until now.

I agree that the “design choices” argument can be abused. I have no insight on how likely that is to happen.

I came to comment on a couple arguments in Mike’s post which I found less convincing. Azuaron and Arianity have already pointed them out.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. […]
The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

This is an interesting argument but I think you have overplayed it a bit.

First of all, in the hypothetical case of a website literally devoted to paint drying videos, it would be an example of how this new legal standard actually works quite well. The editorial decision of the website is to only post paint drying videos; nobody is going to go after that. Their design choices, such as a recommendation algorithm that only promotes the most addictive paint drying videos, could be questioned. Would that affect their editorial decisions? There’s no reason to believe it would, importantly because, as you say, probably nobody is going to sue.

But of course that was only an extreme example. The argument is that design choices and content choices are inextricably linked and that every design choice is also a moderation decision, just like the addictive design (a) requires the content to be posted and (b) affects what kind of content people are shown.

On (a), I think it’s irrelevant. Sure, users have their role in contributing to the addictiveness by doing exactly what they are expected to do, but this does not move the responsibility. This sort of argument works for defamation or copyright infringement because you can shift the responsibility to the person who actually published the problematic content. But who else should be sued here instead? Should plaintiffs instead file a lawsuit against the millions of authors of addictive posts? Of course not, that doesn’t make sense.

On (b), it is indeed easy to find examples where a “design choice” is effectively a decision to surface specific content over other. Facebook in the past 10 years has repeatedly and very publicly “tweaked” their home feeds in order to increase or decrease the amount of “news”, “political content”, “personal updates” or other such categories, simply by using some parameters that on the face of it could appear content-neutral (such as whether a post comes from a “friend” or someone further apart in the social graph, or how many interactions they got).

Does this mean that every design choice is going to be like that? Or is this “a slippery slope argument with no evidence that such a slope exists” like Azuaron says? I think it’s possible to find design choices which are clearly content neutral. Innocuous design choices, such as the color palette of the website, are likely not to affect the content. Some very impactful design choices, such as some sort of parental control where some features of Instagram become unavailable after e.g. 1h of continuous use in the day (and maybe you can only look at DMs), can probably still be content neutral. (That doesn’t mean that they’re necessarily good.)

And vice versa, is there a risk that any content-specific objection to moderation or publishing decisions can be smuggled in under the guise of a design choice argument? That, I think, is the main worry here.

Finally, even if we do find a way to identify “bad” design choices which are subject to litigation and “ok” design choices which should not be, the question remains how we make sure that the lawsuits end up following such a “good” standard.

Eric Goldman and Mike argue that they won’t; it will be impossible, and there will be a deluge of lawsuits and bad rulings.

Others say it will be fine, only the most egregious violators like Facebook/Snapchat/TikTok will actually get dragged to court because look how badly they had to screw up before finally some judge allowed this sort of case to go forward.

I say, it’s going to be somewhere in between and it will be a mess, so you will need regulation (probably legislation) to clear up the mess. Congress will probably end up doing something if this blows up for big and small actors alike. (Admittedly, Congressional inaction or even misguided action is more likely if such lawsuits only end up hurting the small guys, just as we see in copyright law.)

Or in other words, you might end up with something not too different from the EU’s Digital Services Act, which is a mess and definitely has some component of content-specific censorship risk, but also has some good elements. A legislator can more easily draw the line and say that only companies above a certain amount of revenue are going to be subject to some regulation, as the EU did in the DSA.

California is allegedly considering adopting some ideas from the DSA and DMA (Digital Markets Act), if you believe in reading tea leaves from the meeting between Gavin Newsom and Teresa Ribera. Some experimentation at the state level may allow a more orderly result than playing the lottery at the courtroom in the hope SCOTUS mops up the eventual mess.

Anonymous Coward says:

Re:

“ The people and companies who make products have always been responsible for the designs of their products when those designs cause harm, from the lawn dart to the Pinto”

Given this standard, why would newest first be considered different, or exempt from liability if it shows you harmful content, or if the user is addicted to refreshing to see the latest posts?

BootsTheory (profile) says:

Not to step on any editorial toes, but I think you could stand to move your hyperlink to the quoted article over to this line:

As Eric Goldman pointed out in his response to the verdicts:

Where it’s been placed is honestly bizarre to me. If I hadn’t checked every link I would have thought you hadn’t linked directly to the (extensively-cited) post at all.

Eurydice (profile) says:

I’m 35 and started using social media when I was 15. I’ve also got a host of mental illnesses. I don’t think they are related. Like most people, I have been affected by media I saw, but blaming it for my trauma-related problems seems insane to me.

Mike, you know we totally disagree about AI, but you’re absolutely right about this. Therapy-talk by nonexperts has really screwed us. Did any therapists, psychologists, or other neuro-psych professionals testify in this trial? I have a feeling not… at least not ones that don’t stand to gain from it. I have a lot of addicts in my family, and I find it super insulting that these lawyers compare social media addiction to opioid addiction. I’m sorry, are there people out there right now convulsing and throwing up or even dying because they didn’t use TikTok today and their body can’t handle the stress? I don’t f*cking think so! It’s just like when people say “porn addiction is a real problem”… no, it’s not. It’s an addiction in the way that gaming is an addiction, that is, behaviorally. It’s a habit and coping mechanism, but it’s not a physical, life-threatening addiction like substance addiction usually is. What an absolute demeaning of people who really struggle with substance abuse.

Are there people out there getting physically ill because they didn’t gamble or play Fortnite today? No. What a joke. Not every bad habit is an addiction, and if we’re going to sue people for lives being ruined by bad actors adding things to make their products addictive, where the hell is the class action lawsuit against food companies for their underhanded promotion of sugar? At least that actually gives you heart disease and diabetes.

Anonymous Coward says:

I’d like to point out that a number of these features (notifications, infinite scrolling) are just features that any modern site might have.

Notifications are handy, because otherwise someone has to go to a site to check if there is anything new.

Infinite scrolling is useful, especially on mobile, as otherwise someone might have to wait for posts to load in. Keep in mind a website isn’t just designed for someone living in the city with top notch internet.

Infinite scrolling is present in more modern forum systems, Mastodon, Reddit, and so on. These are not Facebook scale platforms, these are the little guys (by comparison). I’m not even saying Reddit is great, their aggressive moderation practices hit legitimate users.

One of the problems with how these people approach these things is that they treat any feature which a user might find convenient as if it were part of a conspiracy. There are things I don’t like about these platforms, but the idea of bureaucrats or judges getting involved here should scare the hell out of you.

Asa says:

Nonsense

1) The plaintiff had to prove the company knew a specific feature was causing harm and concealed it.

2) A small blog or a forum doesn’t have internal “dopamine labs” or memos about “winning big with tweens.”

3) The bar for winning a design defect case is so high that it naturally protects small players who aren’t weaponizing psychology.

Don’t have a dopamine lab and build features designed to maximize on-device time with carefully dispensed dopamine hits, then attempt to conceal that, and you won’t have a problem.

Arianity (profile) says:

The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

That two things need to go together does not imply there is no distinction between those things. For instance, does the “addictive design” do something if the underlying UGC is replaced with first party content?

All of those are “design choices” that could, theoretically, be blamed for some downstream harm.

Not really. Something like sorting by newest first has no intent to it. There’s a meaningful distinction there. It is, essentially, content-neutral. You can’t really make an argument for blaming it for downstream harm. For instance, let’s say an algorithm preferentially boosts defamatory content. Is all the harm downstream, because it’s all user-generated? It’s clearly expressive in some way that chronological is not. Under your theory about violative action, it contributes to a violative action.

The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

There’s a really tricky distinction there, that you’re kind of gliding over. The “but-for” stuff is more complex than I think people want it to be, and doesn’t square with stuff like Roommates, or what Wyden has said about AI content not being 230 protected. (To be clear, I think you’re getting to mostly the right result, but with the wrong reasoning.)

But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,

The problem is this opens up incentives in the other direction, when they did know and did it anyway. And Facebook has its history of exactly those bad-faith deliberations.

Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again.

By this logic, any situation where companies could face discovery would need immunity. These sorts of “smoking guns” happen all the time in trials, and not just tech or social media ones. These companies, including Facebook and Google, do still in fact raise concerns. Hell, I’m pretty sure you’ve written articles about it happening… to literally Facebook.

that platforms are liable for the “design” of how they present the third-party content that is their entire reason for existing — will not stay confined to companies you don’t like.

You might find that people are ok with that, even for companies they like, just as they do outside of 230 contexts. There’s a reason I’ve been banging this drum. If you insist on full immunity or bust, there’s a good chance people (especially a jury) are going to pick bust. Bad defendants make bad law, but you make it an exponentially harder lift when you try to give bad defendants unconditional civil protection. There’s no release valve, and if you tie yourself to the anchor that is Meta, you’re going to sink with it.

That said, this looks likely to end up closer to Cubby than Stratton Oakmont? Stratton Oakmont imposed strict liability. This doesn’t seem to be that. (Although this doesn’t clearly delineate a line or standard, which is uh… not good.) And you can’t even dodge this like you can with moderation, by not moderating, either.

“don’t build anything that makes user-generated content engaging, discoverable, or easy to access, because if someone is harmed by that content, the way you presented it makes you liable.”

This would kill Google and Facebook. You said Meta can absorb tens of billions. At $375m/case, that’s approximately 267 cases, assuming the penalty never goes up, not counting legal costs, etc. That’s not endless.

Nemo_bis (profile) says:

Re: On the need of immunity

By this logic, any situation where companies could face discovery would need immunity. These sorts of “smoking guns” happen all the time in trials, and not just tech or social media ones.

They happen all the time, but the extent of the risk is also limited because not so many people have standing to bring a lawsuit, and it’s relatively easy to draw a line on who has standing. With social media, practically anyone online can claim to have standing to bring lawsuits. Often they are right, but there’s a need to draw a line somewhere to avoid frivolous litigation, and it’s going to be hard.

Anonymous Coward says:

Re:

“ Not really. Something like sorting by newest first has no intent to it. There’s a meaningful distinction there. It is, essentially, content-neutral. You can’t really make an argument for blaming it for downstream harm.”

This is a good argument but unfortunately I think the issue is that the cases do not make that distinction. At best, we could hope Congress writes this distinction into law and makes a carve out for some standardized/non-algorithmic filtering methods. But that will either take too long or not get done at all, and until then the precedent seems to not make that distinction.

Rick O'Shea (profile) says:

long tail

The litigious nature of Americans is well-known.
It’s good that the lawyers are salivating over the “big” social-media liability here.
But outside of the USA, distributed social media, like Mastodon and other ActivityPub services, will hardly notice because it’s not worth anyone’s effort to take a site of 50 users to court to glean compensatory damages; that can’t be enforced in the USA anyway.

It’s true that sites like Techdirt will be targeted, but that’s what you get for establishing a company in a lawless regime. You made your bed, now lie in it.

In my humble opinion, Mike waxes eloquent, as a front-man for Bluesky, that the American law of Section 230 is being assaulted, but no other country will notice as this resolves over the long tail of social media.

Anonymous Coward says:

Re:

Nobody is saying platforms can’t use algorithms besides ‘newest first’, just that if they are going to be used, there should be some responsibility taken for their use. They should not be able to hide behind Section 230 and say they are not publishing anything when they are picking and choosing what you see through whatever means. ‘Creating Engagement’ should not provide a pass for other, more negative effects.

Anonymous Coward says:

Re: Re: Re:

The sort algorithm is not the issue, the comparison metric or sort key is.

But that aside, letting the users choose the parameters and weightings would be fine, and even “magic” sorting might be OK if it weren’t combined with the other harmful features.

That said, the switch from chronological to “smart” sorting was one of the first really big steps in destroying the usability of Facebook as a tool to communicate with your friends, and shoving in ever-increasing amounts of “suggested” content instead of the posts from people you’d actually asked for is generally regarded as a bad thing by users. “Revert every change in the last 20 years” would be a major improvement to Facebook.
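To put the “sort key, not sort algorithm” point in concrete terms, here’s a minimal sketch (with hypothetical Post fields and a made-up engagement score, not anyone’s actual ranking code): the same sorted() call produces a content-neutral chronological feed or an engagement-driven one depending entirely on the key it’s given.

    # Illustrative sketch only: hypothetical fields and a made-up
    # engagement score. The sorting step is identical in both feeds;
    # only the key differs.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        timestamp: float             # seconds since epoch
        predicted_engagement: float  # hypothetical model output, 0.0-1.0

    posts = [
        Post("friend_a", 1_700_000_300, 0.10),
        Post("stranger_b", 1_700_000_100, 0.95),
        Post("friend_c", 1_700_000_200, 0.40),
    ]

    # "Newest first": the key looks only at when the post was made,
    # regardless of what it says or how users react to it.
    chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

    # "Smart" feed: the same sorted() call, but the key is a predicted
    # engagement score, which depends on the content and the user, and
    # is where the contested design choices live.
    engagement_ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    print([p.author for p in chronological])      # ['friend_a', 'friend_c', 'stranger_b']
    print([p.author for p in engagement_ranked])  # ['stranger_b', 'friend_c', 'friend_a']

The mechanics are the same either way; the argument in this thread is about whether the second key, and the data pipeline feeding it, is a design choice a platform should answer for.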

Mts. O’Leary says:

Hard Questions

This is a thoughtful and important commentary that should give pause to those who cheered these design-flaw verdicts. But are we really in a situation where there is little or nothing regulators can do to curb the harms caused to people by Meta’s design choices? Perhaps there are design choices associated with harms that might be the subject of liability judgments, along with other choices that cannot be. This is most stark with AI chatbots. When their algorithms support suicidal wishes or provide information on how to kill oneself because they are designed to keep people engaged and approving of users’ suggestions, maybe the platforms should be held responsible for deaths they seem to have contributed to. Are there similar demands that can or should be placed on designers like Meta – e.g., should the ability to continue scrolling be cut off for, say, ten minutes after an hour of doing this? Is the harm prevented worth the inconvenience to many more people? Or are there psychological tricks that sites use to keep people engaged that are intentional design choices which should open a company to liability? I don’t know enough to suggest answers, but I expect some people do. Maybe an expert group should be convened to draw lines somewhere between everything being protected and nothing being protected.

Nadeem says:

Broken argument

I think there is a difference between getting in trouble because of what people post on your service and what posts you algorithmically serve to people to keep them on your platform. Creating that sort of feedback loop and the resulting damage is unquestionably the responsibility of the platform.
Since the platforms strive to maintain engagement, they need to be responsible for the type of engagement they foster.

Greg Caruso (user link) says:

Meta Judgement

The design of websites, games, and gambling sites can be addictive. There are many tricks, traps and gambits that many people fall into. (You can Google or AI them.) As a society, we should question unleashing them on unprepared adults, but unleashing them on children is clearly not fair.

Yes, if the content sucked the trap would not work. (So a bear-trap is not a bear-trap when there is no bait?) But that is a different case. The content is good, and the trap is very effective.

Mas(nick) Appeal says:

Meta's Jury Loss

At least one or two large issues are underplayed by Masnick’s piece:

Meta is not just neutrally providing third party content. It is actively selecting which third party content to provide to a user, without regard to harm, in order to increase the user’s engagement and Meta’s profits. The argument that “all we’re doing is providing a forum and allowing people to do what they want” is somewhat similar to the unsuccessful argument that a bar should not be responsible for serving an intoxicated customer or a problem drinker who is subsequently harmed or harms others. The fact that Meta selects content through an algorithm instead of by a human, such as an editor of Reader’s Digest picking articles to include in an issue, does not change the fact that Meta selects which content to present to the user IN ORDER TO INCREASE ITS OWN PROFITS.

Meta has the technological ability to slow down digestion of harmful content, but chooses not to do so. Masnick says the ruling increases liability for curation, but actually Meta has chosen to curate only for engagement, not to curate to prevent harm. Under a traditional lens of tort liability, defendants who used reasonable efforts to prevent harm are usually much less liable than defendants who ignore potential harms. Curation to prevent harm to users is much different from curation to increase profits, and these different acts should not be lumped under one heading of “curation.”

When Section 230 was created, social media was perceived as a benefit to society. Today, social media products are some of the most destructive parts of society. They have amassed tremendous amounts of data and more wealth than any products ever, and the customers have become increasingly emotionally and intellectually impoverished. It is wrong to apply social theories for the conditions that existed thirty years ago to the problems of today.

Chris Buckridge says:

They're all choices by people...

Thank you for this very insightful, and concerning piece. While the logic feels quite strong throughout, there are aspects that don’t sit comfortably with me, and this sentence particularly crystalized some concerns:

Like infinite scroll and autoplay, it is inert without the choices of bad actors — choices made by people, not by the platform’s design.

I have two issues here (one with each clause). The first is the assumption that infinite scroll and autoplay are inert without the choices of bad actors – in this case, implying those actors are the content creators. However, I’d question whether the danger in those design choices relies on bad actors, or even “bad” content – posting a single video about your concerns about chemtrails, or why you think the Democrats are evil pedophiles, is not “bad” in and of itself. However, a design choice that prioritizes presenting many such videos to a user with the deliberate intention of generating outrage is something that society can more likely agree is bad.

Which leads to my second concern: “choices made by people, not by the platform’s design”. But the platform’s design is a choice made by people (in this case, the people responsible for the site or platform). And if the individuals that post problematic content on a platform are not immune from consequences of their posting, should the platform operators themselves be immune to consequences of their contribution to the space? Section 230 protects them from liability for third party content; site design aspects are not the work of a third party, but of the site/platform operators themselves.

One final reaction to the following:

The relationship between social media use and mental health outcomes is complex, highly individual, and mediated by dozens of confounding factors that researchers are still trying to untangle.

Agreed, the causality is indirect, distributed across multiple factors, and inherently complex. That should not mean that those who own and operate social media platforms [for profit, and constitutionally designed to prioritize profit-making] should get a pass for their role in that ecosystem, particularly when they appear to have deliberately (or, at least, happily) impeded the kind of research that could lead to greater clarity about that role.

Again, very much appreciate the thought provocation!

mmm says:

Impenetrable Legal Forcefield?

I understand the concerns being expressed in the article. As I understand it, the literature on the effect of social media on mental health is somewhat muddy, I do have doubts about the rulings, and the attempts to defang encryption are extremely alarming. That being said, the arguments being made here are so broad that they essentially construct an impenetrable legal forcefield that protects social media companies from any litigation.

I don’t think that any attempt to sue on the basis of design choices must be a Trojan Horse trying to censor speech. There are obviously design choices that would warrant legal consequences. To use an admittedly extreme example, if a company were to design a recommendation algorithm that specifically tries to cause users emotional distress and drive them to suicide, I must imagine that people would have grounds to sue them, even though that algorithm wouldn’t have the capacity to harm people if users only uploaded pictures of paint drying. Addiction is harmful, so if a company delivers content in a manner designed to foster addiction, then surely there are similarly grounds for a lawsuit, even if, technically speaking, that addictive power is contingent on users not only uploading paint drying. There are obviously questions about where the line is drawn between something being addictive and merely engaging that will have to be hashed out in court, but swatting down any lawsuit attacking the design of a social media site in the name of free speech is throwing the baby out with the bathwater.

Then, there are the concerns about frivolous lawsuits and discovery ruining the internet. These concerns would, of course, apply to any other non-tech industry. Food companies make a product that is eaten by loads of people, and they can easily be bombarded with bad-faith lawsuits claiming people got poisoned by the food or whatever. And any internal documents where people discuss the safety of the food could, in theory, be seized and waved around in court to hurt the company. Would anyone be comfortable with giving food companies legal immunity out of fear of them being hurt by too many frivolous lawsuits, or with them backing off of food safety for fear of internal documents being used against them in court? Probably not.

If you can’t sue a company because other people will file frivolous lawsuits, then you can’t ever sue a company. It’s interesting that the article goes on to complain about the use of encryption by bad actors being used as an excuse to take away everyone’s right to strong encryption. More babies, more bathwater.

When a social media company chooses how to deliver content, even content that other people upload, they are making a meaningful design choice, a design choice that can cause harms. There should be an avenue for holding them accountable for any harms those choices may cause.

Dennis F. Heffernan (profile) says:

Put Me Down For Pro-Verdict

I’ve said it here before: social media does not need to be reformed. It needs to be destroyed.

The net effect of letting 90 million people spew whatever nonsense they want with no oversight is the Trump administration. It is far too easy for bad actors to do real damage to our society through social media. Whatever good you may think has come from it is dwarfed by the problems it has created.

I don’t enjoy saying this. I grew up on BBS’s and local chat systems. Been here practically since the beginning. The good guys lost, it’s all turned to garbage.

And no, the irony of using social media to argue for the end of social media is not lost on me.

Dennis F. Heffernan (profile) says:

Freedom isn't free

“Free speech” does not mean “I can say whatever the hell I want and damn the consequences”. Freedom comes with responsibilities and the current model of social media isn’t enforcing those responsibilities.

The tidal wave of nonsense that comes over social media can’t be moderated or even edited to any meaningful degree. The stream of “free speech” shouldn’t be any bigger than the ability to moderate it, but right now it’s orders of magnitude bigger than that.

N.B. I have no objection to anyone (e.g.) starting a blog and posting whatever they want on it. They’ll be responsible for their own content. Under a “no 230” model I’d recommend they moderate their comment section, if any, of course. The problem comes when all the nonsense is focused and made available in one place.

missrao (profile) says:

The California plaintiff, known as KGM, testified that she began using YouTube at age 6 and Instagram at age 9

Letting kids watch selected YouTube videos or channels under time limits is reasonable, but if she claims she was watching for long periods of time with no supervision or guidance, this isn’t a social media problem: This is a parent problem.

And when you’re an adult, it becomes a YOU problem. There’s a difference between an addiction and an unwillingness to make painful changes. Most things that are good for us are hard, unfortunately. Otherwise I’d be super fit!

Paul says:

Social media verdict

“The California plaintiff, known as KGM, testified that she began using YouTube at age 6 and Instagram at age 9,…” The responsibility of users, or in this case, parents, is rarely discussed. A 6-year-old using YouTube? Not OK. We restrict minors from using alcohol, driving cars, owning guns (in rational States), &c. But we give a 6-year-old a cell phone and (un)social media? We’re all nuts!
