David Boies’ Baseless Lawsuit Blames Meta Because Kids Like Instagram Too Much

from the that's-not-how-any-of-this-works dept

In today’s episode of ‘Won’t Someone Think of the Children?!’, celebrity attorney David Boies is leading a baseless charge against Meta, claiming Instagram is inherently harmful to kids. Spoiler alert: it’s not. This one was filed last month and covered in the Washington Post, though without a link to the complaint, because the Washington Post hates you. In return, I won’t link to the Washington Post story, but directly to the complaint.

The fact that David Boies is involved should already ring alarm bells. Remember, Boies and his firm, Boies Schiller Flexner, have been involved in messes like running a surveillance operation against Harvey Weinstein’s accusers. He worked with RFK Jr. to try to silence bloggers over their criticism. He was also on Theranos’ board and was part of that company’s campaign to punish whistleblowers.

Indeed, Boies’ string of very icky connections and practices has resulted in many lawyers leaving his firm to avoid the association.

So, I’m sorry, but in general, it’s difficult to believe that a lawsuit like this is anything more than a blatant publicity and money grab when David Boies is involved. He doesn’t exactly have a track record of supporting the little guy.

And looking at the actual complaint does little to dispel that first impression. I’m not going to go through all of this again, but we’ve spent the past few years debunking the false claim that social media is inherently harmful to children. The research simply does not support this claim at all.

Yes, there is some evidence that kids are facing a mental health crisis. However, the actual research from actual experts suggests it’s a combination of factors causing it, none of which really appear to be social media. Part of it may be the lack of spaces for kids to be kids. Part of it may be more awareness of mental health issues, and new guidelines encouraging doctors to look for and report such issues. Part of it may be the fucking times we live in.

Blaming social media is not supported by the data. It’s the coward’s way out. It’s old people screaming at the clouds about the kids these days, without wanting to put in the time and resources to solve actual problems.

It’s totally understandable that parents with children in crisis are concerned. They should be! It’s horrible. But misdiagnosing the problem doesn’t help. It just placates adults without solving the real underlying issues.

But this entire lawsuit is based on this false premise, with some other misunderstandings sprinkled in along the way. While the desire to protect kids is admirable, this misguided lawsuit will only make it harder to address the real issues affecting young people’s mental health.

The lawsuit is bad. It’s laughably, sanctionably bad. It starts out with the typical nonsense moral panic comparing social media to actually addictive substances.

This country universally bans minor access to other addictive products, like tobacco and alcohol, because of the physical and psychological damage such products can inflict. Social media is no different, and Meta’s own documents prove that it knows its products harm children. Nonetheless, Meta has done nothing to improve its social media products or limit their access to young users.

First of all, no, the country does not “universally ban” minor access to all addictive products. Sugar is also somewhat addictive, and we do not ban it. But, more importantly, social media is not a substance. It’s speech. And we don’t ban access to speech. It’s an immediate tell: any time someone compares social media to actual poisons and toxins, you know they’re full of shit.

Second, the “documentation” that everyone uses to claim that Meta “knows its products harm children” is the various studies produced by an internal research team trying to help make the products safer and better for kids.

But because the media (and grandstanding fools like Boies) falsely portray it as “oh they knew about it!”, they are guaranteeing that no internet company will ever study this stuff ever again. The reason to study it was to try to minimize the impact. But the fact that it leads to ridiculously misleading headlines and now lawsuits means that the best thing for companies to do is never try to fix things.

Much of the rest of this is just speculative nonsense about how features like “likes” and notifications are somehow inherently damaging to kids based on feels.

Meta is aware that the developing brains of young users are particularly vulnerable to certain forms of manipulation, and it affirmatively chooses to exploit those vulnerabilities through targeted features such as recommendation algorithms; social comparison functions such as “Likes,” “Live,” and “Reels”; audiovisual and haptic alerts (that recall young users to Instagram, even while at school and late in the night); visual filter features known to promote young users’ body dysmorphia; and content-presentation formats (such as infinite scroll) designed to discourage young users’ attempts to self-regulate and disengage from Instagram.

It amazes me how many of these discussions focus on “infinite scroll” as if it is obviously evil. I’ve yet to see any data that supports that claim. It’s just taken on faith. And, of course, the underlying issue with “infinite scroll” is not the scroll, but the content. If there were no desirable content, no “infinite scroll” would keep people on any platform.

So what they’re really complaining about is “this content is too desirable.”

And that’s not against the law.

Research shows that young people’s use of Meta’s products is associated with depression, anxiety, insomnia, interference with education and daily life, and other negative outcomes. Indeed, Meta’s own internal research demonstrates that use of Instagram results in such harms, and yet it has done nothing to lessen those harms and has failed to issue any meaningful warnings about its products or limit youth access. Instead, Meta has encouraged parents to allow their children to use Meta’s products, publicly contending that banning children from Instagram will cause “social ostracization.”

Again, this is false and misleading. Note that they say “associated” with those things, because no study has shown any causal relationship. The closest they’ve come is that those who are already dealing with depression and anxiety may choose to use social media more often. And that is an issue, and one that should be dealt with. But insisting that social media is inherently harmful to kids won’t help. The actual studies show that for most kids, it’s either neutral or helpful.

Supplying harmful products to children is unlawful in every jurisdiction in this country, under both state and federal law and basic principles of products liability. And yet, that is what Meta does every hour of every day of every year.

This is nonsense. It’s not the product that’s harmful. It’s that there’s a moral panic full of boomers like Boies who don’t understand modern technology and want to blame Facebook for kids not liking them. Over and over again this issue has been studied, and it has been shown that there is no inherent harm from social media. Claiming otherwise is what could do real harm to children, by telling them that the thing they rely on every day to socialize with friends and find information is somehow evil and must be stopped.

Indeed, actual researchers have found that the real crisis for teens these days is the lack of social spaces where kids can be kids. Removing social media from those kids would only make that problem worse.

So, instead, we have a lawsuit backed by some of the most famous lawyers on the planet, pushing a nonsense, conspiracy-theory-laden moral panic. They argue that because kids like Instagram, Meta must be punished.

There’s a lot more in the actual lawsuit, but it only gets dumber.

If this lawsuit succeeds, it will be open season on basically any popular app that kids like. This is a recipe for disaster. We will see a flood of lawsuits, and apps aggressively blocking kids from using their services, cutting off tons of kids who would find those services useful and not problematic. It will also cut kids off from ways of communicating with family and friends, as well as from researching information and learning about the world.

Companies: instagram, meta


Comments on “David Boies’ Baseless Lawsuit Blames Meta Because Kids Like Instagram Too Much”

This comment has been deemed insightful by the community.
blakestacey (profile) says:

This is one of the very, very few times in life when I will actually hand it to Rand Paul:

For example, if an online service uses infinite scrolling to promote Shakespeare’s works, or algebra problems, or the history of the Roman Empire, would any lawmaker consider that harmful?

PaulT (profile) says:

Re:

Yeah, I hate it when people who are hateful say things that are acceptable, but that’s just reality, nobody’s completely virtuous or completely evil.

But, on that subject, there was a Twitter account that regularly tweeted out the Declaration Of Independence. Because it was in the chunks required by Twitter and people saw parts of it before they realised what it was, they decided it was communist.

https://apnews.com/general-news-united-states-government-45c9fd6838a8450a849d95ff7daefa34

Given that, I have absolutely no doubt that some people would consider infinitely scrolling any of that stuff, or even the Bible, to be harmful because social media doesn’t guarantee you start at the beginning of the thread, and Karens don’t stop to consider context.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

If knowledge = liability then ignorance is the only safe position

How to spot a PR stunt lawsuit against social media: rather than focus on any actual harms, the lawsuit misquotes or ignores what studies have been done and simply asserts that social media is harmful, as though the accusation itself is its own proof.

But because the media (and grandstanding fools like Boies) falsely portray it as “oh they knew about it!”, they are guaranteeing that no internet company will ever study this stuff ever again. The reason to study it was to try to minimize the impact. But the fact that it leads to ridiculously misleading headlines and now lawsuits means that the best thing for companies to do is never try to fix things.

I’d call that ironic, but that would require that those pushing the ‘Social media is the second coming of the anti-christ!’ fearmongering actually care about children in the first place. Rather, given that all they seem to care about is exploiting kids for their own gain, ensuring that online platforms never try to study what is and is not good for their users seems closer to a feature than a bug for that lot.


This comment has been flagged by the community.

Arianity says:

Second, the “documentation” that everyone uses to claim that Meta “knows its products harm children” is the various studies produced by an internal research team

I don’t see why this is a bad piece of evidence to use. Especially since companies like Meta generally restrict outside access to data, and you keep asking for evidence. This is one of the few ways to get said evidence, until/unless we mandate outside research access.

Obviously it doesn’t mean you should misuse it, but your suggestion of not using it at all is not reasonable, either. Companies releasing that data is going to lead to awkward conversations any time something bad comes out (and this is true even if it isn’t misused: accurately quoting those internal reports gives them an incentive not to do them, which is why they’re so restrictive in the first place). But the solution can’t be to pretend it doesn’t exist; that’s obviously flawed.

trying to help make the products safer and better for kids.

If it wasn’t harmful in some way, there’d be nothing to improve. You can’t have it both ways.

The reason to study it was to try to minimize the impact

That isn’t mutually exclusive with knowing about something. Never mind companies’ imperfect records of implementing changes from studies such as these (Meta has plenty of public ones).

So what they’re really complaining about is “this content is too desirable.”

I mean, when you boil it down, that’s essentially what addiction or over-use is, yes.

It amazes me how many of these discussions focus on “infinite scroll” as if it is obviously evil. I’ve yet to see any data that supports that claim. It’s just taken on faith.

The reason there’s a focus on things like this is that it’s an example (one that is visible without seeing the back end) showing that companies are tailoring their products to maximize engagement on the platform. Particularly when those companies remove alternatives like a chronological feed.

You can argue whether that’s “evil”, but it’s certainly a foot in the door to establishing some base facts.

Note that they say “associated” with those things, because no study has shown any causal relationship.

Showing causality in social sciences is insanely difficult, even when it’s there. Unless you have the access and control to do something like diff in diff, it’s really hard to nail down. There’s a difference between looking and not finding it, and those studies not being well suited to answering causality to begin with.
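
(For anyone unfamiliar, here's a rough sketch of what "diff in diff" looks like, in Python, with entirely made-up numbers; the groups and values below are hypothetical, not from any real study. The estimate is just the before/after change in a group exposed to the thing you're studying, minus the before/after change in a comparison group, which soaks up whatever trend would have happened anyway:)

# Toy difference-in-differences sketch. All numbers are invented.
# Pretend these are mean survey scores before/after a platform rollout,
# in a region that got the rollout vs. one that didn't.
treated = {"pre": 5.0, "post": 6.1}   # hypothetical rollout region
control = {"pre": 5.2, "post": 5.9}   # hypothetical comparison region

def diff_in_diff(treated, control):
    # Change in the treated group minus change in the control group.
    # The control group's change stands in for the trend that would
    # have happened anyway (the "parallel trends" assumption).
    return (treated["post"] - treated["pre"]) - (control["post"] - control["pre"])

print(round(diff_in_diff(treated, control), 2))  # (6.1-5.0) - (5.9-5.2) = 0.4

The catch is that you need that kind of before/after data plus a credible comparison group, which is exactly the access and control platforms rarely give outside researchers.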

And, of course, the underlying issue with “infinite scroll” is not the scroll, but the content.

Format matters. There’s a reason companies experiment with different content delivery methods. Yes, you need the content as well, but that just says you need both. It’s not just content. Same reason companies optimize things like slots (or flavoring in nicotine products, or making cigarettes more kid-friendly before that got banned). Yes, the underlying game matters, but those tweaks can make it meaningfully more engaging.

But the fact that it leads to ridiculously misleading headlines and now lawsuits means that the best thing for companies to do is never try to fix things.

That’s an argument for legislation to change the incentives so they want to try to fix things. This is a policy choice.

But, more importantly, social media is not a substance. It’s speech. And we don’t ban access to speech.

We absolutely already do restrict children’s exposure to some speech.

This comment has been flagged by the community.

Arianity says:

Re: Re:

Because it doesn’t actually exist.

The article seems to be going further than just the misuse being the problem, which is why I mentioned it. If it had just said that, I wouldn’t have brought it up.

This alone negates anything else you follow up with in your screed.

Except it doesn’t, both because I specifically addressed not misusing it, and because most of my “screed” isn’t related to that point and is applicable regardless.

The lawsuit can be bad and can be misusing info that doesn’t show what it claims, while there are also other parts of the article that go too far. They’re not mutually exclusive.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

The article seems to be going further than just the misuse being the problem

If someone claims a study proves that social media directly causes distinct harms, but either the study doesn’t actually do that or the claimant refuses to name the distinct harms in clear detail, that isn’t a “misuse” of the study—it’s the knowing and deliberate use of vagueness to whip up fears and emotionally manipulate people.

Arianity says:

Re: Re: Re:2

If someone claims a study proves that social media directly causes distinct harms, but either the study doesn’t actually do that or the claimant refuses to name the distinct harms in clear detail, that isn’t a “misuse” of the study—it’s the knowing and deliberate use of vagueness to whip up fears and emotionally manipulate people.

I’m not sure what distinction you’re making there. That’s misusing it? Yes, it’s a knowing and deliberate misuse.

This comment has been flagged by the community.

Arianity says:

Re: Re:

Functional people don’t argue that deranged hallucinations form a good basis for a lawsuit,

It’s a good thing I’m not arguing that, then.

It can be a bad lawsuit (it is) and the article also goes too far in some places trying to dunk on it. Just because it’s a lawsuit based on deranged hallucinations doesn’t mean we need to circlejerk so hard over it we get sloppy in our arguments.

Stephen T. Stone (profile) says:

Re: Re: Re:

the article also goes too far in some places trying to dunk on it

No, it doesn’t. Shitty lawsuits based on hallucinations of what the complaining parties wish were facts deserve to be dunked on so thoroughly that anyone who isn’t similarly hallucinatory sees such lawsuits for what they are: desperate cashgrabs aimed at deep pockets.

Arianity says:

Re: Re: Re:2

No, it doesn’t.

Why not, specifically?

Shitty lawsuits based on hallucinations of what the complaining parties wish were facts deserve to be dunked on so thoroughly that anyone who isn’t similarly hallucinatory sees such lawsuits for what they are:

Absolutely. My issue is not with the dunking itself (this lawsuit is fucking bad). Dunk away. But I think it would be better if it was done while being accurate.

Saying stuff like “And we don’t ban access to speech” when dunking on them, when it’s wrong (and obviously, factually so), doesn’t seem like it actually helps the dunking. It just hurts it. Not only do we ban access to speech, some laws are specifically targeted at kids. From various types of, say, sexual content/obscenity/indecency laws (18 USC 1470, CIPA, the law in Ginsberg v New York, etc). We ban access to speech all the time (whether that’s good or not is another story, but we do do it).

He’s making a valid point (we almost never restrict speech, even for minors), and then goes just a bit too far for that rhetorical flourish. It’s so close, and so easy to avoid.

I want to shit all over people like Boies, but I feel like those sorts of mistakes detract from that. We shouldn’t be making mistakes like that, especially when we’re actively dunking on people like him for being wrong. We’re supposed to be better than people like him.

Anonymous Coward says:

Re:

We absolutely already do restrict children’s exposure to some speech.

Booze, cigarettes, and gambling are not speech, and as for bleeping on TV, that restricts adults’ exposure to speech they should have the right to hear if they want to. Is that seriously the argument you want to make? “Well, restricting access for adults is perfectly A-OK if it protects the kids.”

Arianity says:

Re: Re:

Booze, cigarettes, and gambling are not speech,

I didn’t say they were, and I specifically focused on speech.

that restricts adults’ exposure to speech they should have the right to hear if they want to. Is that seriously the argument you want to make? “Well, restricting access for adults is perfectly A-OK if it protects the kids.”

I didn’t say it was ok (nor was Mike arguing whether it was good or bad), I said we do it. Regardless of whether you think it’s a good idea or not, we do currently ban access to speech. It is currently enforced law. And has gone through SCOTUS multiple times, as well. It’s not a temporary thing just waiting to be overturned.

We do more than just bleeping on TV, as well. That’s just one example. Look at stuff like 18 USC 1470, CIPA, the law in Ginsberg v New York, etc. Obscenity/indecency laws are one of the few common ways we actually do restrict speech, particularly towards kids. And we do it a lot.

You’re literally agreeing that we do in fact ban access to speech.
