David Boies’ Baseless Lawsuit Blames Meta Because Kids Like Instagram Too Much
from the that's-not-how-any-of-this-works dept
In today’s episode of ‘Won’t Someone Think of the Children?!’, celebrity attorney David Boies is leading a baseless charge against Meta, claiming Instagram is inherently harmful to kids. Spoiler alert: it’s not. This one was filed last month and covered in the Washington Post, though without a link to the complaint, because the Washington Post hates you. In response, I won’t link to the Washington Post story, but instead directly to the complaint.
The fact that David Boies is involved should already ring alarm bells. Remember, Boies and his firm, Boies Schiller Flexner, have been involved in messes like running a surveillance operation against Harvey Weinstein’s accusers. He worked with RFK Jr. trying to silence bloggers for their criticism. He was also on Theranos’ board and was part of that company’s campaign to punish whistleblowers.
Indeed, Boies’ string of very icky connections and practices has resulted in many lawyers leaving his firm to avoid the association.
So, I’m sorry, but in general, it’s difficult to believe that a lawsuit like this is anything more than a blatant publicity and money grab when David Boies is involved. He doesn’t exactly have a track record of supporting the little guy.
And looking at the actual complaint does little to dispel that first impression. I’m not going to go through all of this again, but we’ve spent the past few years debunking the false claim that social media is inherently harmful to children. The research simply does not support this claim at all.
Yes, there is some evidence that kids are facing a mental health crisis. However, the actual research from actual experts suggests it’s a combination of factors causing it, none of which really appear to be social media. Part of it may be the lack of spaces for kids to be kids. Part of it may be more awareness of mental health issues, and new guidelines encouraging doctors to look for and report such issues. Part of it may be the fucking times we live in.
Blaming social media is not supported by the data. It’s the coward’s way out. It’s old people screaming at the clouds about the kids these days, without wanting to put in the time and resources to solve actual problems.
It’s totally understandable that parents with children in crisis are concerned. They should be! It’s horrible. But misdiagnosing the problem doesn’t help. It just placates adults without solving the real underlying issues.
But this entire lawsuit is based on this false premise, with some other misunderstandings sprinkled in along the way. While the desire to protect kids is admirable, this misguided lawsuit will only make it harder to address the real issues affecting young people’s mental health.
The lawsuit is bad. It’s laughably, sanctionably bad. It starts out with the typical nonsense moral panic comparing social media to actually addictive substances.
This country universally bans minor access to other addictive products, like tobacco and alcohol, because of the physical and psychological damage such products can inflict. Social media is no different, and Meta’s own documents prove that it knows its products harm children. Nonetheless, Meta has done nothing to improve its social media products or limit their access to young users.
First of all, no, the country does not “universally ban” minor access to all addictive products. Sugar is also somewhat addictive, and we do not ban it. But, more importantly, social media is not a substance. It’s speech. And we don’t ban access to speech. It’s an immediate tell: any time someone compares social media to actual poisons and toxins, you know they’re full of shit.
Second, the “documentation” that everyone uses to claim that Meta “knows its products harm children” consists of the various studies conducted by an internal research team trying to help make the products safer and better for kids.
But because the media (and grandstanding fools like Boies) falsely portray it as “oh they knew about it!”, they are guaranteeing that no internet company will ever study this stuff ever again. The reason to study it was to try to minimize the impact. But the fact that it leads to ridiculously misleading headlines and now lawsuits means that the best thing for companies to do is never try to fix things.
Much of the rest of this is just speculative nonsense about how features like “likes” and notifications are somehow inherently damaging to kids based on feels.
Meta is aware that the developing brains of young users are particularly vulnerable to certain forms of manipulation, and it affirmatively chooses to exploit those vulnerabilities through targeted features such as recommendation algorithms; social comparison functions such as “Likes,” “Live,” and “Reels”; audiovisual and haptic alerts (that recall young users to Instagram, even while at school and late in the night); visual filter features known to promote young users’ body dysmorphia; and content-presentation formats (such as infinite scroll) designed to discourage young users’ attempts to self-regulate and disengage from Instagram.
It amazes me how many of these discussions focus on “infinite scroll” as if it is obviously evil. I’ve yet to see any data that supports that claim. It’s just taken on faith. And, of course, the underlying issue with “infinite scroll” is not the scroll, but the content. If there were no desirable content, no “infinite scroll” would keep people on any platform.
So what they’re really complaining about is “this content is too desirable.”
And that’s not against the law.
Research shows that young people’s use of Meta’s products is associated with depression, anxiety, insomnia, interference with education and daily life, and other negative outcomes. Indeed, Meta’s own internal research demonstrates that use of Instagram results in such harms, and yet it has done nothing to lessen those harms and has failed to issue any meaningful warnings about its products or limit youth access. Instead, Meta has encouraged parents to allow their children to use Meta’s products, publicly contending that banning children from Instagram will cause “social ostracization.”
Again, this is false and misleading. Note that they say “associated” with those things, because no study has shown any causal relationship. The closest they’ve come is that those who are already dealing with depression and anxiety may choose to use social media more often. And that is an issue, and one that should be dealt with. But insisting that social media is inherently harmful to kids won’t help. The actual studies show that for most kids, it’s either neutral or helpful.
Supplying harmful products to children is unlawful in every jurisdiction in this country, under both state and federal law and basic principles of products liability. And yet, that is what Meta does every hour of every day of every year.
This is nonsense. It’s not the product that’s harmful. It’s that there’s a moral panic full of boomers like Boies who don’t understand modern technology and want to blame Facebook for kids not liking them. Over and over again this issue has been studied, and it has been shown that there is no inherent harm from social media. Claiming otherwise is what could do real harm to children: telling them that the thing they rely on every day to socialize with friends and find information is somehow evil and must be stopped.
Indeed, actual researchers have found that the real crisis for teens these days is the lack of social spaces where kids can be kids. Removing social media from those kids would only make that problem worse.
So, instead, we have a lawsuit backed by some of the most famous lawyers on the planet, pushing a nonsense, conspiracy-theory-laden moral panic. They argue that because kids like Instagram, Meta must be punished.
There’s a lot more in the actual lawsuit, but it only gets dumber.
If this lawsuit succeeds, it will make basically any popular app that kids like fair game. This is a recipe for disaster. We will see tons of lawsuits, and apps aggressively blocking kids from using their services, cutting off tons of kids who would find those services useful and not problematic. It will also cut kids off from ways of communicating with family and friends, as well as from researching information and learning about the world.
Filed Under: addiction, class action, david boies, hazardous materials, infinite scroll, moral panic, protect the children, toxins
Companies: instagram, meta


Comments on “David Boies’ Baseless Lawsuit Blames Meta Because Kids Like Instagram Too Much”
Meta will never be held accountable for TikTok.
This is one of the very, very few times in life when I will actually hand it to Rand Paul:
Re:
Yeah, I hate it when people who are hateful say things that are acceptable, but that’s just reality; nobody’s completely virtuous or completely evil.
But, on that subject, there was a Twitter account that regularly tweeted out the Declaration Of Independence. Because it was in the chunks required by Twitter and people saw parts of it before they realised what it was, they decided it was communist.
https://apnews.com/general-news-united-states-government-45c9fd6838a8450a849d95ff7daefa34
Given that, I have absolutely no doubt that some people would consider infinitely scrolling any of that stuff, or even the Bible, to be harmful because social media doesn’t guarantee you start at the beginning of the thread, and Karens don’t stop to consider context.
Man social media is so dangerous for kids*
*runs over kids I couldn’t see in my small dick enhancement truck as I use my phone while driving.
**chugs beer
If knowledge = liability then ignorance is the only safe position
How to spot a PR stunt lawsuit against social media: Rather than focus on any actual harms, the lawsuit misquotes or ignores what studies have been done and simply asserts that social media is harmful, as though the accusation itself is its own proof.
But because the media (and grandstanding fools like Boies) falsely portray it as “oh they knew about it!”, they are guaranteeing that no internet company will ever study this stuff ever again. The reason to study it was to try to minimize the impact. But the fact that it leads to ridiculously misleading headlines and now lawsuits means that the best thing for companies to do is never try to fix things.
I’d call that ironic, but that would require that those pushing the ‘Social media is the second coming of the anti-christ!’ fearmongering actually care about children in the first place. Rather, given that all they seem to care about is exploiting kids for their own gain, ensuring that online platforms never try to study what is and is not good for their users seems closer to a feature than a bug for that lot.
Re:
This, along with the Third Circuit ruling the other day, really sounds like a roundabout way of leading us back to a pseudo-chaotic early internet.
Here’s to hoping that doesn’t come to pass.
Re: Re:
Oh no, not dial up.
Re: Re: Re:
Worst case scenario, I’ll have no choice but to become an outdoors person, and with no form of social communication over the internet left, I’d even have to talk to people face-to-face.
The horror.
(jokes aside, I hope that scenario stays as only a thought in my mind.)
What do “kids” do more than anything else? Rag on their parents and other “grown-ups.” Naturally, lots of grown-ups want to put a stop to such antics. Hence, places that support this are “bad for the kids.” (From the “Seen but not heard” archives 😉)
I wonder how long it will be before Republican lawmakers blame Facebook, Instagram, TikTok, and Twitter for the school shooting in Georgia today.
Re:
Negative five seconds? I’d be shocked if they haven’t already done so.
Re:
According to Wikipedia:
So I guess Georgia State Patrol learned a good lesson from the Uvalde school shooting.
I don’t see why this is a bad piece of evidence to use. Especially since companies like Meta generally restrict outside access to data, and you keep asking for evidence. This is one of the few ways to get said evidence, until/unless we mandate outside research access.
Obviously it doesn’t mean you should misuse it, but your suggestion of not using it at all is not reasonable, either. Companies releasing that data is going to lead to awkward conversations any time something bad comes out (and this is true even if it isn’t misused; accurately quoting those internal reports gives them an incentive not to do them, which is why they’re so restrictive in the first place). But the solution can’t be to pretend it doesn’t exist; that’s obviously flawed.
If it wasn’t harmful in some way, there’d be nothing to improve. You can’t have it both ways.
That isn’t mutually exclusive with knowing about something. Never mind companies’ imperfect records with implementing changes from studies such as these (Meta has plenty of public ones).
I mean, when you boil it down, that’s essentially what addiction or over-use is, yes.
The reason there’s a focus on things like this is that it’s an example (visible without seeing the back end) showing that companies are tailoring their products to maximize engagement on the platform. Particularly when those companies remove options like a chronological feed.
You can argue whether that’s “evil”, but it’s certainly a foot in the door to establishing some base facts.
Showing causality in social sciences is insanely difficult, even when it’s there. Unless you have the access and control to do something like a difference-in-differences study, it’s really hard to nail down. There’s a difference between looking and not finding it, and those studies not being well suited to answering causality questions to begin with.
Format matters. There’s a reason companies fine-tune their content delivery methods. Yes, you need the content as well, but that just says you need both. It’s not just content. Same reason companies optimize things like slots (or flavoring in nicotine products, or making cigarettes more kid-friendly before that got banned). Yes, the underlying game matters, but those tweaks can make it meaningfully more engaging.
That’s an argument for legislation to change the incentives so they want to try to fix things. This is a policy choice.
We absolutely already do restrict children’s exposure to some speech.
Re:
Because it doesn’t actually exist. This alone negates anything else you follow up with in your screed.
Re: Re:
The article seems to be going further than just the misuse being the problem, hence why I mentioned it. If it had just said that, I wouldn’t have brought it up.
Except it doesn’t, both because I specifically addressed not misusing it and because most of my “screed” isn’t related to that point and is applicable regardless.
The lawsuit can be bad and misuse info that doesn’t show what it claims, while other parts of the article also go too far. They’re not mutually exclusive.
Re: Re: Re:
If someone claims a study proves that social media directly causes distinct harms, but either the study doesn’t actually do that or the claimant refuses to name the distinct harms in clear detail, that isn’t a “misuse” of the study—it’s the knowing and deliberate use of vagueness to whip up fears and emotionally manipulate people.
Re: Re: Re:2
I’m not sure what distinction you’re making there. That’s misusing it? Yes, it’s a knowing and deliberate misuse.
Re:
Functional people don’t argue that deranged hallucinations form a good basis for a lawsuit, but you do you.
Re: Re:
It’s a good thing I’m not arguing that, then.
It can be a bad lawsuit (it is) and the article also goes too far in some places trying to dunk on it. Just because it’s a lawsuit based on deranged hallucinations doesn’t mean we need to circlejerk so hard over it we get sloppy in our arguments.
Re: Re: Re:
No, it doesn’t. Shitty lawsuits based on hallucinations of what the complaining parties wish were facts deserve to be dunked on so thoroughly that anyone who isn’t similarly hallucinatory sees such lawsuits for what they are: desperate cashgrabs aimed at deep pockets.
Re: Re: Re:2
Why not, specifically?
Absolutely. My issue is not with the dunking itself (this lawsuit is fucking bad). Dunk away. But I think it would be better if it was done while being accurate.
Saying stuff like “And we don’t ban access to speech” when dunking on them, when it’s wrong (and obviously, factually so), doesn’t seem like it actually helps the dunking. It just hurts it. Not only do we ban access to speech, some laws are specifically targeted at kids. From various types of, say, sexual content/obscenity/indecency laws (USC 1470, CIPA, the law in Ginsberg v. New York, etc). We ban access to speech all the time (whether that’s good or not is another story, but we do do it).
He’s making a valid point (we almost never restrict speech, even for minors), and then goes just a bit too far for that rhetorical flourish. It’s so close, and so easy to avoid.
I want to shit all over people like Boies, but I feel like those sorts of mistakes detract from that. We shouldn’t be making mistakes like that, especially when we’re actively dunking on people like him for being wrong. We’re supposed to be better than people like him.
Re:
Booze, cigarettes, and gambling are not speech, and as for bleeping on TV, that restricts adults’ exposure to speech they should have the right to hear if they want to. Is that seriously the argument you want to make? “Well, restricting access for adults is perfectly A-OK if it protects the kids.”
Re: Re:
I didn’t say they were, and I specifically focused on speech.
I didn’t say it was ok (nor was Mike arguing whether it was good or bad), I said we do it. Regardless of whether you think it’s a good idea or not, we do currently ban access to speech. It is currently enforced law. And has gone through SCOTUS multiple times, as well. It’s not a temporary thing just waiting to be overturned.
We do more than just bleeping on TV, as well. That’s just one example. Look at stuff like USC 1470, CIPA, the law in Ginsberg v. New York, etc. Obscenity/indecency laws are one of the few common ways we actually do restrict speech, particularly towards kids. And we do it a lot.
You’re literally agreeing that we do in fact ban access to speech.
Re: Re: Re:
And in the UK, they restrict access to certain speech without restricting such access for adults. Search ‘TV watershed’ some time.
Boies has been at this game for years!
Don’t forget that for a time Boies was the lead attorney when Caldera/new-SCO was suing world+dog for supposed copyright and patent infringement in Linux, trying to revive USL’s old ‘mental contamination’ theory.
1) “Using Social Media is harmful! People who are depressed or isolated should not be allowed to use it!”
2) I, for one, find that news depressing.
3) “Then you may not use Social Media!”
4) I feel isolated.
5) (go to 2)
Restrict use of social media to those who show signs of depression but allow the consumption of alcohol because reasons.
“Any time someone compares social media to actual poisons and toxins, you know they’re full of shit.”
Thus comparing comparisons to actual poisons and toxins.