The Social Media Addiction Verdicts Are Built On A Scientific Premise That Experts Keep Telling Us Is Wrong

from the we-keep-seeing-this-over-and-over dept

Last week, I wrote about why the social media addiction verdicts against Meta and YouTube should worry anyone who cares about the open internet. The short version: plaintiffs’ lawyers found a clever way to recharacterize editorial decisions about third-party content as “product design defects,” effectively gutting Section 230 without anyone having to repeal it. The legal theory will be weaponized against every platform on the internet, not just the ones you hate. And the encryption implications of the New Mexico decision alone should terrify everyone. You can read that post for more details on the legal arguments.

But there’s a separate question lurking underneath the legal one that deserves its own attention: is the scientific premise behind all of this even right? Are these platforms actually causing widespread harm to kids? Is “social media addiction” a real thing that justifies treating Instagram like a pack of Marlboros? We’ve covered versions of this debate in the past, mostly looking at studies. But there are other forms of expert analysis as well.

Long-time Techdirt reader and commenter Leah Abram pointed us to a newsletter from Dr. Katelyn Jetelina and Dr. Jacqueline Nesi that digs into exactly this question with the kind of nuance that’s been almost entirely absent from the mainstream coverage. Jetelina runs the widely read “Your Local Epidemiologist” newsletter, and Nesi is a clinical psychologist and professor at Brown who studies technology’s effects on young people.

And what they’re saying lines up almost perfectly with what we’ve been saying here at Techdirt for years, often to enormous pushback: social media does not appear to be inherently harmful to children. What appears to be true is that there is a small group of kids for whom it’s genuinely problematic. And the interventions that would actually help those kids look nothing like the blanket bans and sweeping product liability lawsuits that politicians and trial lawyers are currently pursuing. And those broad interventions do real harm to many more people, especially those who are directly helped by social media.

Let’s start with the “addiction” question, since that’s the framework on which these verdicts were built. Here’s Nesi:

There is much debate in psychology about whether social media use (or, really, any non-substance-using behavior outside of gambling) can be called an “addiction.” There is no clear neurological or diagnostic criteria, like a blood test, to make this easy, so it’s up for debate:

  • On one hand, some researchers argue that compulsive social media use shares enough features (loss of control, withdrawal-like symptoms, continued use despite harm) to warrant the diagnosis for treatment.
  • Others say the evidence for true neurological dependency is still weak and inconsistent because research relies on self-reported data, findings haven’t been replicated, and many heavy users don’t show true clinical impairment without pre-existing issues.

Her bottom line is measured and careful in a way that you almost never hear from the politicians and lawyers who claim to be acting on behalf of children:

Here’s my current take: There are a small number of people whose social media use is so extreme that it causes significant impairment in their lives, and they are unable to stop using it despite that impairment. And for those people, maybe addiction is the right word.

For the vast majority of people (and kids) using social media, though, I do not think addiction is the right word to use.

That’s a leading expert on technology and adolescent mental health, someone who has personally worked with hospitalized suicidal teenagers, telling you that for the vast majority of kids, “addiction” is the wrong word. And she has a specific, evidence-based reason for why that distinction matters — one that should be of particular interest to anyone who actually wants platforms held accountable for the kids who are being harmed.

Nesi argues that overusing the addiction label doesn’t just lack scientific precision. It actively weakens the case for meaningful platform accountability:

Preserving the precision of the addiction label — reserving it for the small number of kids whose use is genuinely compulsive and impairing — actually strengthens the case for platform accountability, rather than weakening it. It’s that targeted claim that has driven legal action and regulatory pressure. Expanding it to average use shifts focus from systemic design fixes to individual diagnosis, and dilutes the very argument that holds platforms responsible.

This is a vital point that runs counter to the knee-jerk reactions of both the trial lawyers and the moral panic crowd. If you say every kid using social media is an addict, you’ve made the concept of addiction meaningless, and you’ve made it dramatically harder to identify and help the kids who are actually suffering. You’ve also given platforms an easy out: if everyone’s addicted, then it’s just a feature of how humans interact with technology, and nobody is specifically responsible for anything. Precision is what creates accountability. Vagueness destroys it.

We highlighted something similar back in January, when a study published in Nature’s Scientific Reports found that simply priming people to think about their social media use in addiction terms — such as using language from the U.S. Surgeon General’s report — reduced their own perceived control, increased their self-blame, and made them recall more failed attempts to change their behavior. The addiction framing itself was creating a feeling of helplessness that made it harder for people to change their habits. As the researchers in that study put it:

It is impressive that even the two-minute exposure to addiction framing in our research was sufficient to produce a statistically significant negative impact on users. This effect is aligned with past literature showing that merely seeing addiction scales can negatively impact feelings of well-being. Presumably, continued exposure to the broader media narrative around social media addiction has even larger and more profound effects.

So we’re stuck with a situation where the dominant public narrative — “social media is addicting our children” — appears to be both scientifically imprecise and actively counterproductive for the people it claims to help. That’s a real problem. And it would be nice if the moral panic crowd would start to recognize the damage they’re doing.

None of this means there are no risks. Nesi is quite clear about that, drawing on her own clinical work:

A few years ago, I ran a study with adolescents experiencing suicidal thoughts in an inpatient hospital unit. Many of the patients I spoke to had complex histories of abuse, neglect, bullying, poverty, and other major stressors. Some of these patients used social media in totally benign, unremarkable ways. A few of them, though, were served with an endless feed of suicide-related posts and memes, some romanticizing or minimizing suicide. For those patients, it would be very hard to argue that social media did not contribute to their symptoms, even with everything else going on in their lives.

Nobody who has paid serious attention to this issue disputes that. There are kids for whom social media is a contributing factor in genuine mental health crises. The question has always been whether that reality justifies treating social media as an inherently dangerous product that harms all children — the premise on which these lawsuits and legislative bans are built.

The evidence consistently says no. When it comes to whether social media actually causes mental health issues, the newsletter is direct:

The scientific community has substantial correlational evidence and some, but not much, causal evidence of harm. Studies that randomly assigned people to stop using social media show mixed results, depending on how long they stopped, whether they quit entirely or just reduced use, and what they were using it for.

And:

It is still the case that if you take an average, healthy teen and give them social media, this is highly unlikely to create a mental illness.

This is consistent with what we’ve been reporting on for years, including two massive studies covering 125,000 kids that found either a U-shaped relationship (where moderate use was associated with the best outcomes and no use was sometimes worse than heavy use) or flat-out zero causal effect on mental health. Every time serious researchers go looking for the inherent-harm story that politicians keep telling, they come up empty.

One of the most fascinating details in the newsletter is the Costa Rica comparison. Costa Rica ranks #4 in the 2026 World Happiness Report. Its residents use just as much social media as Americans. And yet:

It doesn’t necessarily have fewer mental illnesses. And it certainly doesn’t have less social media use. What it has is a deep social fabric, and that may mean social media use reinforces real-world connections in Costa Rica, whereas in English-speaking countries, it may be replacing them.

In other words, cultural factors appear to be protective. The underlying challenges to social foundations — trust, connection, belonging, and safety — are what drive happiness. Friendships, being known by someone, the sense that you belong somewhere: these are the actual load-bearing pillars of mental health, more predictive of wellbeing than income, and more protective against mental illness than almost any intervention we have.

If social media were inherently harmful — if the “addictive design” of infinite scroll and autoplay and algorithmic recommendations were the core problem — Costa Rica would be suffering the same outcomes as the United States. They have the same platforms, same features, and same engagement mechanics. What actually differs is the strength of the social fabric, not the tools themselves.

This is similar to a point I raised in my review of Jonathan Haidt’s book two years ago. If you look past his cherry-picked data, you can find plenty of countries with high social media use where rates of depression and suicide have gone down. There are clearly many other factors at work here, and little evidence that social media is a key factor at all.

That realization completely changes how we should think about policy. If the problem is weak social foundations — not enough connection, not enough belonging, not enough adults showing up for kids — then banning social media or suing platforms into submission won’t fix it. You’ll have addressed the wrong variable. And in the process, you’ll have made the platforms worse for the many kids (including LGBTQ+ teens in hostile communities, kids with rare diseases, teens in rural areas) who rely on them for the connection and community that their physical environment doesn’t provide.

Nesi’s column has some practical advice that is pretty different from what that best-selling book might tell you:

If you know your teen is vulnerable, perhaps due to existing mental health challenges or social struggles, you may want to be extra careful.

If your teen is using social media in moderation, and it does not seem to be affecting them negatively, it probably isn’t.

That sounds so obvious it feels almost silly to type out. And yet it is the exact opposite of the approach we see in the lawsuits and bans currently dominating the policy landscape, which assume social media is a universally dangerous product requiring universal restrictions.

The newsletter closes with a key line that highlights the nuance that so many people ignore:

Social media may be one piece of the puzzle, but it’s certainly not the whole thing.

We’ve been making this point at Techdirt for a long time now, often in the face of considerable hostility from people who are deeply invested in the simpler narrative. I’ve written about Danah Boyd’s useful framework of understanding the differences between risks and harms, and how moral panics confuse those two things. I’ve covered so many studies that find no causal link that I’ve lost count. I’ve pointed out how the “addiction” framing may be doing more damage than the platforms themselves.

That’s why it’s encouraging to see credentialed, independent researchers — people who work directly with the most vulnerable kids — end up in the same place through their own work. Because this conversation desperately needs more voices willing to acknowledge both realities: that some kids are genuinely harmed and need targeted help, and that the sweeping narrative of universal social media harm is not supported by the science and leads to policy responses that may hurt far more people than they help.

The kids who are in that small, genuinely vulnerable group deserve interventions designed for them — better mental health funding and access along with better identification of at-risk youth. What they don’t deserve is to have their suffering used as a blunt instrument and a prop to reshape the entire internet through lawsuits built on a scientific premise that the actual scientists keep telling us is wrong.



Comments on “The Social Media Addiction Verdicts Are Built On A Scientific Premise That Experts Keep Telling Us Is Wrong”

24 Comments
Anonymous Coward says:

I feel like I’m missing something here. Isn’t 230 about platforms being content neutral?
Like, if it was just a timeline of all the shit posts from people you follow, I agree that should be protected.

What Meta and Alphabet do is curate what you see. They decide what the algorithm surfaces to you.
“Here’s people having fun without you, here’s people angry at something you get angry about, now here is an ad that if you buy it, we promise you’ll feel better.”
That’s not content neutral.

They also literally design their systems to be as addicting as possible. Sure, some are more vulnerable to that, and people do have some level of personal accountability. But this seems, to my read, malicious on the part of Big Tech.

Was every case of lung cancer caused by cigarettes? No, but the tobacco companies still paid through the nose for selling a known addictive product.

Again, maybe I’m misunderstanding, but I don’t have an issue with these verdicts. It’s not “banning” Facebook, it’s saying they have to stop making it intentionally addictive and harmful.

HotHead (profile) says:

Re: Does it make sense to bring up "content neutral"?

What Meta and Alphabet do is curate what you see. They decide what the algorithm surfaces to you.

That’s not content neutral.

The First Amendment restricts the government and government actors, not private actors. With few exceptions, the government must be content neutral. With few exceptions, the government cannot force private actors to be content neutral. In most cases, there is no legal relevance to whether Meta or Alphabet are content neutral in their moderation or other editorial decisions.

“Here’s people having fun without you, here’s people angry at something you get angry about, now here is an ad that if you buy it, we promise you’ll feel better.”

The ad may “promise” something seemingly similar to “you’ll feel better”, but no legally enforceable promise was made or the “you’ll feel better” was actually a carefully worded “you might feel better”. More importantly, Meta and Alphabet are not the ones making the “promise” in the ad.

Arianity (profile) says:

often to enormous pushback: social media does not appear to be inherently harmful to children. What appears to be true is that there is a small group of kids for whom it’s genuinely problematic.

I still feel like you would do yourself a massive favor to use a word like universal. Inherent does not mean the way you’re using it. A “small group of kids for whom it’s genuinely problematic” can still be inherent. They’re not synonyms.

if everyone’s addicted, then it’s just a feature of how humans interact with technology, and nobody is specifically responsible for anything.

I don’t know if that’s true. To use a hated example, basically ~everyone gets addicted to nicotine. There was still liability, especially for companies who deliberately did things like using higher nicotine doses. Same for e.g. hazardous chemicals, etc. Being universal doesn’t mean no responsibility.

That said, it does genuinely muddy the waters to just throw around the term “addicted” loosely, and it would be better if we didn’t. It gets tough though, because we don’t really have words for it, so “addicted” gets used as slang for “uses it a bit too much”

The question has always been whether that reality justifies treating social media as an inherently dangerous product that harms all children — the premise on which these lawsuits and legislative bans are built.

That’s not the only question. The question is also: if it harms only a few children, are lawsuits/bans warranted? We do do bans for things that only affect some people. It does not need to harm all to warrant a blunt response. It makes the bar a heavier lift, but it is not insurmountable.

If the problem is weak social foundations — not enough connection, not enough belonging, not enough adults showing up for kids — then banning social media or suing platforms into submission won’t fix it.

Addressing the cause is better, but addressing the symptom can still be helpful in the short term. There’s nothing wrong with using a bandage to staunch the bleeding before you get to surgery.

That sounds so obvious it feels almost silly to type out.

I think a really important point is, something can be obvious, but that doesn’t mean it’s easy. “have good parents” is really obvious! It’s not easy. Most of our problems with social media could maybe be fixed with attentive parents that allow us to target the affected kids. The problem is, no one seems to really know how to get there, and even ‘good’ parents seem to fail on this front.

Sometimes in society, we resort to blunt tools when targeting would be better. You wouldn’t need a drinking, gambling, or driving age if parents were responsible, either. But they’re not. We also have in-between broad rules, like R-rated movies, or allowing parents to serve alcohol at home.

Ngita (profile) says:

Re:

Some children are harmed, most children are neutral, some children are benefited.

Modern children probably need more exercise. Pretty sure you could find studies about the increase in childhood obesity; if they are forced to exercise, some children will be harmed, i.e. strains and sprains. You could almost say kids are addicted to not exercising.

Should they add laws that make it illegal to make kids exercise?

hmm I think I got my addiction the wrong way around from the law.

But at one end we have physical addiction, and at the other we have habit: “he always reads his morning paper with breakfast.”

Social media addiction is somewhere on that spectrum, but the AI suggests words like compulsion and fixation. But modern propaganda is always: use the worst-sounding word you can and twist its meaning to fit. I guess we are lucky they did not come up with social media nazism.

glenn says:

I blame social media… for providing users with everything they seem to be looking for–based on what they keep looking at. Yes, I blame social media… for being everything that we ask it to be. That’s right: shame on them! It’s all their fault that I can’t just stop and get back to living my own life instead of being invested in what everyone else is doing.

Anonymous Coward says:

We’ve abandoned science and the scientific method in favor of opinion, suspicion, conspiracy, and hate. If I think it’s true, that’s all that matters, and your opinion is misinformed, biased, stupid, irrational, and wrong! Now that I’ve fooled a judge into allowing my opinion to become fact, the only question is how to make a lot of money quickly, before another judge declares it’s nonsense.

Pedestrian Humanology (profile) says:

The Weekly Mike Masnick Pushing the Rope Uphill Story

Unfortunately, the battle is against confirmation bias, so counter information has a very hard time convincing people to actually consider aspects of this logically. Too many people are convinced they “know” that social media is a harm and their imaginations convince them that everyone else agrees with them. And they want something done about it; something, anything. I continue to be surprised by the large number of seemingly normal and thoughtful people I’ve encountered that hold up Haidt as a shining light.

Ngita (profile) says:

Re:

Too many people remember that time they stayed up for an hour scrolling.

Ahh, it was because of social media addiction: totally their fault, not my indifference to the thought that I should stop and go to sleep.

I have stayed up late scrolling. I have also stayed up to watch TV, play games, read books, and even played volleyball on summer nights until it was too late to see the white ball, let alone the other players. Are they all addicting?

Taran Rampersad (user link) says:

addiction vs. consent.

This has been bothering me for a while because addiction itself has never been on my radar as much.

Apparently, there is a reason why.

My main concern has been about privacy related issues and the intention economy, and how it impacts society.

Things that aren’t actually addiction but can be made worse by addictions.

It might be worth looking at those with true addiction and what content they interact most with. The nebulous impact of different types of content might reveal some patterns.

It is good that addiction is not as commonplace.

The intent, though, was pretty well documented in the case in how centralized social media attempts to cultivate it.

That was how the cases were judged successful.

It’s also worth taking a moment for those celebrating the cases to look at other harms, and other things hidden under implicit consent of social media use.

Thanks for the forced cookie, BTW. 🤣

Consent.

Alex Tolley says:

Blaming the victim?

In this 2nd shot at pushing back at the recent legal wins about social media causing harms, Masnick appears (to me) to be framing this as the victim’s (or even society’s) problem.

I note that he accepts gambling is a non-substance addiction. Gambling can be addictive, but the problem is not that this is so; the harms are increased by methods to induce the user to gamble more, whether in a casino by offering alcoholic drinks, or by pestering notifications on gambling platforms and via texts and emails.

Do Facebook and YouTube engage in such actions to increase use? Yes, they do. But worse, as they know “engagement” is increased by stimulating emotions, they use algorithms to increase that engagement. YouTube, as any user is aware, pushes you to try more of what you have just watched. There are innumerable anecdotal examples where YouTube pushes ever darker content at the user if traveling down some “rabbit hole”.

Neither company needs to do this. If they were illegal chemicals, they would be the equivalent of offering ever more “dime bags” of increasingly potent substances. Most people can drink alcohol safely. A fraction cannot, and become addicted. The withdrawal includes not attending gatherings where alcohol consumption is expected, especially in bars and pubs. Governments have also banned the advertising of strong alcohol in media.

I submit that the appropriate way to handle this is to test for two things. Is harm from social media at least partially the cause of a result? Does the platform use algorithms to actively push more “harming content” to the user? This last can be readily tested by the history of content and its position in a feed. If the content, e.g., related to suicide, is increased by the algorithm, then the platform is guilty of “pushing,” and as with dealers pushing dangerous substances, punishment needs to be meted out to the guilty.

How to reduce this algorithmic behavior at scale is an issue. I think the current age verification for content is unacceptably intrusive and not like showing ID in a liquor store or bar. However, increasingly pushing certain content to a user is a problem. Is it beyond the wit of big data platforms to use historical data to determine the danger signs of bad outcomes and have the algorithm back off? Nontrivial, to be sure, but let’s see some analysis of the data from academia to determine if this might be worthwhile pursuing.

But let us not try to say the victim was to blame, or the parents, or society as a whole. While that may even be true, shouldn’t government make laws and offer treatments to try to reduce the problem and provide a safety net where possible?

Anonymous Coward says:

Re:

The goddamned government playing victim is a laugh riot when they’re the most powerful of all. You’re falling prey to the politician’s syllogism: we should do something, this is something, therefore we should do it. That the costs are unreasonable is the whole reason for the pushback. Especially when the benefits are dubious.

Dister (profile) says:

Re:

I submit that the appropriate way to handle this is to test for two things. Is harm from social media at least partially the cause of a result? Does the platform use algorithms to actively push more “harming content” to the user? This last can be readily tested by the history of content and its position in a feed. If the content, e.g., related to suicide, is increased by the algorithm, then the platform is guilty of “pushing,” and as with dealers pushing dangerous substances, punishment needs to be meted out to the guilty.

Exactly. This is in fact what the defective design case was doing. Identify a harm, identify an act (or set of acts), show that the act(s) is the proximal cause of the harm. Specifically, the plaintiff had mental health harms, the defendants (Meta and YouTube) engaged in certain acts related to decisions on how their platforms are designed, and the plaintiff was able to show, with sufficient proof to satisfy a jury, that those acts caused the harms.

BierOnTap says:

The civil litigation avalanche is plain and simple a Section 230 circumvention campaign designed to do to social media platforms in the US what is coming to the platforms in major international markets through regulation (from the EU’s DSA and the UK’s OSA, for example). For years you’ve correctly pointed out the weakness of the argument that social media is generally addictive or generally harmful, including for teens. Those arguments, plus the serious partisan disagreements between D’s and R’s over how they actually want to police social media, have stifled federal legislation. And then the 1st Amendment has blocked many state laws, although less so recently.

So now we have the prospect that civil juries made up of citizens flooded with (incorrect) messages for a decade that social media, and Big Tech in particular, are dangerous and evil, making decisions that will create many billions in liabilities for the companies. Settlement talks are likely coming, with plaintiffs, including State AGs, pushing for substantial changes to how the platforms operate. They will likely ask for the very things that will be imposed by regulators in places like the EU and UK, and in the state laws being blocked by federal judges. But is there a constitutional defense if the platforms “voluntarily” agree to the changes in legal settlements? Can juries be a lynchpin to roll back federal law and the 1st Amendment?

Dister (profile) says:

reshape the entire internet through lawsuits built on a scientific premise that the actual scientists keep telling us is wrong

This is a strawman. It seems to suggest that a particular lawsuit creates a regulatory framework akin to actual policymaking. It does not. That is not how product liability, or even civil liability more generally, works. The recent defective design case against Meta and YouTube stands for liability for a particular set of acts that were found to have a causal relationship with the harms experienced by a particular plaintiff, harms a jury found the defendants should reasonably have foreseen. While this can show a way for other plaintiffs to succeed against social media companies, it is not the same as actual policymaking. As such, the plaintiff could win that lawsuit while the following still holds true:

If you know your teen is vulnerable, perhaps due to existing mental health challenges or social struggles, you may want to be extra careful.
If your teen is using social media in moderation, and it does not seem to be affecting them negatively, it probably isn’t.

Social media may be one piece of the puzzle, but it’s certainly not the whole thing.

Indeed, these passages Mike cites include an acknowledgement that certain people may be harmed. The lawsuit found that the plaintiff was one such person. That is enough. Whether it is widespread is immaterial and, quite frankly, misleading. All that needs to be shown is “proximal cause” between a negligent or reckless act and the harm experienced by the plaintiff.

I get that Mike doesn’t want social media companies to bear responsibility, but this is how civil liability works for everyone else, and even if one disagrees with the policy implications, it is not for the judge or jury on any particular case to set that policy. And we wouldn’t want them to either. The judicial system is there to apply the law, not make it, and it would be unworkable to have every jury and every judge in the nation arguing with each other over whether a harm is widespread enough to decide in favor of an otherwise meritorious case.

The fact that professor Nesi explicitly states that there are at-risk users and that social media is a piece of the puzzle is an acknowledgement that such meritorious cases do in fact exist. Again, Mike may disagree on a policy level that a social media company should be responsible for that, but the correct avenue to create that immunity is legislation that would remove this cause of action. And I am sorry, Section 230 is not that legislation because this lawsuit was not decided on the grounds of any particular content.

In the same way that calling heavy usage of social media “addiction” is misleading and makes solving the problem of actual addiction more difficult, conflating a particular lawsuit with a “blanket ban” founded on inherent and widespread harm makes solving the challenge of liability and balancing the interests of the companies and those potentially harmed by them much more difficult. A lawsuit is not the same as a “blanket ban.” A particular finding of harm caused by a particular act under particular facts is not commentary on how “widespread” or “inherent” that harm is. I think Mike makes a great point that community-based solutions could be a better way to solve these issues on the societal and policy level. But lawsuits are not the policy level, and different considerations are at play in every case. No one is served by conflating the two. If Mike has a problem with the way the jury decided, Mike should address the facts that the jury was exposed to. Not some amorphous strawman of “widespread” or “inherent” rationales for supposed legislative bans.

Anonymous Coward says:

On one hand, some researchers argue that compulsive social media use shares enough features (loss of control, withdrawal-like symptoms, continued use despite harm) to warrant the diagnosis for treatment.

Ah, so just like having meatspace friends, if that’s even still possible, given the lack of places to do anything. So, uh, what are they going to do about banning friends?
