Judge Seems Skeptical That California’s Age Appropriate Design Code Is Compatible With The 1st Amendment

from the fingers-crossed dept

We’ve talked a few times about California’s “Age Appropriate Design Code.” This is a California law that was “sponsored” and pushed for by a UK Baroness (who is also a Hollywood filmmaker and has fallen for moral panic myths about evil technology). As we explained, there is no way for a site like Techdirt to comply with the law: it is vague and sets impossible standards.

While the law says it does not require age verification, it does in effect. It says you have to reasonably “estimate” the age of visitors to a website (something we have zero ability to do, and I have no desire to collect such info), and then analyze every feature on your website to see how it might cause harm to children, as well as put together a plan to “mitigate” such harm. If a site refuses to do “age estimation” (i.e., verification), then it must apply policies intended to mitigate harm to minors to every visitor.

As professor Eric Goldman noted, this bill is a radical experiment on children, from a legislature that claims it’s trying to stop radical experiments on children. As I discussed earlier this year, I submitted a declaration in the lawsuit to invalidate the law, filed by the trade group NetChoice, explaining how the law is a direct attack on Techdirt’s expression.

This past Thursday afternoon, I went to the courthouse in San Jose to watch the oral arguments regarding NetChoice’s motion for a preliminary injunction. I was pretty nervous as to how it would go, because even well-meaning people sometimes put up blinders when people start talking about “protecting the children,” never stopping to look closely at the details.

I came out of the hearing very cautiously optimistic. Now, I always say that you should never read too much into the types of questions a judge asks during oral arguments, but Judge Beth Labson Freeman (as I’ve seen her do in other cases as well) kicked off the hearing by being quite upfront with everyone, telling them where her mind was after reading all the filings in the case. In effect, she said the key issue in her mind was whether or not the AADC actually regulates speech. If it does, then it’s probably an unconstitutional infringement of the 1st Amendment. If it does not, then it’s probably allowed. She specifically said that if she determines the law regulates speech, then the law is clearly not content neutral, meaning it would be subject to strict scrutiny under the 1st Amendment.

So she asked the attorneys to focus on that aspect, though she said there would be time to cover some of the other arguments as well.

She also noted that, of course, keeping children safe online was a laudable goal, and she was sure that everyone supported that goal. And she noted that the fact that the law was passed unanimously “weighed heavily” on her thinking. However, at the end of the day, her role is not to determine if the law is a good law, but just if it’s constitutional.

While California argued that the law doesn’t impact speech, only “data management,” the judge seemed skeptical. She pushed back multiple times on California Deputy Attorney General Elizabeth Watson, who handled the arguments for the state. For NetChoice, Ambika Kumar pointed out how nearly every part of the law focused on content, and that even the declarations the state offered up from “experts,” as well as statements made by state officials about the law, all focused on the problems of “harmful content.”

The state kept trying to insist that the law only applied to the “design” of a website, not the content, but the judge seemed skeptical that you could draw that line. At one point, she noted that the “design” of the NY Times includes the content of the articles.

California tried to argue that the requirement to do a Data Protection Impact Assessment (DPIA) for every feature was simple, and that, since there was no real enforcement mechanism, a site couldn’t be punished for having every DPIA just say “there’s no impact.” The state also claimed that while the law does require a “timed plan” to “mitigate or eliminate” any risk, it was again up to the sites to determine what that plan is.

This left Judge Freeman somewhat incredulous. She observed that the state of California was basically telling every company to fill out every DPIA saying there was no risk to anything it did, and, if it did see a risk, to create a plan that says “we’ll solve this in 50 years,” since that counts as a “timed plan.” She questioned why California would say such a thing, and highlighted that this seemed to suggest the law was too vague, which would itself be a 1st Amendment issue.

The judge also called out that the law suggests kids should be prevented from accessing harmful content, and questioned how that wasn’t a speech regulation. At one point she asked: if, as a parent, you want your kids to read stories in the NY Times that might upset a child, shouldn’t that be up to the parents, not the state?

The state also kept trying to argue that websites “have no right” to collect data, and the judge pointed out that the state cited no authority for that. The discussion turned, repeatedly, to the Supreme Court’s ruling in Sorrell v. IMS Health regarding the 1st Amendment rights of companies to sell private data regarding pharmaceuticals for marketing. The judge repeatedly seemed to suggest that Sorrell strongly supported NetChoice’s argument, while California struggled to argue that the case was different.

At one point, in trying to distinguish Sorrell from this law, California argued that Sorrell was about data about adults, and this bill is about kids (a “won’t you just think of the children” kind of argument) and the judge wasn’t buying it. She pointed out that we already have a federal law in COPPA that gives parents tools to help protect their children. The state started to talk about how hard it was for parents to do so, and the judge snapped back, asking if there was a 1st Amendment exception for when things are difficult for parents.

Later, California tried again to say that NetChoice has to show why companies have a right to data, and the judge literally pointed out that’s not how the 1st Amendment works, saying that we don’t “start with prohibition” and then make entities prove they have a right to speak.

Another strong moment was when the judge quizzed the state regarding the age verification stuff. California tried to argue that companies already collect age data (note: we don’t!) and all the law required them to do was to use that data they already collected to treat users they think are kids differently. But, the judge pushed back and noted that the law effectively says you have to limit access to younger users. California said that businesses could decide for themselves, and the judge jumped in to say that the statute says that companies must set defaults to the level most protective of children, saying: “So, we can only watch Bugs Bunny? Because that’s how I see it,” suggesting that the law would require the Disneyfication of the web.

There was also a fair bit of discussion about a provision in the law requiring companies to accurately enforce their terms of service. NetChoice pointed out that this was also a 1st Amendment issue: if a site put in its terms that it does not allow speech that is “in poor taste,” and the Attorney General enforces the law by claiming the site did not enforce that part of its terms, then the state is determining what is, and what is not, in poor taste, which is a 1st Amendment problem. California retorted that there needs to be some way to deal with a site saying that it won’t collect data, but then doing so anyway. The judge pointed out that, in that case, there might be a breach of contract claim, or the AG could already go after such companies under California’s Unfair Competition Law, which bars deceptive advertising (raising the question of why they need this broad and vague law).

There were some other parts of the discussion, regarding whether the judge could break apart the law, leaving some of it in place and dumping other parts. There was a fair amount of discussion about the level of scrutiny to apply if the judge finds that the law regulates speech, and how the law would fare under such scrutiny (though, again, the judge suggested a few times that the law was unlikely to survive either strict or intermediate scrutiny).

There was also some talk about the Dormant Commerce Clause, which the Supreme Court just somewhat limited. There, NetChoice brought up that the law could create real problems, since it applies to “California residents,” and that’s true even when they’re out of state. That means the law could conflict with the law of another state a California resident was visiting, or create situations where companies would need to know a user was a California resident even when that user was out of state.

The state tried to brush this aside, saying it was such an edge case, and suggested it was silly to think that the Attorney General would try to enforce it. This did not impress the judge, who noted she can’t consider the likelihood of enforcement in reviewing a challenge to the constitutionality of a law. She has to assume that the reason the state passed the law is that it will enforce every violation of it.

Section 230 was mostly not mentioned, as the judge noted that it seemed too early for such a discussion, especially if the 1st Amendment handled the issue. She did note that 230 issues might come up if she allowed the law to go into effect and the state then brought actions against companies, which might be able to use 230 to get such actions dismissed.

Also, there was a point, while exploring the “speech v. data” question, when NetChoice (correctly) pointed out that social media companies publish a ton of user content, and the judge (incorrectly) said “but under 230 it says they’re not publishers,” leading Kumar to politely correct the judge: Section 230 says you can’t treat the company as a publisher, but that doesn’t mean it’s not a publisher.

At another point, the judge also questioned how effective such a law could be (as part of the strict scrutiny analysis), noting that there was no way kids were going to stop using social media, even if the state tried to ban it entirely.

As I said up top, I came out of the hearing cautiously optimistic. The judge seemed focused on the 1st Amendment issues, and spent the (fairly long!) hearing digging in on each individual point that would impact that 1st Amendment analysis (many of which I didn’t cover here, as this is already long enough…).

The judge did note she doesn’t expect her ruling to come out that quickly, and seemed relieved that the law doesn’t go into effect until next summer, but NetChoice (rightly!) pointed out that the effects of the law are already being felt, as companies need to prepare for the law to go into effect, which seemed to take the judge by surprise.

There’s also a bit more supplemental briefing that the judge requested, which will take place next month. So… guessing it’ll be a while until we hear how the judge decides (at which point it will be appealed to the 9th Circuit anyway).

Companies: netchoice


Comments on “Judge Seems Skeptical That California’s Age Appropriate Design Code Is Compatible With The 1st Amendment”

50 Comments

This comment has been flagged by the community.

Koby (profile) says:

Doublespeak

said “but under 230 it says they’re not publishers,” leading Kumar to politely correct the judge, that it says you can’t treat the company as a publisher, but that doesn’t mean it’s not a publisher.

This is the sleazy side of lawyers that most normal people strongly dislike. When you can simultaneously argue that someone is both a publisher and not a publisher, all principle has gone out the window. Regarding user content, the platform is NOT the publisher.

This comment has been flagged by the community.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

Oddly enough, all Anonymous Cowards are the same. Except the ones that aren’t.

Then again, so are all “Jhon Smith”s, which you might recognize as a nominal name akin to “John Doe”.

But since all ACs are the same, I’m debating myself … and losing. (And winning, but it’s the losing that hurts.)

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:2

Want to try and enter the ranks of troll-dom like Jhon? Stick around….

Again, I’m not sure what “Jhon” has to do with my posting. I’m my own person, and very firm and unrepentant in my belief that radical gender ideology is a threat more immediately serious to vulnerable children and young people than any purported existential threat to the human race, and thus, any individual or organization/company/gov’t that encourages or facilitates it must be destroyed (or at least replaced).

And I think transgender people, while obviously mentally-ill, are still irredeemable.

This comment has been flagged by the community.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:5

A greater number of women are realizing that relations with men are meaningless and only cause them to be limited. It saddles them with restrictions of traditional marriage and stay-at-home roles, where instead they should be entitled to career development and bodily autonomy. In which case, lesbianism is the logical, rational and superior option. Men would choose to ignore and rebel against that, but that’s mostly because they are a collection of ignorant boors that evolution is slowly but surely wiping out of the gene pool.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Any law that purports to “safeguard children” by attacking speech isn’t about safeguarding children⁠—it’s about using that phrase to justify the infantilizing of adults.

Yes, there is plenty of speech on the Internet to which children generally shouldn’t be exposed (e.g., porn, hate speech, defenses of Joss Whedon). But demanding age gates on and/or the “age-appropriate” sanitizing of This God Damn Internet™ is not how to properly “safeguard” children from that speech. If anything, the government should have little-to-no role in overriding a parent’s decision on what kids can access. A parent that’s fine with letting their kid browse the fully uncensored Internet will certainly have their parenting called into question (as well they should). But that alone shouldn’t give the government any right to make the Internet more “family-friendly” by way of trying to route around the First Amendment.

Which one is more important to you: having the government safeguard children at all costs, or having the government respect the First Amendment?

This comment has been flagged by the community.

Ariella says:

Re: Re:

Stephen – the problem is that parents do not have control over what kids do on devices that they do not own. For example, schools give kids access to iPads and Chromebooks for school work, and kids go ahead and download social media apps, which we know have a direct correlation with the dramatic rise in teen mental health crises. Read Jonathan Haidt’s Substack to learn more: https://jonathanhaidt.substack.com/p/social-media-not-safe-kids

As a country, we have figured out ways to protect kids’ health when parental control was not always possible, e.g., laws on the sales of cigarettes or alcohol. Curious what you would recommend.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

Here’s a hint:

ACTUALLY PARENT.

Siddown with your kids and spend time with them and all that rot. If you don’t know, GOOGLE THAT SHIT TOGETHER. DO STUFF TOGETHER WHERE REASONABLE. BE THERE FOR THEM REGARDLESS OF WHAT THEY DO AS LONG AS IT’S NOT SELF-HARMING OR LEGITIMATELY DANGEROUS FOR THEM WITHOUT SAFETY GEAR OR WHATNOT.

Oh, and MAKE SOCIETY LESS SHIT FOR THEM TOO WHILE WE’RE AT IT.

This comment has been deemed insightful by the community.
Mr. Blond says:

Re: Re: Re:2

“For example, schools give kids access to iPads and chrome books for school work and kids go ahead and download social media sites, which we know have a direct correlation with the dramatic rise in teen mental health crises.”

Correlation is not causation. Perhaps teens with mental health issues are more likely to seek out social media. Or another factor (academic/extracurricular pressure, for example, and that little matter of a global pandemic) could be the actual cause.

Just like Brown v. EMA talked about violence being a theme in children’s and teens’ literature (Grimm’s Fairy Tales, The Odyssey, Lord of the Flies) for centuries, suicide is similarly mentioned, often in a romantic or glamorized way, and often in academic settings. Romeo and Juliet features two teenage lovers who kill themselves rather than be separated. Hamlet’s “To be or not to be” soliloquy has the titular character contemplating whether suicide is preferable:

(Whether ’tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles
And by opposing end them. To die—to sleep)

Anna Karenina throws herself under a train rather than endure the “stupidity” of life and love. And that’s not to mention how the topic is dealt with in popular culture. For example, MASH‘s theme song is called “Suicide is Painless” and a subplot of the movie involves the humorous failed suicide attempt of the camp’s dentist. And think back to just about every song in the 90’s: Nirvana, Pearl Jam, Alice in Chains and Soundgarden regularly trafficked in themes of depression, hopelessness, isolation, addiction and nihilism. You can’t say that social media would be giving kids anything different.

For the alcohol and tobacco analogy, to paraphrase Pulp Fiction, it’s not the same ballpark, it’s not the same league, it’s not even the same sport. Even putting aside that the former does not involve speech, we can objectively measure the presence or absence of cancer from tobacco. Psychology is not always an objective science, and attempts to show causation of mental issues are very limited.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

parents do not have control over what kids do on devices that they do not own

Then parents can ask those who own those devices what safeguards they have in place on those devices. We don’t need the government essentially age-gating all of the Internet because a parent isn’t willing to have a conversation with someone else.

Read Jonathan Haidt’s sub stack

I’d rather read something good, thanks.

we have figured out ways to protect kid’s health when parental control was not always possible. Eg: laws on the sales of cigarettes or alcohol.

Even if social media is harmful for children, that doesn’t justify any law that goes for the First Amendment’s throat.

Curious what you would recommend.

I’d recommend parents do their fucking job and have conversations with their kids about what they’re doing on the Internet. I’d recommend parents talk with other parents about how they keep track of what their kids are doing on the Internet. I’d recommend parents talk with school officials about the devices being used in schools and what kids are allowed to access through those devices. I’d even recommend that schools do their best to set up filters (including HOSTS file blocks, if possible) so their devices don’t allow access to the full uncensored Internet⁠—even if those filters would be trivially easy to undermine.

What I wouldn’t recommend is gutting the First Amendment for the sake of some “think of the children” bullshit. Would you?

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

…the problem is that parents do not have control over what kids do…

You could stop right there.
Parents never have had complete control over their kids, and never will.

Problem is, kids are autonomous. Most folks would be upset if you declared that they had to be programmed.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

[W]e have figured out ways to protect kid’s health when parental control was not always possible. Eg: laws on the sales of cigarettes or alcohol.

And I’m sure you’ll be happy to show us in the Constitution, or 1A, exactly where cigarettes and/or alcohol are mentioned, let alone protected from any Congressional laws, yes? Oh, wait… it turns out that neither cigarettes nor alcohol is given any kind of ‘get out of jail free’ card, so Congress can pass all the laws it might wish vis-a-vis those two products.

Speech, on the other hand….. (See 1A for absolutely crystal clear instructions on how to treat speech.) As noted by another poster in a different topic/thread/comment, the Constitution does not grant the government the right to make laws “for the protection” of any portion of the population if doing so contravenes Constitutional rights enumerated for the entire population.

Popehat really needs to write another exposition on “Hello, you’ve been referred here because you’re wrong about how to protect children on the internet.”

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re:

To anyone who wants to claim that content needs to be restricted and prohibited and then tries to defend that by claiming that it’s For The Children(tm) my first question to them would be ‘Okay, should children have access to the bible?’

The answer to that would tell you a lot about how much they actually want to protect children from violent and/or sexual content versus how much they’re just using children as an excuse to justify the censorship they want.

This comment has been deemed insightful by the community.
Anonymous Coward says:

FWIW, I just got off the phone with my California state senator’s office and referred them to Techdirt’s articles on this bill’s constitutionality, and your articles on SB 680 (the social media addiction bill that’s currently in appropriations in the state assembly). The person who took my call said I piqued her interest and she’ll be doing some more research on SB 680, starting with reading Techdirt’s coverage.

I encourage anyone in California to bust their state assemblyperson’s door down to kill SB 680 before it can get to the floor for a vote.

This comment has been flagged by the community.

Ariella Feldman (user link) says:

How do we reverse the teen mental health crisis?

I would be curious to know what measures you would recommend be taken to reverse the teen mental health crisis. Suicides and rates of eating disorders and depression/anxiety are on the rise globally – and have risen in lockstep with features such as the introduction of the Like button, the front-facing camera, and TikTok.
It’s clear that social media is not good for kids, but right now there is no mechanism for verifying age on any social media site, and the media giants have not taken any significant steps to ameliorate the content being shown to kids. If you need proof about the abysmal state of teen mental health in the age of social media, just read Jonathan Haidt’s Substack: https://jonathanhaidt.substack.com/p/social-media-not-safe-kids

This comment has been deemed insightful by the community.
Mr. Blond says:

Re:

These findings only show a correlation, not cause and effect. Haidt’s findings are further scrutinized here:

https://reason.com/2023/03/29/the-statistically-flawed-evidence-that-social-media-is-causing-the-teen-mental-health-crisis/

One possible factor could be the pressure on today’s kids to succeed. The goal for my parents’ generation used to be “get into college.” For me, it was “get into a good college.” Now it seems to be “Ivy League or bust”, with anything short of that being seen as a failure. There is increased pressure to not just get top grades, but to take up sports, foreign languages, coding, musical instruments, etc., and to excel in all of them. This gives teens (and even younger kids in some instances) almost no time to be themselves and engage in unstructured play and socialization.

There needs to be a loosening of pressure and expectations. Sure, parents should expect and insist on best efforts and get on their kids when they are not putting the best effort forth, but they don’t have to be in an activity at every hour of the day, and they need to realize that there are other schools besides Harvard. And in general, parents should just get better at recognizing warning signs of depression and suicide, and intervene when necessary.

kgb999 says:

Re: Re: Don't pretend social media isn't flawed to protect 1stA.

While correlation isn’t causation, correlation is often smoke to causation’s fire. A strong correlation on matters of public concern, particularly ones that objectively result in death, does need to be fully explained before being trivially discounted.

But the entire line of discussion is a complete red herring.

Even if there is a demonstrated causal link between social media and negative mental health impacts, even teen suicide, that still wouldn’t justify solutions that trample the 1st Amendment.

There are countless ways to address negative issues related to social media – particularly related to how platforms interact with minors. The strong preponderance of negative outcomes correlated with certain aspects of certain platforms probably justifies society exploring whether a policy response is warranted.

IMO, the problem isn’t “trying to protect the children” per se. I think all of the major platforms need constant pressure in that area. Hell, even with most of them under active consent decrees, they all keep getting caught doing shady crap with kids’ data. The problem is the asinine constitution-breaking ways policy makers keep trying to address the related issues.

Aren’t there any lobbyists out there pushing non-stupid solutions?

Anonymous Coward says:

Re:

You’re over-generalizing. In fact, who gave you status to ascribe the doings of an infinitesimal set of the population to the overwhelming majority of it?

I live in a neighborhood absolutely full of teenagers; the place is overrun with them! And yet, none of them are even close to thinking of getting out while the getting’s good. Nor are their friends and/or schoolmates, etc. Near as I can tell, of the 7 school districts in my county, there’s been only one suicide in the past 8 years… and that was ruled a result of despondency over becoming an orphan at 13 years of age.

So no, I strongly dispute that there is some kind of ‘teen mental health crisis’. But for the life of me, I don’t see a way to make people wake up and start questioning all these so-called ‘studies’ and ‘experts agree’ statements. Those are attempts to drag the thinking portion of the population down to the level of the base-emotion-gut-reaction people.

And that’s sad, very sad.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Plenty of options that aren't 'blame social media'

If you’re actually asking I can think of a few things offhand…

-Get parents more involved with their kids by having them make clear that they are interested in their lives and can be trusted to listen and support them if they have questions and concerns about something they ran across or something they’ve been worrying about.

-Work to destigmatize mental health issues via programs aimed at getting people to understand that there’s nothing wrong with admitting that you might be having problems and asking for help any more than it’s wrong to admit you have a broken arm and going to the doctor for it.

-Paired with the above, institute governmental financial support so that more people can afford to see a medical and/or mental health professional. It doesn’t matter if there are a hundred doctors in your town if it would bankrupt you to see any of them, and easier access to treatment would go a long way toward lowering stress and addressing problems that might be messing someone’s life up either physically or mentally (or both, really, since those two are so intertwined).

Those were just three from the top of my head, I’m sure with more time it would be trivial to come up with plenty of other ways to actually help kids.

This comment has been flagged by the community.

Koby (profile) says:

The whole point of the law is that they are publishers, but of 3rd party content, not 1st party content.

The term “publisher”, for it to be meaningful, implies first party. The third party publisher designation relegates platforms to a mere middleman host, and not an actual publisher. I think we can agree that tech companies are NOT first party publishers of user content, although you underestimate the significance of this distinction.

Anonymous Coward says:

This is really funny and enlightening, especially considering the law this hearing is about.

“So, we can only watch Bugs Bunny? “

So, ancient cartoon violence and adult jokes were long ago normalized as something for children. (Yes, in far later years, later iterations became more child-oriented, but usually with plenty of bits still meant for the adults.)

Also, the violence (and other stuff) that was indeed so much more prevalent in the lives of children in the past (which those grown-up children and grandchildren seem to forget, and which didn’t hurt them none) is ignored in favor of these people trying to regulate the much less violent lives of children today, while claiming those lives are more violent.

Good job, jackasses.

Anonymous Coward says:

Re:

Cartoons were shown prior to the movie in theaters and drive-ins; they were not made just for kids on Saturday mornings, and they were no more violent than those that replaced them. For example, Teenage Mutant Ninja Turtles was violent, as was Power Rangers. I think they were replaced for other reasons; maybe some white nationalist did not like it when Nazis were made fun of, or some uber-rich type took offense at a depiction of industry… I don’t know, but the prevailing excuse was the so-called violence.
