Court Says California’s Age Appropriate Design Code Is Unconstitutional (Just As We Warned)

from the the-1st-amendment-still-matters dept

Some good news! Federal Judge Beth Labson Freeman has recognized what some of us have been screaming about for over a year now: California’s Age Appropriate Design Code (AB 2273) is an unconstitutional mess that infringes on the 1st Amendment. We can add this to the pile of terrible moral panic “protect the children!” laws in Texas and Arkansas that have been similarly rejected (once again showing that moral panic about the internet and children, combined with ignorance of the 1st Amendment, is neither a right issue nor a left issue — it’s both).

The Age Appropriate Design Code in California got almost no media attention while it was being debated, or even after it passed. At times it felt like Professor Eric Goldman and I were the only ones highlighting the problems with the bill. And there are many, many problems, including problems that both Goldman and I told the court about (and both of us were cited in the decision).

For what it’s worth, I’ve heard through the grapevine that one of the reasons there was basically no media coverage is that many of the large tech companies are actually fine with the AADC: they know that they already do most of what the law requires… and they also know full well that smaller companies will get slammed by the law’s requirements, which is kind of a bonus for the big tech companies.

As a reminder, the AADC was “sponsored” (in California, outside organizations can sponsor bills) by an organization created and run by a British baroness who is one of the loudest moral panic spreaders about “the kids on the internet.” Baroness Beeban Kidron has said that it’s her life’s mission to pass these kinds of laws around the world (she already helped get a similarly named bill passed in the UK, and is a driving force behind the dangerous Online Safety Act there as well). The other major sponsor of the AADC is… Common Sense Media, whose nonsense we just called out on another bill. Neither of them understands how the 1st Amendment works.

Thankfully, the judge DOES understand how the 1st Amendment works. As I noted a month and a half ago after attending the oral arguments in person, the judge really seemed to get it. And that comes through in the opinion, which grants the preliminary injunction, blocking the law from going into effect as likely unconstitutional under the 1st Amendment.

The judge notes, as was mentioned in the courtroom, that she’s “mindful” of the fact that the law was passed unanimously, but that doesn’t change the fact that it appears to violate the 1st Amendment. She says that protecting the privacy of people online is obviously a valid concern of the government, but that doesn’t mean you get to ignore the 1st Amendment in crafting a law to deal with it.

California insisted that nothing in the AADC regulated expression, only conduct. But, as the judge had called out at the hearing, it’s quite obvious that’s not true. And thus she finds that the law clearly regulates protected expression:

The State argues that the CAADCA’s regulation of “collection and use of children’s personal information” is akin to laws that courts have upheld as regulating economic activity, business practices, or other conduct without a significant expressive element. Opp’n 11–12 (citations omitted). There are two problems with the State’s argument. First, none of the decisions cited by the State for this proposition involved laws that, like the CAADCA, restricted the collection and sharing of information. See id.; Rumsfeld v. Forum for Acad. & Inst. Rights, Inc., 547 U.S. 47, 66 (2006) (statute denying federal funding to educational institutions restricting military recruiting did not regulate “inherently expressive” conduct because expressive nature of act of preventing military recruitment necessitated explanatory speech); Roulette v. City of Seattle, 97 F.3d 300, 305 (9th Cir. 1996) (ordinance prohibiting sitting or lying on sidewalk did not regulate “forms of conduct integral to, or commonly associated with, expression”); Int’l Franchise, 803 F.3d at 397–98, 408 (minimum wage increase ordinance classifying franchisees as large employers “exhibit[ed] nothing that even the most vivid imagination might deem uniquely expressive”) (citation omitted); HomeAway.com, 918 F.3d at 680, 685 (ordinance regulating forms of short-term rentals was “plainly a housing and rental regulation” that “regulate[d] nonexpressive conduct—namely, booking transactions”); Am. Soc’y of Journalists & Authors, 15 F.4th at 961–62 (law governing classification of workers as employees or independent contractors “regulate[d] economic activity rather than speech”).

Second, in a decision evaluating a Vermont law restricting the sale, disclosure, and use of information about the prescribing practices of individual doctors—which pharmaceutical manufacturers used to better target their drug promotions to doctors—the Supreme Court held the law to be an unconstitutional regulation of speech, rather than conduct. Sorrell, 564 U.S. at 557, 562, 570–71. The Supreme Court noted that it had previously held the “creation and dissemination of information are speech within the meaning of the First Amendment,” 564 U.S. at 570 (citing Bartnicki v. Vopper, 532 U.S. 514, 527 (2001); Rubin v. Coors Brewing Co., 514 U.S. 476, 481 (1995); Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc., 472 U.S. 749, 759 (1985) (plurality opinion)), and further held that even if the prescriber information at issue was a commodity, rather than speech, the law’s “content- and speaker-based restrictions on the availability and use of . . . identifying information” constituted a regulation of speech, id. at 570–71; see also id. at 568 (“An individual’s right to speak is implicated when information he or she possesses is subject to ‘restraints on the way in which the information might be used’ or disseminated.”) (quoting Seattle Times Co. v. Rhinehart, 467 U.S. 20, 32 (1984)).

While California argued that Sorrell didn’t apply here because it was a different kind of information, the court notes that this argument makes no sense.

… the State is correct that Sorrell does not address any general right to collect data from individuals. In fact, the Supreme Court noted that the “capacity of technology to find and publish personal information . . . presents serious and unresolved issues with respect to personal privacy and the dignity it seeks to secure.” Sorrell, 564 U.S. at 579–80. But whether there is a general right to collect data is independent from the question of whether a law restricting the collection and sale of data regulates conduct or speech. Under Sorrell, the unequivocal answer to the latter question is that a law that—like the CAADCA—restricts the “availability and use” of information by some speakers but not others, and for some purposes but not others, is a regulation of protected expression.

And, thus, the court concludes that the AADC’s restrictions on collecting, selling, sharing, or retaining any personal information regulate speech (as a separate note, I’m curious what this also means for California’s privacy laws, on which the AADC is built… but we’ll leave that aside for now).

Separate from the restrictions on information collection, the AADC also has a bunch of mandates. Those also regulate speech:

The State contended at oral argument that the DPIA report requirement merely “requires businesses to consider how the product’s use design features, like nudging to keep a child engaged to extend the time the child is using the product” might harm children, and that the consideration of such features “has nothing to do with speech.” Tr. 19:14–20:5; see also id. at 23:5–6 (“[T]his is only assessing how your business models . . . might harm children.”). The Court is not persuaded by the State’s argument because “assessing how [a] business model[] . . . might harm children” facially requires a business to express its ideas and analysis about likely harm. It therefore appears to the Court that NetChoice is likely to succeed in its argument that the DPIA provisions, which require covered businesses to identify and disclose to the government potential risks to minors and to develop a timed plan to mitigate or eliminate the identified risks, regulate the distribution of speech and therefore trigger First Amendment scrutiny.

And she notes that the AADC pushes companies to create content moderation rules that favor the state’s moderation desires, which clearly is a 1st Amendment issue:

The CAADCA also requires a covered business to enforce its “published terms, policies, and community standards”—i.e., its content moderation policies. CAADCA § 31(a)(9). Although the State argues that the policy enforcement provision does not regulate speech because businesses are free to create their own policies, it appears to the Court that NetChoice’s position that the State has no right to enforce obligations that would essentially press private companies into service as government censors, thus violating the First Amendment by proxy, is better grounded in the relevant binding and persuasive precedent. See Mot. 11; Playboy Ent. Grp., 529 U.S. at 806 (finding statute requiring cable television operators providing channels with content deemed inappropriate for children to take measures to prevent children from viewing content was unconstitutional regulation of speech); NetChoice, LLC v. Att’y Gen., Fla. (“NetChoice v. Fla.”), 34 F.4th 1196, 1213 (11th Cir. 2022) (“When platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First-Amendment-protected activity.”); Engdahl v. City of Kenosha, 317 F. Supp. 1133, 1135–36 (E.D. Wis. 1970) (holding ordinance restricting minors from viewing certain movies based on ratings provided by Motion Picture Association of America impermissibly regulated speech).

Then there’s the “age estimation” part of the bill. Similar to the age verification cases in Arkansas and Texas, this court also recognizes the concerns, including that such a mandate will likely hinder adult access to content as well:

The State argues that “[r]equiring businesses to protect children’s privacy and data implicates neither protected speech nor expressive conduct,” and notes that the provisions “say[] nothing about content and do[] not require businesses to block any content for users of any age.” Opp’n 15. However, the materials before the Court indicate that the steps a business would need to take to sufficiently estimate the age of child users would likely prevent both children and adults from accessing certain content. See Amicus Curiae Br. of Prof. Eric Goldman (“Goldman Am. Br.”) 4–7 (explaining that age assurance methods create time delays and other barriers to entry that studies show cause users to navigate away from pages), ECF 34-1; Amicus Curiae Br. of New York Times Co. & Student Press Law Ctr. (“NYT Am. Br.”) 6 (stating age-based regulations would “almost certain[ly] [cause] news organizations and others [to] take steps to prevent those under the age of 18 from accessing online news content, features, or services”), ECF 56-1. The age estimation and privacy provisions thus appear likely to impede the “availability and use” of information and accordingly to regulate speech.

Again, the court admits that protecting kids is obviously a laudable goal, but you don’t do it by regulating speech. And the fact that California exempted non-profits from the law suggests that it targets only some speakers, a big 1st Amendment no-no.

The Court is keenly aware of the myriad harms that may befall children on the internet, and it does not seek to undermine the government’s efforts to resolve internet-based “issues with respect to personal privacy and . . . dignity.” See Sorrell, 564 U.S. at 579; Def.’s Suppl. Br. 1 (“[T]he ‘serious and unresolved issues’ raised by increased data collection capacity due to technological advances remained largely unaddressed [in Sorrell].”). However, the Court is troubled by the CAADCA’s clear targeting of certain speakers—i.e., a segment of for-profit entities, but not governmental or non-profit entities—that the Act would prevent from collecting and using the information at issue. As the Supreme Court noted in Sorrell, the State’s arguments about the broad protections engendered by a challenged law are weakened by the law’s application to a narrow set of speakers. See Sorrell, 564 U.S. at 580 (“Privacy is a concept too integral to the person and a right too essential to freedom to allow its manipulation to support just those ideas the government prefers”).

Of course, once you establish that protected speech is being regulated, that’s not the end of the discussion. There are situations in which the government is allowed to regulate speech, but only if certain levels of scrutiny are met. During the oral arguments, a decent portion of the time was spent debating whether the AADC should have to pass strict scrutiny or just intermediate scrutiny. Strict scrutiny requires both a compelling state interest in the law and that the law be narrowly tailored to achieve that interest. Intermediate scrutiny requires only an “important government objective” (slightly less than compelling) and that, rather than being “narrowly tailored,” the law be substantially related to achieving that objective.

While I think it seemed clear that strict scrutiny should apply, here the court went with a form of intermediate scrutiny (“commercial speech scrutiny”), not necessarily because the judge thinks it’s the right level, but because if the law is unconstitutional even under intermediate scrutiny, it wouldn’t survive strict scrutiny anyway. And thankfully, the AADC doesn’t even survive the lower level of scrutiny.

The court finds (as expected) that the state has a substantial interest in protecting children, but is not at all persuaded that the AADC does anything to further that interest, basically because the law was terribly drafted. (The opinion leaves out that it had to be terribly drafted: the intent of the bill was to pressure websites to moderate the way the state wanted, but the drafters couldn’t come out and say that, so they had to pretend it was just about “data management.”)

Accepting the State’s statement of the harm it seeks to cure, the Court concludes that the State has not met its burden to demonstrate that the DPIA provisions in fact address the identified harm. For example, the Act does not require covered businesses to assess the potential harm of product designs—which Dr. Radesky asserts cause the harm at issue—but rather of “the risks of material detriment to children that arise from the data management practices of the business.” CAADCA § 31(a)(1)(B) (emphasis added). And more importantly, although the CAADCA requires businesses to “create a timed plan to mitigate or eliminate the risk before the online service, product, or feature is accessed by children,” id. § 31(a)(2), there is no actual requirement to adhere to such a plan. See generally id. § 31(a)(1)-(4); see also Tr. 26:9–10 (“As long as you write the plan, there is no way to be in violation.”).

Basically, California tried to tap dance around the issues. Knowing it couldn’t come out and say that it was trying to regulate content moderation on websites, the state claims that it’s simply regulating “data management practices,” but the harms detailed by the state’s own expert (which drive the state’s substantial interest in passing the law) are all about the content on websites. So, by admitting that the law doesn’t directly require moderation (which would be clearly unconstitutional, but would address the harms described), the state effectively admitted that the AADC does not actually address the stated issue.

Because the DPIA report provisions do not require businesses to assess the potential harm of the design of digital products, services, and features, and also do not require actual mitigation of any identified risks, the State has not shown that these provisions will “in fact alleviate [the identified harms] to a material degree.” Id. The Court accordingly finds that NetChoice is likely to succeed in showing that the DPIA report provisions provide “only ineffective or remote support for the government’s purpose” and do not “directly advance” the government’s substantial interest in promoting a proactive approach to the design of digital products, services, and features. Id. (citations omitted). NetChoice is therefore likely to succeed in showing that the DPIA report requirement does not satisfy commercial speech scrutiny.

So California got way too clever in writing the AADC, trying to wink-wink-nod-nod its way around the 1st Amendment. By not coming out and saying that the law requires moderation, the state effectively admitted that the law doesn’t actually address the problems it claims to be addressing.
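To put the loophole the court describes in concrete terms, here is a minimal sketch (in Python, with entirely hypothetical names of my own; nothing below comes from the statute’s text or any real compliance tool) of why merely writing the plan satisfies the DPIA provisions:

```python
# Illustrative sketch only. The CAADCA requires a covered business to
# write a DPIA report identifying risks and a "timed plan" to mitigate
# them, but, as the court notes, it never requires following that plan.
from dataclasses import dataclass

@dataclass
class DPIAReport:
    identified_risks: list[str]
    timed_mitigation_plan: str           # must exist before children access the service
    mitigations_performed: bool = False  # the statute never checks this field

def violates_dpia_provisions(report: DPIAReport | None) -> bool:
    # Per the court: "As long as you write the plan, there is no way to
    # be in violation." Note that mitigations_performed is never consulted.
    return report is None or not report.timed_mitigation_plan

report = DPIAReport(
    identified_risks=["nudging features extend children's time on the service"],
    timed_mitigation_plan="remove nudging features... eventually",
)
print(violates_dpia_provisions(report))  # -> False: compliant without ever acting
```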

Ditto for the “age estimation” requirement. Here, California tried to tap dance around the mandate by saying it isn’t a requirement at all: it’s just that if you don’t do age estimation, you have to treat ALL users as if they’re children. Again, this attempt at being clever backfires by making it clear that the law would restrict access to content for adults:

Putting aside for the moment the issue of whether the government may shield children from such content—and the Court does not question that the content is in fact harmful—the Court here focuses on the logical conclusion that data and privacy protections intended to shield children from harmful content, if applied to adults, will also shield adults from that same content. That is, if a business chooses not to estimate age but instead to apply broad privacy and data protections to all consumers, it appears that the inevitable effect will be to impermissibly “reduce the adult population … to reading only what is fit for children.” Butler v. Michigan, 352 U.S. 380, 381, 383 (1957). And because such an effect would likely be, at the very least, a “substantially excessive” means of achieving greater data and privacy protections for children, see Hunt, 638 F.3d at 717 (citation omitted), NetChoice is likely to succeed in showing that the provision’s clause applying the same process to all users fails commercial speech scrutiny.
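And to make that default concrete, here is a minimal sketch (again Python, again with hypothetical names that are mine, not the statute’s) of the either/or the law creates, and why skipping age estimation sweeps adults into the child-level rules:

```python
# Illustrative sketch only: models the CAADCA's choice between estimating
# each user's age or applying child-level protections to every user.
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    estimated_age: Optional[int]  # None means the site chose not to estimate age

def protection_tier(user: User) -> str:
    if user.estimated_age is None:
        # No age estimation: the statute's fallback treats everyone,
        # including adults, as children. This is the Butler v. Michigan
        # problem the court flags: adults get "only what is fit for children."
        return "child-level protections (applied to all users)"
    return "child-level protections" if user.estimated_age < 18 else "adult defaults"

print(protection_tier(User(estimated_age=None)))  # adult or not, child rules apply
print(protection_tier(User(estimated_age=35)))    # -> adult defaults
```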

Similarly, regarding the requirement for higher levels of privacy protection, the court cites the NY Times’ amicus brief, basically saying that this law will lead many sites to restrict content to those over 18:

NetChoice has provided evidence that uncertainties as to the nature of the compliance required by the CAADCA is likely to cause at least some covered businesses to prohibit children from accessing their services and products altogether. See, e.g., NYT Am. Br. 5–6 (asserting CAADCA requirements that covered businesses consider various potential harms to children would make it “almost certain that news organizations and others will take steps to prevent those under the age of 18 from accessing online news content, features, or services”). Although the State need not show that the Act “employs . . . the least restrictive means” of advancing the substantial interest, the Court finds it likely, based on the evidence provided by NetChoice and the lack of clarity in the provision, that the provision here would serve to chill a “substantially excessive” amount of protected speech to the extent that content providers wish to reach children but choose not to in order to avoid running afoul of the CAADCA.

Again and again, for each provision in the AADC, the court finds that the law can’t survive this intermediate level of scrutiny, as each part of the law seems designed to pretend to do one thing while really intending to do another, and therefore it is clearly not well targeted (nor can it be, since accurately targeting it would only make the 1st Amendment concerns more direct).

For example, take the provision that bars a website from using the personal info of a child in a way that is “materially detrimental to the physical health, mental health, or well-being of a child.” As we pointed out while the bill was being debated, this is ridiculously broad, and could conceivably cover information that a teenager finds upsetting. But that can’t be the law. And the court notes the lack of specificity here, especially given that children at different ages will react to content very differently:

The CAADCA does not define what uses of information may be considered “materially detrimental” to a child’s well-being, and it defines a “child” as a consumer under 18 years of age. See CAADCA § 30. Although there may be some uses of personal information that are objectively detrimental to children of any age, the CAADCA appears generally to contemplate a sliding scale of potential harms to children as they age. See, e.g., Def.’s Suppl. Br. 3, 4 (describing Act’s requirements for “age-appropriate” protections). But as the Third Circuit explained, requiring covered businesses to determine what is materially harmful to an “infant, a five-year old, or a person just shy of age seventeen” is not narrowly tailored.

So, again, by trying to be clever and not detailing the levels at which something can be deemed “age appropriate,” the “age appropriate design code” fails the 1st Amendment test.

There is also an important discussion about some of the AADC requirements that would likely pressure sites to remove content that would be beneficial to “vulnerable” children:

NetChoice has provided evidence indicating that profiling and subsequent targeted content can be beneficial to minors, particularly those in vulnerable populations. For example, LGBTQ+ youth—especially those in more hostile environments who turn to the internet for community and information—may have a more difficult time finding resources regarding their personal health, gender identity, and sexual orientation. See Amicus Curiae Br. of Chamber of Progress, IP Justice, & LGBT Tech Inst. (“LGBT Tech Am. Br.”), ECF 42-1, at 12–13. Pregnant teenagers are another group of children who may benefit greatly from access to reproductive health information. Id. at 14–15. Even aside from these more vulnerable groups, the internet may provide children— like any other consumer—with information that may lead to fulfilling new interests that the consumer may not have otherwise thought to search out. The provision at issue appears likely to discard these beneficial aspects of targeted information along with harmful content such as smoking, gambling, alcohol, or extreme weight loss.

The court points out the sheer inanity of California’s defense on this point, which suggests that there’s some magical way to know how to leave available just the beneficial stuff:

The State argues that the provision is narrowly tailored to “prohibit[] profiling by default when done solely for the benefit of businesses, but allows it . . . when in the best interest of children.” Def.’s Suppl. Br. 6. But as amici point out, what is “in the best interest of children” is not an objective standard but rather a contentious topic of political debate. See LGBT Tech Am. Br. 11–14. The State further argues that children can still access any content online, such as by “actively telling a business what they want to see in a recommendations profile – e.g., nature, dance videos, LGBTQ+ supportive content, body positivity content, racial justice content, etc.” Radesky Decl. ¶ 89(b). By making this assertion, the State acknowledges that there are wanted or beneficial profile interests, but that the Act, rather than prohibiting only certain targeted information deemed harmful (which would also face First Amendment concerns), seeks to prohibit likely beneficial profiling as well. NetChoice’s evidence, which indicates that the provision would likely prevent the dissemination of a broad array of content beyond that which is targeted by the statute, defeats the State’s showing on tailoring, and the Court accordingly finds that the State has not met its burden of establishing that the profiling provision directly advances the State’s interest in protecting children’s well-being. NetChoice is therefore likely to succeed in showing that the provision does not satisfy commercial speech scrutiny.

This same issue comes up in the prohibition on “dark patterns,” which the law does not clearly define, and which again raises the question of how a site is supposed to magically know what is “materially detrimental.”

The last of the three prohibitions of CAADCA § 31(b)(7) concerns the use of dark patterns to “take any action that the business knows, or has reason to know, is materially detrimental” to a child’s well-being. The State here argues that dark patterns cause harm to children’s well-being, such as when a child recovering from an eating disorder “must both contend with dark patterns that make it difficult to unsubscribe from such content and attempt to reconfigure their data settings in the hope of preventing unsolicited content of the same nature.” Def.’s Suppl. Br. 7; see also Amicus Curiae Br. of Fairplay & Public Health Advocacy Inst. (“Fairplay Am. Br.”) 4 (noting that CAADCA “seeks to shift the paradigm for protecting children online,” including by “ensuring that children are protected from manipulative design (dark patterns), adult content, or other potentially harmful design features.”) (citation omitted), ECF 53-1. The Court is troubled by the “has reason to know” language in the Act, given the lack of objective standard regarding what content is materially detrimental to a child’s well-being. See supra, at Part III(A)(1)(a)(iv)(7). And some content that might be considered harmful to one child may be neutral at worst to another. NetChoice has provided evidence that in the face of such uncertainties about the statute’s requirements, the statute may cause covered businesses to deny children access to their platforms or content. See NYT Am. Br. 5–6. Given the other infirmities of the provision, the Court declines to wordsmith it and excise various clauses, and accordingly finds that NetChoice is likely to succeed in showing that the provision as a whole fails commercial speech scrutiny.

Given the 1st Amendment problems with the law, the court doesn’t even bother with the argument that the AADC violates the Dormant Commerce Clause, saying it doesn’t need to go there, and also highlighting that it’s a “thorny constitutional issue” that is in flux due to a very recent Supreme Court decision. While the judge doesn’t go into much detail on the argument that existing federal laws (COPPA and Section 230) preempt California’s law, she does say she doesn’t think that argument alone would be strong enough to support a preliminary injunction, because the question of preemption would depend on which policies were impacted (basically: the law might be preempted, but we can’t tell until someone tries to enforce it).

I fully expect the state to appeal, and the issue will go up to the 9th Circuit. Hopefully that court sees the problems as clearly as the judge here did.

Companies: netchoice


Comments on “Court Says California’s Age Appropriate Design Code Is Unconstitutional (Just As We Warned)”

13 Comments
Mr. Blond says:

A preliminary injunction is not a ruling on the merits. They can’t appeal until the final decision is issued (though I don’t see the judge changing course after a PI).

Not sure if it’s a “win” to apply lower scrutiny. On the one hand, it shows that the law wouldn’t survive even the most basic level. On the other, it could get courts in the habit of applying lower levels of scrutiny just because one side claims something is commercial speech. Is the level of scrutiny something that could come up on appeal?

Finally, this got me thinking: if a law like this couldn’t even pass commercial speech scrutiny, it’s possible that COPPA itself is on shaky ground. Reading the decision, I noticed the judge said on several occasions that the state had not identified a concrete harm that the law would effectively prevent. COPPA is based on the premise that data collection is harmful because of advertising (on the assumption that all targeted advertising is harmful regardless of the product or service being advertised). It seems a hair’s breadth away from what just got struck down here.

Anonymous Coward says:

Re:

  1. In theory, any decision by a judge could be appealed. So while the injunction may be preliminary, the judge’s decision in granting it can be appealed.
  2. Applying lower scrutiny is an artifact of this being a preliminary ruling. The judge is not saying that commercial scrutiny should apply (that will be decided at the real trial), but is pointing out that even if it did, the law would still fail. Basically the judge is using this to state that the law is unconstitutional no matter what.
Darkness Of Course (profile) says:

This sounds like ISO 9000

Writing the plan to cure the problems protects you from the negative aspects of the law. There is no need to do any of those actions, as that is not required.

I was involved in an ISO 9000 program in a software organization. It was interesting. However, there are several layers of ISO 9000 compliance.

9000 is simplest. List the problems in the process.
You don’t have to fix anything. Just say, this is broken.
9001 increases it a tiny bit, but not by much.

Management was insistent, so we continued onward, until we informed management that the next step consisted only of management changes: preventing problems, and ensuring that all the documented issues would be addressed.

End of project.
