We keep seeing it show up in a variety of places: laws to “protect the children” that fundamentally begin with age verification to figure out who is a child (and then layer in a ton of often questionable requirements for how to deal with those identified as children). We have the Online Safety Bill in the UK. We have California’s Age Appropriate Design Code, which a bunch of states are rushing to emulate in their own legislatures. In Congress, there is the Kids Online Safety Act.
All of these, in the name of “protecting the children,” include elements that effectively require sites to use age verification technology. We’ve already spent many, many words explaining how age verification technology is inherently dangerous and actually puts children at greater risk. Not to mention it’s a privacy nightmare that normalizes the idea of mass surveillance, especially for children.
Now, there are many things that I disagree with CNIL about, especially its view that the EU’s censorial “right to be forgotten” should be applied globally. But one thing we likely agree on is that CNIL does not fuck around when it comes to data protection stuff. CNIL is generally seen as the most aggressive and most thorough regulator in its data protection/data privacy work. Being on the wrong side of CNIL is a dangerous place for any company to be.
So I’d take it seriously when CNIL effectively notes that all age verification is a privacy nightmare, especially for children:
The CNIL has analysed several existing solutions for online age verification, checking whether they have the following properties: sufficiently reliable verification, complete coverage of the population and respect for the protection of individuals’ data and privacy and their security.
The CNIL finds that there is currently no solution that satisfactorily meets these three requirements.
Basically, CNIL found that all existing age verification techniques are unreliable, easily bypassed, and horrible for privacy.
Despite this, CNIL seems oddly optimistic that just by nerding harder, perhaps future solutions will magically work. However, it does go through the weaknesses and problems of the various offerings being pushed today as solutions. For example, you may recall that when I called out the dangers of the age verification in California’s Age Appropriate Design Code, a trade group representing age verification companies reached out to me to let me know there was nothing to worry about, because they’d just scan everyone’s faces to visit websites. CNIL points out some, um, issues with this:
The use of such systems, because of their intrusive aspect (access to the camera on the user’s device during an initial enrolment with a third party, or a one-off verification by the same third party, which may be the source of blackmail via the webcam when accessing a pornographic site is requested), as well as because of the margin of error inherent in any statistical evaluation, should imperatively be conditional upon compliance with operating, reliability and performance standards. Such requirements should be independently verified.
This type of method must also be implemented by a trusted third party respecting precise specifications, particularly concerning access to pornographic sites. Thus, an age estimate performed locally on the user’s terminal should be preferred in order to minimise the risk of data leakage. In the absence of such a framework, this method should not be deployed.
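To make that “performed locally on the user’s terminal” idea concrete, here is a rough sketch (my own illustration, not CNIL’s and not any vendor’s actual implementation) of what a privacy-minimizing flow would look like: the estimate is computed on the device, and only a coarse age range, never the image, is sent to the website.

```python
# A rough, illustrative sketch (not CNIL's, and not any vendor's real code)
# of the flow CNIL says should be preferred: the age estimate is computed
# locally on the user's device, and only a coarse age range, never the
# image itself, is sent to the website.

from dataclasses import dataclass


@dataclass
class AgeClaim:
    age_range: str     # e.g. "under-13", "13-17", "18+"
    confidence: float  # any such estimate carries a margin of error


def estimate_age_on_device(image_bytes: bytes) -> AgeClaim:
    """Stand-in for an on-device model; a real one would analyse the image
    and discard it immediately after producing the estimate."""
    estimated_age, confidence = 21.0, 0.7  # placeholder output
    if estimated_age < 13:
        return AgeClaim("under-13", confidence)
    if estimated_age < 18:
        return AgeClaim("13-17", confidence)
    return AgeClaim("18+", confidence)


def payload_sent_to_site(claim: AgeClaim) -> dict:
    """Only the coarse range leaves the device: no image, no identity."""
    return {"age_range": claim.age_range}


if __name__ == "__main__":
    claim = estimate_age_on_device(b"fake-image-bytes")
    print(payload_sent_to_site(claim))  # {'age_range': '18+'}
```

Even in the best case, note that the output is still a statistical estimate with a margin of error, which is part of CNIL’s point.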
Every other verification technique similarly raises questions about its effectiveness and about how protective (or, well, how unprotective) it is of privacy rights.
So… why isn’t this raising alarm bells among the various legislatures and children’s advocates (many of whom also claim to be privacy advocates) who are pushing for these laws?
It really does feel like the legislative process regarding the tech world and privacy is a complete mess. While politicians are right that it would be good if we got a comprehensive privacy bill in place, they seem to have no idea what that even means. Actually, it seems like they don’t even know what privacy means. And thus, the mess just continues. California tried to leap ahead into the unknown by putting together a truly ridiculous bill (the CCPA) that no one has even figured out yet, despite it having passed years ago. And, without even bothering to understand any of it, California has pushed ahead again with its Age Appropriate Design Code law, which somehow intersects with the CCPA, but again, no one’s quite sure how or why.
And now, people are pointing out that the kids’ code is actually messing up plans for a federal privacy law. Even before the law was signed by Governor Newsom, House Speaker Nancy Pelosi announced that she was putting the brakes on the only federal privacy bill with any traction (not that it was good…) because it might upset Californian politicians. The concern: federal law might pre-empt California’s laws:
“However, Governor Newsom, the California Privacy Protection Agency and top state leaders have pointed out the American Data Privacy and Protection Act does not guarantee the same essential consumer protections as California’s existing privacy laws. Proudly, California leads the nation not only in innovation, but also in consumer protection. With so much innovation happening in our state, it is imperative that California continues offering and enforcing the nation’s strongest privacy rights. California’s landmark privacy laws and the new kids age-appropriate design bill, both of which received unanimous and bipartisan support in both chambers, must continue to protect Californians — and states must be allowed to address rapid changes in technology.”
The concern is that the federal law would basically wipe out state laws. I know that some people are concerned about this, but a federal law really needs to do exactly that. First off, whatever you think of California’s attempts at privacy laws, there are all those other states out there as well. And we’re already seeing how states like Florida and Texas have been passing dangerous content moderation bills that are more designed to spite internet companies than actually protect users.
How soon do you think they’re going to do the same with privacy laws as well?
Second, it’s basically impossible for smaller companies to comply with even California’s weird law. How are we going to comply with 50 separate state laws, each with its own variations and quirks and problems (and, likely, contradictions)? A federal law that pre-empts state laws sets a single standard across the country. As bad as the EU’s Digital Services Act and Digital Markets Act may turn out to be, at the very least, they’re trying to harmonize the laws across the EU.
The US, which should be more harmonized than the EU in general, seems to be going in the other direction.
Yes, sure, California feels the need to do stuff because no one in DC can get their act together to pass a reasonable federal privacy law. But that doesn’t mean that we should just let any state do whatever it wants (or what UK aristocrats want).
The fact that this awful California law is now being used as an excuse to hold up any effort on a federal privacy law is really, really silly. And, to be clear, it almost certainly is an excuse, because Pelosi and others in Congress know that they’re currently unable to pass any actually serious privacy law, so claiming that a federal bill would somehow “block” terrible California laws is a way to hide their own failings.
But, at the very least, it seems to suggest that maybe California should stop rushing through so many half-baked laws.
When a proposed new law is sold as “protecting kids online,” regulators and commenters often accept the sponsors’ claims uncritically (because… kids). This is unfortunate because those bills can harbor ill-advised policy ideas. The California Age-Appropriate Design Code (AADC / AB2273, just signed by Gov. Newsom) is an example of such a bill. Despite its purported goal of helping children, the AADC delivers a “hidden” payload of several radical policy ideas that sailed through the legislature without proper scrutiny. Given the bill’s highly experimental nature, there’s a high chance it won’t work the way its supporters think–with potentially significant detrimental consequences for all of us, including the California children that the bill purports to protect.
In no particular order, here are five radical policy ideas baked into the AADC:
Permissioned innovation. American business regulation generally encourages “permissionless” innovation. The idea is that society benefits from more, and better, innovation if innovators don’t need the government’s approval.
The AADC turns this concept on its head. It requires businesses to prepare “impact assessments” before launching new features that kids are likely to access. Those impact assessments will be freely available to government enforcers at their request, which means the regulators and judges are the real audience for those impact assessments. As a practical matter, given the litigation risks associated with the impact assessments, a business’ lawyers will control those processes–with associated delays, expenses, and prioritization of risk management instead of improving consumer experiences.
While the impact assessments don’t expressly require government permission to proceed, they have some of the same consequences. They put the government enforcer’s concerns squarely in the room during the innovation development (usually as voiced by the lawyers), they encourage self-censorship by the business if they aren’t confident that their decisions will please the enforcers, and they force businesses to make the cost-benefit calculus before the business has gathered any market feedback through beta or A/B tests. Obviously, these hurdles will suppress innovations of all types, not just those that might affect children. Alternatively, businesses will simply route around this by ensuring their features aren’t available at all to children–one of several ways the AADC will shrink the Internet for California children.
Also, to the extent that businesses are self-censoring their speech (and my position is that all online “features” are “speech”) because of the regulatory intervention, then permissioned innovation raises serious First Amendment concerns.
Disempowering parents. A foundational principle among regulators is that parents know their children best, so most child protection laws center around parental decision-making (e.g., COPPA). The AADC turns that principle on its head and takes parents completely out of the equation. Even if parents know their children best, per the AADC, parents have no say at all in the interaction between a business and their child. In other words, despite the imbalance in expertise, the law obligates businesses, not parents, to figure out what’s in the best interest of children. Ironically, the bill cites evidence that “In 2019, 81 percent of voters said they wanted to prohibit companies from collecting personal information about children without parental consent” (emphasis added), but then the bill drafters ignored this evidence and stripped out the parental consent piece that voters assumed. It’s a radical policy for the AADC to essentially tell parents “tough luck” if they don’t like the Internet that the government is forcing on their children.
Fiduciary obligations to a mass audience. The bill requires businesses to prioritize the best interests of children above all else. For example: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” Although the AADC doesn’t use the term “fiduciary” obligations, that’s functionally what the law creates. However, fiduciary obligations are typically imposed in 1:1 circumstances, like a lawyer representing a client, where the professional can carefully consider and advise about an individual’s unique needs. It’s a radical move to impose fiduciary obligations towards millions of individuals simultaneously, where there are no individual considerations at all.
The problems with this approach should be immediately apparent. The law treats children as if they all have the same needs and face the same risks, but “children” are too heterogeneous to support such stereotyping. Most obviously, the law lumps together 17-year-olds and 2-year-olds, even though their risks and needs are completely different. More generally, consumer subpopulations often have conflicting needs. For example, it’s been repeatedly shown that some social media features provide a net benefit to a majority or plurality of users, while other subcommunities of minors don’t benefit from those features. Now what? The business is supposed to prioritize the best interests of “children,” but the presence of some children who don’t benefit indicates that the business has violated its fiduciary obligation towards that subpopulation, and that creates unmanageable legal risk–despite the many other children who would benefit. Effectively, if businesses owe fiduciary obligations to diverse populations with conflicting needs, it’s impossible to serve those populations at all. To avoid this paralyzing effect, services will screen out children entirely.
Normalizing face scans. Privacy advocates actively combat the proliferation of face scanning because of the potentially lifelong privacy and security risks created by those scans (i.e., you can’t change your face if the scan is misused or stolen). Counterproductively, this law threatens to make face scans a routine and everyday occurrence. Every time you go to a new site, you may have to scan your face–even at services you don’t yet know if you can trust. What are the long-term privacy and security implications of routinized and widespread face scanning? What does that do to people’s long-term privacy expectations (especially kids, who will infer that face scans are just what you do)? Can governments use the face scanning infrastructure to advance interests that aren’t in the interests of their constituents? It’s radical to motivate businesses to turn face scanning of children into a routine activity–especially in a privacy bill.
(Speaking of which–I’ve been baffled by the low-key response of the privacy community to the AADC. Many of their efforts to protect consumer privacy won’t likely matter in the long run if face scans are routine).
Frictioned Internet navigation. The Internet thrives in part because of the “seamless” nature of navigating between unrelated services. Consumers are so conditioned to expect frictionless navigation that they respond poorly when modest barriers are erected. The Ninth Circuit just explained:
The time it takes for a site to load, sometimes referred to as a site’s “latency,” is critical to a website’s success. For one, swift loading is essential to getting users in the door…Swift loading is also crucial to keeping potential site visitors engaged. Research shows that sites lose up to 10% of potential visitors for every additional second a site takes to load, and that 53% of visitors will simply navigate away from a page that takes longer than three seconds to load. Even tiny differences in load time can matter. Amazon recently found that every 100 milliseconds of latency cost it 1% in sales.
After the AADC, before you can go to a new site, you will have to either scan your face or upload age-authenticating documents. This adds many seconds or minutes to the navigation process, plus there’s the overall inhibiting effect of concerns about privacy and security. How will these barriers change people’s web “surfing”? I expect it will fundamentally change people’s willingness to click on links to new services. That will benefit incumbents–and hurt new market entrants, who have to convince users to do age assurance before users trust them. It’s radical for the legislature to make such a profound and structural change to how people use and enjoy an essential resource like the Internet.
A final irony. All new laws are essentially policy experiments, and the AADC is no exception. But to be clear, the AADC is expressly conducting these experiments on children. So what diligence did the legislature do to ensure the “best interest of children,” just like it expects businesses to do post-AADC? Did the legislature do its own impact assessment like it expects businesses to do? Nope. Instead, the AADC deploys multiple radical policy experiments without proper diligence and basically hopes for the best for children. Isn’t it ironic?
I’ll end with a shoutout to the legislators who voted for this bill: if you didn’t realize how the bill was packed with radical policy ideas when you voted yes, did you even do your job?
This isn’t a surprise, but it’s still frustrating. Gavin Newsom, who wants to be President some day, and thus couldn’t risk misleading headlines that he didn’t “protect the children,” has now signed AB 2273 into law (this follows on yesterday’s decision to sign the bad, but slightly less destructive, AB 587 into law). At this point there’s not much more I can say about why AB 2273 is so bad. I’ve explained why it’s literally impossible to comply with (and why many sites will just ignore it). I’ve explained how it’s pretty clearly unconstitutional. I’ve explained how the whole idea was pushed for and literally sponsored by a Hollywood director / British baroness who wants to destroy the internet. I’ve explained how it won’t do much, if anything, to protect children, but will likely put them at much greater risk. I’ve explained how the company it will likely benefit most is the world’s largest porn company— not to mention COVID disinfo peddlers and privacy lawyers. I’ve explained how the companies supporting the law insist that we shouldn’t worry because websites will just start scanning your face when you visit.
None of that matters, though.
Because, in this nonsense political climate where moral panics and culture wars are all that matter in politics, politicians are going to back laws that claim to “protect the children,” no matter how much of a lie that is.
Newsom, ever the politician, did the political thing here. He gets his headlines pretending he’s protecting kids.
“We’re taking aggressive action in California to protect the health and wellbeing of our kids,” said Governor Newsom. “As a father of four, I’m familiar with the real issues our children are experiencing online, and I’m thankful to Assemblymembers Wicks and Cunningham and the tech industry for pushing these protections and putting the wellbeing of our kids first.”
The press release includes a quote from Newsom’s wife, who is also a Hollywood documentary filmmaker, similar to the baroness.
“As a parent, I am terrified of the effects technology addiction and saturation are having on our children and their mental health. While social media and the internet are integral to the way we as a global community connect and communicate, our children still deserve real safeguards like AB 2273 to protect their wellbeing as they grow and develop,” said First Partner Jennifer Siebel Newsom. “I am so appreciative of the Governor, Assemblymember Cunningham, and Assemblymember Wicks’ leadership and partnership to ensure tech companies are held accountable for the online spaces they design and the way those spaces affect California’s children.”
Except that the bill does not create “real safeguards” for children. It creates a massive amount of busywork to try to force companies to dumb down the internet, while also forcing intrusive age verification technologies on tons of websites.
It puts tremendous power in the hands of the Attorney General.
The bill doesn’t go into effect until the middle of 2024 and I would assume that someone will go to court to challenge it, meaning that what this bill is going to accomplish in the short run is California wasting a ton of taxpayer dollars (just as Texas and Florida did) to try to pretend they have the power to tell companies how to design their products.
It’s all nonsense grandstanding and Governor Newsom knows it, because I know that people have explained all this to him. But getting the headlines is more important than doing the right thing.
It’s often kind of amazing how much moral panics by adults treat kids as if they’re completely stupid and unable to do anything themselves. It’s a common theme in all sorts of moral panics, where adults insist that because some bad things could happen, they must be prevented entirely — without ever considering that maybe a large percentage of kids are capable enough to deal with the risks and dangers themselves.
The Boston Globe recently had an interesting article about how a group of middle school boys were able to use Discord to successfully track the creepy, disgusting, and inappropriate shit one of their teachers/coaches did towards their female classmates, and how that data is now being used in an investigation of the teacher, who has been put on leave.
In an exclusive interview with The Boston Globe, one of the boys described how in January 2021, he and his friends decided to start their “Pedo Database,” to track the teacher’s words and actions.
There’s even a (redacted) screenshot of the start of the channel.
The kids self-organized and used Discord as a useful tool for tracking the problematic interactions.
During COVID, as they attended class online, they’d open the Discord channel on a split-screen and document the teacher’s comments in real time:
“You all love me so choose love.”
“You gotta stand up and dance now.”
Everyone “in bathing suits tomorrow.”
Once they were back in class in person, the boys jotted down notes to add to the channel later: Flirting with one girl. Teasing another. Calling the girls “sweetheart” and “sunshine.” Asking one girl to take off her shoes and try wiggling her toes without moving her pinkies.
“I felt bad for [the girls] because sometimes it just seems like it was a humiliating thing,” the boy told the Globe. “He’d play a song and he’d make one of them get up and dance.”
When the school year ended, the boys told incoming students about the Discord channel and encouraged them to keep tabs on the teacher. All in all, eight boys were involved, he said.
Eventually, the teacher was removed from the school and put on leave, after the administration began an investigation following claims that “the teacher had stalked a pre-teen girl at the middle school while he was her coach, and had been inappropriate with other girls.”
The article notes that there had been multiple claims in the past against the teacher, but that other teachers and administrators long protected the teacher. Indeed, apparently the teacher bragged about how he’d survived such complaints for decades. And that’s when the kids stepped up and realized they needed to start doing something themselves.
“I don’t think there was a single adult who would ever — like their parents, my mom, like anybody in the school — who had ever really taken the whole thing seriously before,” he added.
The boy’s mother contacted Conlon, and now the “Pedo Database” is in the hands of the US attorney’s Office, the state Department of Children, Youth, and Families, the state Department of Education, and with lawyer Matthew Oliverio, who is conducting the school’s internal investigation.
“I did not ever think this would actually be used as evidence, but we always had it as if it was,” said the boy, who is now 15 and a student at North Kingstown High School. “So I’m glad that we did, even though it might have seemed like slightly stupid at times.”
So, here we have kids who used the internet to keep track of a teacher accused of preying on children. Seems like a good example of helping to protect children.
Yet, it seems worth noting that under various “protect the children” laws, this kind of activity would likely be blocked. Already, under COPPA, it’s questionable if the kids should even be allowed on Discord. Discord, like many websites, limits usage in its terms of service to those 13 years or older. That’s likely in an attempt to comply with COPPA. But, the article notes that the kids started keeping this database as 6th graders, when they were likely 11 years old.
Also, under California’s AB 2273, Discord likely would have been more aggressive in banning them, as it would have had to employ much more stringent age verification tools that likely would have barred them from the service entirely. Also, given the other requirements of the “Age Appropriate Design Code,” it seems likely that Discord would be doing things like barring a chat channel described as a “pedo database.” A bunch of kids discussing possible pedophilia? Clearly that should be blocked as potentially harmful.
So, once again, the law, rather than protecting kids, might have actually put them more at risk, and done more to actually protect adults who were putting kids’ safety at risk.
Hany Farid is a computer science professor at Berkeley. He insists that his students should all delete Facebook and YouTube because they often recommend to you things you might like (the horror, the horror).
Farid once did something quite useful, in that he helped Microsoft develop PhotoDNA, a tool that has been used to help websites find and stop child sexual abuse material (CSAM) and report it to NCMEC. Unfortunately, though, he now seems to view much of the world through that lens. A few years back he insisted that we could also tackle terrorism videos with a PhotoDNA-style system — despite the fact that such videos are not at all the same as the CSAM content PhotoDNA can identify, which carries strict liability under the law. Terrorism videos, on the other hand, are often not actually illegal, and can actually provide useful information, including evidence of war crimes.
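The distinction matters technically, not just legally: PhotoDNA-style tools work by fingerprinting an image and matching it against a database of previously identified material. Here is a toy sketch of that idea (my own illustration, not PhotoDNA itself, which is proprietary and uses a robust perceptual hash rather than the plain cryptographic hash below):

```python
# Toy illustration of the "match against a database of already-identified
# material" idea behind PhotoDNA-style tools. PhotoDNA itself is proprietary
# and uses a robust perceptual hash that survives resizing and re-encoding;
# this sketch uses a plain SHA-256 purely to show the structure of the idea.

import hashlib

# In a real system: hashes of previously identified illegal images,
# distributed by a clearinghouse such as NCMEC. This value is a placeholder.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash of the image."""
    return hashlib.sha256(image_bytes).hexdigest()


def is_known_match(image_bytes: bytes) -> bool:
    # The approach can only flag content that someone has already identified
    # and hashed. Novel content, or content whose legality depends on context
    # (say, footage documenting war crimes), is exactly what it cannot judge.
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES


if __name__ == "__main__":
    print(is_known_match(b"some image bytes"))  # False: not in the database
```

The sketch makes the limit obvious: this approach can only flag content that has already been identified and hashed, which is exactly why it doesn’t translate to content whose harm depends on context.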
Anyway, over the years, his views have tended towards what appears to be hating the entire internet because there are some people who use the internet for bad things. He’s become a vocal supporter of the EARN IT Act, despite its many, many problems. Indeed, he’s so committed to it that he appeared at a “Congressional briefing” on EARN IT organized by NCOSE, the group of religious fundamentalist prudes formerly known as “Morality in Media” who believe that all pornography should be illegal because nekked people scare them. NCOSE has been a driving force behind both FOSTA and EARN IT, and they celebrate how FOSTA has made life more difficult for sex workers. At some point, when you’re appearing on behalf of NCOSE, you probably want to examine some of the choices that got you there.
Last week, Farid took to the pages of Gizmodo to accuse me and professor Eric Goldman of “fearmongering” on AB 2273, the California “Age Appropriate Design Code” which he insists is a perfectly fine law that won’t cause any problems at all. California Governor Gavin Newsom is still expected to sign 2273 into law, perhaps sometime this week, even though that would be a huge mistake.
Before I get into some of the many problems with Farid’s article, I’ll just note that both Goldman and I have gone through the bill and explained in great detail the many problems with it, and even highlighted some fairly straightforward ways that the California legislature could have, but chose not to, limit many of its most problematic aspects (though probably not fix them, since the core of the bill makes it unfixable). Farid’s piece does not cite anything in the law (it literally quotes not a single line in the bill) and makes a bunch of blanket statements without much willingness to back them up (and where it does back up the statements, it does so badly). Instead, he accuses Goldman of not substantiating his arguments, which is hilarious.
The article starts off with his “evidence” that the internet is bad for kids.
Leaders have rightly taken notice of the growing mental health crisis among young people. Surgeon General Vivek Murthy has called out social media’s role in the crisis, and, earlier this year, President Biden addressed these concerns in his State of the Union address.
Of course, saying that “there is no longer any question” about the “nature of the harm to children” displays a profound sense of hubris and ignorance. There are in fact many, many questions about the actual harm. As we noted, just recently, there was a big effort to sort through all of the research on the “harms” associated with social media… and it basically came up empty. That’s not to say there’s no harm, because I don’t think anyone believes that. But the actual research and actual data (which Hany apparently doesn’t want to talk about) is incredibly inconclusive.
For each study claiming one thing, there are equally compelling studies claiming the opposite. To claim that “there is no longer any question” is, empirically, false. It is also fearmongering, the very thing Farid accuses me and Prof. Goldman of doing.
Just for fun, let’s look at each of the studies or stories Farid points to in the two paragraphs above, which open the article. The study about “body image issues” that was the centerpiece of the WSJ’s “Facebook Files” reporting left out an awful lot of context. The actual study was, fundamentally, an attempt by Meta to better understand these issues and look for ways to mitigate the negative (which, you know, seems like a good thing, and actually the kind of thing that the AADC would require). But, more importantly, the very survey that is highlighted around body image impact looked at 12 different issues regarding mental health, of which “body image” was just one, and notably it was the only issue out of 12 where teen girls said Instagram made them feel worse, not better (teen boys felt better, not worse, on all 12). The slide was headlined with “but, we make body image issues worse for 1 in 3 teen girls” because that was the only one of the categories where that was true.
And, notably, even as Farid claims that it’s “no longer a question” that Facebook “heightened body image issues,” the same survey found that Instagram made many teen girls feel better about body image. And, again, many more felt better on every other issue, including eating, loneliness, anxiety, and family stress. That doesn’t sound quite as damning when you put it that way.
The “TikTok challenges” thing is just stupid, and it’s kind of embarrassing. First of all, it’s been shown that a bunch of the moral panics about “TikTok challenges” have actually been about parents freaking out over challenges that didn’t exist. Even the few cases where someone doing a “TikTok challenge” has come to harm — including the one Farid links to above — involved challenges that kids have done for decades, including before the internet. To magically blame that on the internet is the height of ridiculousness.
I mean, here’s the CDC warning about it in 2008, where they note it goes back to at least 1995 (with some suggestion that it might actually go back decades earlier).
But, yeah, sure, it’s TikTok that’s to blame for it.
The link on the “sexualization of children on YouTube” appears to show that there have been pedophiles trying to game YouTube comments through a variety of sneaky moves, which is something YouTube has been trying to fight. But it’s not exactly an example of something that is widespread or mainstream.
As for the last two, fearmongering and moral panics by politicians are kind of standard and hardly proof of anything. Again, the actual data is conflicting and inconclusive. I’m almost surprised that Farid didn’t also toss in claims about suicide, but maybe even he has read the research suggesting you can’t actually blame youth suicide on social media.
So, already we’re off to a bad start, full of questionable fearmongering and moral-panic cherry-picking of data.
From there, he gives his full-throated support to the Age Appropriate Design Code, and notes that “nine-in-ten California voters” say they support the bill. But, again, that’s meaningless. I’m surprised it’s not 10-in-10. Because if you ask people “do you want the internet to be safe for children” most will say yes. But no one answering this survey actually understands what this bill does.
Then we get to his criticisms of myself and Professor Goldman:
In a piece published by Capitol Weekly on August 18, for example, Eric Goldman incorrectly claims that the AADC will require mandatory age verification on the internet. The following week, Mike Masnick made the bizarre and unsubstantiated claim in TechDirt that facial scans will be required to navigate to any website.
So, let’s deal with his false claim about me first. He says that I made the “bizarre and unsubstantiated claim” that facial scans will be required. But, that’s wrong. As anyone who actually read the article can see quite clearly, it’s what the trade association for age verification providers told me. The quote literally came from the very companies who provide age verification. So, the only “bizarre and unsubstantiated” claims here are from Farid.
As for Goldman’s claims, unlike Farid, Goldman actually supports them with an explanation using the language from the bill. AB 2273 flat out says that “a business that provides an online service, product, or feature likely to be accessed by children shall… estimate the age of child users with a reasonable level of certainty.” I’ve talked to probably a half a dozen actual privacy lawyers about this, and basically all of them say that they would recommend to clients who wish to abide by this that they invest in some sort of age verification technology. Because, otherwise, how would they show that they had achieved the “reasonable level of certainty” required by the law?
Anyone who’s ever paid attention to how lawsuits around these kinds of laws play out knows that this will lead to lawsuits in which the Attorney General of California will insist that websites have not complied unless they’ve implemented age verification technology. That’s because sites like Facebook will implement that, and the courts will note that’s a “best practice” and assume anyone doing less than that fails to abide by the law.
Even should that not happen, the prudent decision by any company will be to invest in such technology to avoid even having to make that argument in court.
Farid insists that sites can do age verification by much less intrusive means, including simple age “estimation.”
Age estimation can be done in a multitude of ways that are not invasive. In fact, businesses have been using age estimation for years – not to keep children safe – but rather for targeted marketing. The AADC will ensure that the age-estimation practices are the least invasive possible, will require that any personal information collected for the purposes of age estimation is not used for any other purpose, and, contrary to Goldman’s claim that age-authentication processes are generally privacy invasive, require that any collected information is deleted after its intended use.
Except the bill doesn’t just call for “age estimation”; it requires “a reasonable level of certainty,” which is not defined in the bill. And getting age estimation for targeted ads wrong means basically nothing to a company. They target an ad wrong, big deal. But under the AADC, a false estimation is now a legal liability. That, by itself, means that many sites will have strong incentives to move to true age verification, which is absolutely invasive.
And, also, not all sites engage in age estimation. Techdirt does not. I don’t want to know how old you are. I don’t care. But under this bill, I might need to.
Also, it’s absolutely hilarious that Farid, who has spent many years trashing all of these companies, insisting that they’re pure evil, that you should delete their apps, and insisting that they have “little incentive” to ever protect their users… thinks they can then be trusted to “delete” the age verification information after it’s been used for its “intended use.”
On that, he’s way more trusting of the tech companies than I would be.
Goldman also claims – without any substantiation – that these regulations will force online businesses to close their doors to children altogether. This argument is, at best, disingenuous, and at worst fear-mongering. The bill comes after negotiations with diverse stakeholders to ensure it is practically feasible and effective. None of the hundreds of California businesses engaged in negotiations are saying they fear having to close their doors. Where companies are not engaging in risky practices, the risks are minimal. The bill also includes a “right to cure” for businesses that are in substantial compliance with its provisions, therefore limiting liability for those seeking in good faith to protect children on their service.
I mean, a bunch of website owners I’ve spoken to over the last month have asked me whether or not they should close off access to children altogether (or just close off access to Californians), so it’s hardly an idle thought.
Also, the idea that there were “negotiations with diverse stakeholders” appears to be bullshit. Again, I keep talking to website owners who were not contacted, and the few I’ve spoken to who have been in contact with legislators who worked on this bill have told me that the legislators told them, in effect, to pound sand when they pointed out the flaws in the bill.
I mean, Prof. Goldman pointed out tons of flaws in the bill, and it appears that the legislators made zero effort to fix them or to engage with him. No one in the California legislature spoke to me about my concerns either.
Exactly who are these “hundreds of California businesses engaged in negotiations”? I went through the list of organizations that officially supported the bill, and there are not “hundreds” there. I mean, there is the guy who spread COVID disinfo. Is that who Farid is talking about? Or the organizations pushing moral panics about the internet? There are the California privacy lawyers. But where are the hundreds of businesses who are happy with the law?
We should celebrate the fact that California is home to the giants of the technology sector. This success, however, also comes with the responsibility to ensure that California-based companies act as responsible global citizens. The arguments in favor of AADC are clear and uncontroversial: we have a responsibility to keep our youngest citizens safe. Hyperbolic and alarmist claims to the contrary are simply unfounded and unhelpful.
The only one who has made “hyperbolic and alarmist” claims here is the dude who insists that “there is no longer any question” that the internet harms children. The only one who has made “hyperbolic and alarmist” claims is the guy who tells his students that recommendations are so evil you should stop using apps. The only one who is “hyperbolic and alarmist” is the guy who insists the things that age verification providers told me directly are “bizarre and unsubstantiated.”
Farid may have built an amazing tool in PhotoDNA, but it hardly makes him an expert on the law, policy, how websites work, or social science about the supposed harms of the internet.
In July of 1995, Time Magazine published one of its most regrettable stories ever. The cover just read “CYBERPORN” with the subhead reading: “EXCLUSIVE A new study shows how pervasive and wild it really is. Can we protect our kids—and free speech?” The author of that piece, Philip Elmer-DeWitt, later admitted that it was his “worst” story “by far.”
The “new study” was from a grad student named Marty Rimm, and… was not good. The methodology was quickly ripped to shreds. Wired basically put together an entire issue’s worth of stories debunking it. Mike Godwin tore the entire study apart noting that it was “so outrageously flawed and overreaching that you can’t miss the flaws even on a cursory first reading.” Professors Donna Hoffman and Thomas Novak absolutely destroyed Time Magazine for the reporting around the study. And Brock Meeks did an analysis of how Rimm and his colleagues were able to fool so many people. Meeks also discovered that Rimm “was recycling his survey data for use in a marketing how-to book called The Pornographer’s Handbook: How to Exploit Women, Dupe Men, & Make Lots of Money.” Eventually, Rimm was called “The Barnum of Cyberporn.”
And yet… he got his Time Magazine cover.
And, that cover resulted in a huge moral panic over porn online. And that huge moral panic over porn online helped give Senator James Exon the ammunition he needed to convince others in Congress to support his Communications Decency Act as a way to clean up all that smut from the internet. (You may recognize the name of the Communications Decency Act from “Section 230 of the Communications Decency Act” or just “Section 230,” but that was actually a different bill—the Internet Freedom and Family Empowerment Act—that was written as an alternative to Exon’s CDA, but because Congress is gonna Congress, the two bills were simply attached to one another and passed together.)
Senator Exon, apparently inspired by the Time Magazine story, began downloading and printing out all of the porn he found on the internet and put it in a binder—referred to as Exon’s little blue book—to show other Senators and convince them to pass his CDA bill to stop the porn that he believed was polluting the minds of children. He succeeded.
The following year, the Supreme Court threw out the entirety of Exon’s CDA (leaving just Section 230, which was the IFFEA) in the Reno v. ACLU decision. As Justice Stevens wrote in the majority decision:
In order to deny minors access to potentially harmful speech, the CDA effectively suppresses a large amount of speech that adults have a constitutional right to receive and to address to one another. That burden on adult speech is unacceptable if less restrictive alternatives would be at least as effective in achieving the legitimate purpose that the statute was enacted to serve.
He also wrote:
It is true that we have repeatedly recognized the governmental interest in protecting children from harmful materials. See Ginsberg, 390 U. S., at 639; Pacifica, 438 U. S., at 749. But that interest does not justify an unnecessarily broad suppression of speech addressed to adults. As we have explained, the Government may not “reduc[e] the adult population . . . to . . . only what is fit for children.” Denver, 518 U. S., at 759 (internal quotation marks omitted) (quoting Sable, 492 U. S., at 128). “[R]egardless of the strength of the government’s interest” in protecting children, “[t]he level of discourse reaching a mailbox simply cannot be limited to that which would be suitable for a sandbox.” Bolger v. Youngs Drug Products Corp., 463 U. S. 60, 74–75 (1983).
Stevens, in particular, called out as burdensome the idea that speech should be suppressed if a minor might somehow come across speech intended for adults.
Given the size of the potential audience for most messages, in the absence of a viable age verification process, the sender must be charged with knowing that one or more minors will likely view it. Knowledge that, for instance, one or more members of a 100-person chat group will be a minor—and therefore that it would be a crime to send the group an indecent message—would surely burden communication among adults.
He also noted that it would be “prohibitively expensive” for websites to verify the age of visitors, and the decision calls out undefined terms that can “cover large amounts of non-pornographic material with serious educational or other value.”
I raise all of this history to note that California’s recently passed bill, AB 2273, the Age Appropriate Design Code, has basically every one of those problems that the Supreme Court called out in the Reno decision. Here, let’s rewrite just some of the Reno decision for clarity. I did not need to change much at all:
In order to deny minors access to potentially harmful speech, the [AADC] effectively suppresses a large amount of speech that adults have a constitutional right to receive and to address to one another. That burden on adult speech is unacceptable if less restrictive alternatives would be at least as effective in achieving the legitimate purpose that the statute was enacted to serve.
Knowing that, for instance, some minors are likely to access a website—and therefore create liability for the website—would surely burden communication among adults.
The entire premise of AB 2273 is strikingly similar to the premise behind Exon’s CDA. Rather than a sketchy, easily debunked (but massively hyped up) research report from a grad student, we have a documentary from a British baroness/Hollywood filmmaker, which she insists proved to her that online services were dangerous for teens. The baroness now has made it her life’s mission to basically wipe out any adult part of the internet in the belief that it all needs to be safe for kids. Not based on any actual data, of course, but rather her strong feelings that the internet is bad. She’s produced a whole report about why spying on users to determine their age is a good thing. And she is a major backer of the bill in California.
She might not have a little blue book, and her laws may not carry the same level of criminal liability that Exon’s did, but the general concept is the same.
You start with a moral panic about “the kids online.” Note that data will generally be missing. You just need a few out-of-context anecdotes to drum up fear and concern. Then, you insist that “Silicon Valley is against you” despite the fact that Silicon Valley has almost entirely stayed quiet in fighting these bills, because none of them want the inevitable NY Times headline about how they’re fighting back against this nice baroness filmmaker who just wants to protect the children.
But the overall argument is the same. There is some content online that is inappropriate for children, and we cannot rest until that is all gone, and the entire internet is safe for kids — even if that wipes out all sorts of useful content and services for adults, and creates a ton of unintended consequences. But, I’m sure we’ll get headline after headline about how we’ve saved the children.
So, if Governor Gavin Newsom decides to go forward and sign the bill into law, think of just how much taxpayer money is going to get wasted in court, for the courts to just point to Reno v. ACLU and note that this law is way too burdensome and full of 1st Amendment problems.
If you thought cookie pop-ups were an annoying nuisance, just wait until you have to scan your face for some third party to “verify your age” after California’s new design code becomes law.
On Friday, I wrote about the companies and organizations most likely to benefit from California’s AB 2273, the “Age Appropriate Design Code” bill that the California legislature seems eager to pass (and which they refer to as the “Kid’s Code” even though the details show it will impact everyone, and not just kids). The bill seemed to be getting very little attention, but after a few of my posts started to go viral, the backers of the bill ramped up their smear campaigns and lies — including telling me that I’m not covered by it (and when I dug in and pointed out how I am… they stopped responding). But, even if somehow Techdirt is not covered (which, frankly, would be a relief), I can still be quite concerned about how it will impact everyone else.
But, the craziest of all things is that the “Age Verification Providers Association” decided to show up in the comments to defend themselves and insist that their members can do age verification in a privacy-protective manner. You just have to let them scan your face with facial recognition technology.
Really.
I’m not kidding:
First, we want to reassure you and your readers generally about anonymity. The purpose of the online age verification sector is to allow users to prove their age to a website, WITHOUT disclosing their identity.
This can be achieved in a number of ways, but primarily through the use of independent, third-party AV providers who do not retain centrally any of your personal data. Once they have established your age or age-range, they have no need (and under EU GDPR law, therefore no legal basis) to retain your personal data.
In fact, the AV provider may not have needed to access your personal data at all. Age estimation based on facial analysis, for example, could take place on your own device, as can reading and validating your physical ID.
First, I want to call out that they said “may not” need access to your personal data. Which is very different from “does not” or “will not.”
Also, they insist it’s not “facial recognition” software because it’s not matching you up to a database of your identity… it’s just using “AI” to estimate (read: guess) your age. What could possibly go wrong?
But, more to the point, they’re basically saying “don’t worry, you’ll just need to scan your face or ID for every website your visit.” Normalizing facial scans does not seem particularly privacy protecting or reasonable. It seems pretty dystopian, frankly.
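To be fair to the argument they’re making, here is a rough sketch of the kind of flow the AVPA comment describes, where a third-party provider attests to an age range and the website only ever sees that attestation, not your identity. This is purely my own illustration; no real provider, API, or signing scheme is being referenced:

```python
# My own rough sketch of the flow the AVPA comment describes: a third-party
# provider attests to an age range, and the website only ever sees that
# attestation, never the user's face or identity. No real provider, API, or
# signing scheme is referenced here; everything below is illustrative.

import hashlib
import hmac
import json
import time

PROVIDER_SECRET = b"demo-key-held-by-the-provider"  # placeholder secret


def provider_issues_token(age_range: str) -> dict:
    """The AV provider checks age however it checks it (face scan, ID, etc.),
    then issues a signed claim. Per the comment, it then has no ongoing
    need to retain the underlying personal data."""
    body = {"age_range": age_range, "issued_at": int(time.time())}
    message = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(PROVIDER_SECRET, message, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}


def site_verifies_token(token: dict) -> bool:
    """The website learns only 'this visitor is in this age range'.
    (In this toy version the site shares the secret; a real scheme would use
    public-key signatures so sites never hold the provider's key.)"""
    message = json.dumps(token["body"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])


if __name__ == "__main__":
    token = provider_issues_token("18+")
    print(site_verifies_token(token), token["body"]["age_range"])  # True 18+
```

Even if the protocol works exactly like that, the privacy promise still rests entirely on the provider actually discarding the scan after the check, and on the scan happening at all, which is the part I don’t find reassuring.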
We’ve already just gone through this nonsense earlier this year when the IRS was demanding facial scans, and it later came out that — contrary to claims about privacy and the high quality of the facial verification technology — the technology was incredibly unreliable and the vendor in question’s public claims about the privacy tools were bogus.
Honestly, this whole thing is bizarre. The idea that we need facial scans to surf the internet is just crazy, and I don’t see how that benefits kids at all. (Also, does this mean you can only surf the web on PCs that have webcams, now? Do public libraries and internet cafes have to equip every machine with a camera?)
This morning, they’re in the comments again, trying (and failing) to defend this argument that it’s nothing to worry about. When people point out that such a system can be gamed, they have an answer… “we’ll just make you take a video of yourself saying phrases, too.” I mean WHAT?
For some higher risk use cases, the age check may involve a liveness test where the user must take several selfie photos or record a short video saying phrases requested by the provider. Passive liveness technology has further reduced the effort required by the user – do look into that.
They also push back against the claim that you’d have to scan all the time. If you’re “low risk,” according to them, you might only have to have your face scanned every three months. What a bargain.
How often you need to prove it is still the same user who did the check is a matter for the services themselves and their regulators. Some low risk uses might only check every three months – higher risk situations might double check it is still you each time you make a purchase.
Also, they’re saying that if Techdirt is going to publish “content that is potentially harmful to kids” (as we’ve described, the standard “harmful to kids” is never clearly defined in the bill, and could easily apply to our stories on civil rights abuses, among other things), these age verification providers have a solution: just redesign Techdirt to put those stories in the “adult section.”
Unless techdirt carries content that is potentially harmful to kids, it woud not need to apply age assurance. If some content is potentially harmful, this could be put in a sub-section of the site where adult users who wish to access it would use an age check – but probabably the same one they did 3 weeks ago when downloading a new 18 rated video game.
All of this is nonsense.
Once again, everything about this bill assumes everyone providing internet services is inherently up to no good, and that every kid who uses the internet is damaged by it. That’s not even remotely true. There are ways to deal with the actual problems without ruining the internet for everyone. But that’s not the approach California is taking.