Utah, as a state, has a pretty long history of terrible policy proposals regarding the internet. And now it’s getting dumber. On Monday, the state’s Attorney General Sean Reyes and Governor Spencer Cox hosted a very weird press conference, billed as an announcement that Utah is suing all the social media companies for not “protecting kids.” Which is already pretty ridiculous. Even more ridiculous: Governor Cox’s office eagerly announced that people should watch the livestream… on social media.
More ridiculous still: I kept expecting them to announce the details of the actual lawsuit, but it turns out they haven’t even hired lawyers, let alone planned out the lawsuit. The official announcement notes that they’re putting out a request for proposals to find the most ridiculous law firm possible to file the suit.
Specifics of any legal action are not being released at this time. A Request for Proposal (RFP) document will be submitted this week to prepare for hiring outside counsel to assist with any litigation that could soon occur.
Can I reply to the RFP with a document that just says: “this is not how any of this works, and it makes Utah look like a clueless, anti-tech, anti-innovation backwater?” Cox has actually been surprisingly good on internet issues in the past, and seemed like he understood this stuff, but this kind of nonsense grandstanding makes him look really bad.
Again, the actual evidence regarding social media and children is at best inconclusive, and more likely shows that most kids get real value out of it, both as a way to keep in touch with more people and as a way to access valuable, useful information and communities. A big review of basically all of the research on the “harm” of social media to kids found… no evidence to support the narrative.
And looking at the actual research, we see the same thing again and again. Oxford did a massive study, looking at over 12,000 kids, and found that social media had effectively zero impact on the health and well-being of children. A few years ago, a review of multiple studies noted that the emerging consensus view was that social media doesn’t harm kids.
Just recently, we covered a pretty massive Pew Research Center study that surveyed over 1,300 teenagers and found that not only was social media not causing harm, it appeared to be providing real value to many of them.
And, whether or not you trust Facebook’s own internal research, the leaked research the company did on whether Facebook and Instagram made kids feel worse about themselves found that, on nearly all issues, it actually made them feel better about themselves.
So, just starting out, the entire premise of this lawsuit seems to be based on a moral panic myth that is not supported by any actual evidence, which seems like a pretty dumb reason to file a lawsuit.
The reasons given in Utah’s announcement are the usual moral panic list of things that basically all teenagers face, and faced long before the internet existed:
“Depression, eating disorders, suicide ideation, cutting, addictions, mass violence, cyberbullying, and other dangers among young people may be initiated or amplified by negative influences and traumatic experiences online through social media.
Except it’s one thing to say that people using social media experience these things, because basically everyone is on social media these days. The real question is whether social media is somehow causing these things, and again, pretty much all of the actual studies say the answer is “no.” And expecting anyone to sort out which harms are caused by social media, let alone in a way that creates legal liability, is ridiculous.
Also, many of these topics are way more complex than the simple analyses suggest. We’ve talked before about the studies on eating disorders, for example. Multiple studies have shown that when social media sites tried to crack down on online discussions about eating disorders, it actually made the problem worse, not better. That’s because the eating disorders aren’t caused by social media. The kids are dealing with them no matter what. So when the content is banned, kids find ways around the bans. They always do. And, in doing so, the bans made it more difficult for others to monitor those discussions, and they often destroyed more open communities where people were helping those with eating disorders get the help they needed. So demands that websites “crack down” on such content actually make things worse, doing more harm to the kids than the websites were doing in the first place.
There’s evidence to suggest the same is true of suicide discussions as well.
All that is to say, this is complicated stuff, and a bunch of grandstanding politicians ignoring what the actual research says in order to generate misleading headlines for themselves are not helping. At all.
And that’s not even getting into what any possible lawsuit could claim. What legal violation is there here? The answer is that there’s none. That doesn’t mean AG Reyes can’t hassle and annoy companies. But there’s no actual legal, factual, or moral reason to do any of this. There are only bad reasons, based around Reyes and Cox wanting headlines playing off the moral panics of today.
For years now we’ve written about the problems with the UK’s Online Safety Bill, the latest in a long line of attempts to “Disneyfy” the internet. While the bill had faced some hurdles along the way, made worse by the ever-rotating Prime Minister position last year, there was talk last week that some more hardline conservatives wanted to jack up the criminal penalties in the bill for social media sites that don’t magically protect the children. And, while new Prime Minister Rishi Sunak had pushed back against this, in the end, he caved.
Michelle Donelan, the Culture Secretary, has accepted changes to the Online Safety Bill that will make senior managers at tech firms criminally liable for persistent breaches of their duty of care to children.
One of the worst aspects of the bill in earlier forms — possible punishment for legal speech, if deemed harmful — remains out of the bill, but that’s little comfort based on these new criminal additions.
Tech platforms will also have a duty of care to keep children safe online. This will involve preventing children from accessing harmful content and ensuring that age limits on social media platforms – the minimum age is typically 13 – are enforced. Platforms will have to explain in their terms of service how they enforce these age limits and what technology they use to police them.
In relation to both of these duties, tech firms will have to carry out risk assessments detailing the threats their services might pose in terms of illegal content and keeping children safe. They will then have to explain how they will mitigate those threats – for example through human moderators or using artificial intelligence tools – in a process that will be overseen by Ofcom, the communications regulator.
That is, this is more or less California’s terrible law (which we were told was modeled on existing UK law, which was clearly not true if they’re now implementing this new law). Anyhow, the criminal liability part is absolutely ridiculous:
Even before the government conceded to backbench rebels on Monday, tech executives faced the threat of a two-year jail sentence under the legislation, if they hinder an Ofcom investigation or a request for information.
Now, they also face the threat of a two-year jail sentence if they persistently ignore Ofcom enforcement notices telling them they have breached their duty of care to children. In the face of tech company protests about criminal liability, the government is stressing that the new offence will not criminalise executives who have “acted in good faith to comply in a proportionate way” with their duties.
It’s one thing to say we won’t criminalize you for acting in good faith, but it’s another thing to have your freedom on the docket and have to litigate that you acted in good faith. And, these are government officials we’re talking about. They’re not exactly known for acting in “good faith” when demonizing tech companies.
Again, there are so many problems with the setup here that it’s difficult to know where to start. First off, as we’ve discussed, the narrative about the internet being harmful to children appears to be massively overstated, and there’s actual evidence that it’s helpful to far more children than it harms. That doesn’t mean we shouldn’t look to reduce the harms that do impact some (of course we should!), but these bills are often written in a way that assumes all harm that comes to children is from social media and that social media has no redeeming qualities. Both of those things are false.
Second, if you have to build special protections “for the children,” you’re almost certainly putting kids at even greater risk, because the whole framework forces websites to do age verification, which is a highly intrusive, privacy-diminishing effort that is harmful to children in and of itself (by the bill’s own logic, its authors took no “duty of care” to make sure it actually protects children).
So, now, all children will be tracked and monitored online, exposing their private information to potential breach and, even worse, teaching them that constant surveillance is the norm.
As for the companies, the risk of not just huge fines, but now criminal liability will mean that all of the incentives are to over-block, and not allow anything even remotely controversial. This is not how you teach children to be good, contributing members of society. It’s how you make it so children are kept in the dark about how the world works, how to make difficult choices, and how to respond when they’re actually put in a dangerous scenario.
It is, again, exactly how the Great Firewall of China initially worked: not by telling service providers what to block, but by making it known that any “mistakes” would lead to very strict punishment. The end result is massive over-blocking. And that means all sorts of useful content will get buried. Because if you’re an executive at one of these companies and you’re facing a literal prison sentence if you make the wrong choices, your focus is going to be on being super aggressive in blocking content, even if the actual evidence suggests doing so creates more harm than good.
Just as an example, there are plenty of stories about content being shared about eating disorders. Almost certainly, under the Online Safety Bill, most sites will work to hide all of that content. The problem is that this has been tried… and it backfires. As we covered in a case study, when sites like Instagram did this, kids figured out code language to talk about it all anyway, and (more importantly), it was found that having these groups more open allowed people to better come in and help kids recognize that they had a problem, and to get them help. Simply hiding all of the content doesn’t do that.
Once again, this could mean that kids will be put in greater danger, all because a bunch of prudish, stuffy politicians have no idea how people actually act, or how the internet actually works.
We’ve written a number of posts about the problems of KOSA, the Kids Online Safety Act from Senators Richard Blumenthal and Marsha Blackburn (both of whom have fairly long and detailed histories of pushing anti-internet legislation). As with many “protect the children” or “but think of the children!” kinds of legislation, KOSA is built around moral panics and nonsense, blaming the internet any time anything bad happens, and insisting that if only this bill were in place, somehow, magically, internet companies would stop bad stuff from happening. It’s fantasyland thinking, and we need to stop electing politicians who live in fantasyland.
KOSA itself has not had any serious debate in Congress, nor been voted out of committee. And yet Blumenthal admitted he was actively seeking to get it included in one of the “must pass” year-end omnibus bills. When pressed about this, we heard from Senate staffers that they hadn’t heard much “opposition” to the bill, so they figured there was no reason to stop it from moving forward. Of course, that leaves out the reality: the opposition wasn’t that loud because there hadn’t been any real public opportunity to debate the bill, and since until a few weeks ago it didn’t appear to be moving forward, everyone was spending their time trying to fend off other awful bills.
But, if supporters insist there’s no opposition, well, now they need to contend with this. A coalition of over 90 organizations sent a letter to Congress this morning explaining why KOSA is not just half-baked and not ready for prime time, but so poorly thought out and drafted that it will be actively harmful to many children.
Notably, signatories on the letter — which include our own Copia Institute — also include the ACLU, EFF, the American Library Association and many more. It also includes many organizations who do tremendous work actually fighting to protect children, rather than pushing for showboating legislation that pretends to help children while actually doing tremendous harm.
I actually think the letter pulls some punches and doesn’t go far enough in explaining just how dangerous KOSA can be for kids, but it does include some hints of how bad it can be. For example, it mandates parental controls, which may be reasonable in some circumstances for younger kids, but KOSA covers teenagers as well, where this becomes a lot more problematic:
While parental control tools can be important safeguards for helping young children learn to navigate the Internet, KOSA would cover older minors as well, and would have the practical effect of enabling parental surveillance of 15- and 16-year-olds. Older minors have their own independent rights to privacy and access to information, and not every parent-child dynamic is healthy or constructive. KOSA risks subjecting teens who are experiencing domestic violence and parental abuse to additional forms of digital surveillance and control that could prevent these vulnerable youth from reaching out for help or support. And by creating strong incentives to filter and enable parental control over the content minors can access, KOSA could also jeopardize young people’s access to end-to-end encrypted technologies, which they depend on to access resources related to mental health and to keep their data safe from bad actors.
The letter further highlights how the vague “duty of care” standard in the bill will be read to require filters for most online services, but we all know how filters work out in practice. And it’s not good:
KOSA establishes a burdensome, vague “duty of care” to prevent harms to minors for a broad range of online services that are reasonably likely to be used by a person under the age of 17. While KOSA’s aims of preventing harassment, exploitation, and mental health trauma for minors are laudable, the legislation is unfortunately likely to have damaging unintended consequences for young people. KOSA would require online services to “prevent” a set of harms to minors, which is effectively an instruction to employ broad content filtering to limit minors’ access to certain online content. Content filtering is notoriously imprecise; filtering used by schools and libraries in response to the Children’s Internet Protection Act has curtailed access to critical information such as sex education or resources for LGBTQ+ youth. Online services would face substantial pressure to over-moderate, including from state Attorneys General seeking to make political points about what kind of information is appropriate for young people. At a time when books with LGBTQ+ themes are being banned from school libraries and people providing healthcare to trans children are being falsely accused of “grooming,” KOSA would cut off another vital avenue of access to information for vulnerable youth.
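To make the letter’s point about filter imprecision concrete, here’s a deliberately crude sketch of the kind of keyword filter a “prevent harms” mandate pushes services toward. Everything in it (the blocklist, the sample pages) is invented for illustration; real filters are more sophisticated, but the core failure mode is the same: keyword matching has no sense of context.

```python
# A deliberately crude keyword filter, sketching the over-blocking problem
# the letter describes. The blocklist and sample pages are invented for
# illustration; real filters are fancier, but the failure mode is the same.

BLOCKLIST = {"sex", "suicide", "drugs"}

def is_blocked(page_text: str) -> bool:
    """Block any page containing a listed term, regardless of context."""
    words = {w.strip(".,!?:").lower() for w in page_text.split()}
    return not BLOCKLIST.isdisjoint(words)

pages = [
    "Comprehensive sex education resources for teens",   # health information
    "Suicide prevention hotline: you are not alone",     # crisis help
    "Say no to drugs: a guide for parents",              # prevention advice
]

for page in pages:
    print(is_blocked(page), "-", page)

# All three print True: the filter cannot distinguish prevention and
# support resources from the harms it is supposed to prevent.
```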
And we haven’t even gotten to the normalizing-surveillance and diminishing-privacy aspects of KOSA:
Moreover, KOSA would counter-intuitively encourage platforms to collect more personal information about all users. KOSA would require platforms “reasonably likely to be used” by anyone under the age of 17—in practice, virtually all online services—to place some stringent limits on minors’ use of their service, including restricting the ability of other users to find a minor’s account and limiting features such as notifications that could increase the minor’s use of the service. However sensible these features might be for young children, they would also fundamentally undermine the utility of messaging apps, social media, dating apps, and other communications services used by adults. Service providers will thus face strong incentives to employ age verification techniques to distinguish adult from minor users, in order to apply these strict limits only to young people’s accounts. Age verification may require users to provide platforms with personally identifiable information such as date of birth and government-issued identification documents, which can threaten users’ privacy, including through the risk of data breaches, and chill their willingness to access sensitive information online because they cannot do so anonymously. Rather than age-gating privacy settings and safety tools to apply only to minors, Congress should focus on ensuring that all users, regardless of age, benefit from strong privacy protections by passing comprehensive privacy legislation.
There’s even more in the letter, and Congress can no longer say there’s no opposition to the bill. At the very least, sponsors of the bill (hey, Senator Blumenthal!) should be forced to respond to these many issues, rather than just spouting silly platitudes about how we “must protect the children” when the bill will do the exact opposite.
Over the last week or so, I keep hearing about a big push among activists and lawmakers to try to get the Kids Online Safety Act (KOSA) into the year-end “must pass” omnibus bill. Earlier this week, one of the main parents pushing for the bill went on Jake Tapper’s show on CNN and stumped for it. And the latest report from Axios confirms that lawmakers are looking to include it in the lame-duck omnibus, or possibly the NDAA (despite it having absolutely nothing to do with defense spending).
The likeliest path forward for the bills is for them to be added to the year-end defense or spending bill. “We’re at a point where a combination of the victims, and the technology, make it absolutely mandatory we move forward,” Sen. Richard Blumenthal (D-Conn.), a sponsor of the Kids Online Safety Act, told reporters on Capitol Hill Tuesday.
“I think it’s going to move,” Stephen Balkam, CEO of the Family Online Safety Institute, said this week at an event in Washington. “I think it could actually go — it’s one of those very rare pieces of legislation that is getting bipartisan support.”
Anyway, let’s be clear about all this: the people pushing for KOSA are legitimately worried about the safety of kids online. And many of those involved have stories of real trauma. But their stumping for KOSA is misguided. It will not help protect children. It will make things much more dangerous for children. It’s an extraordinarily dangerous bill for kids (and adults).
Back in February, I detailed just how dangerous this bill is, in that it tries to deal with “protecting children” by pushing websites to more actively surveil everyone. Many of the people pushing for the bill, including the one who went on CNN this week, talk about children who have died by suicide. Which is, obviously, quite tragic. But all of it seems to assume (falsely) that suicide prevention is simply a matter of internet companies somehow… spying on their kids more. It’s not that simple. Indeed, the greater surveillance has way more consequences for tons of other people, including kids who also need to learn the value of privacy.
If you dig into the language of KOSA, you quickly realize how problematic it would be in practice. It uses extremely vague and fuzzy language that will create dangerous problems. In earlier versions of the bill, people quickly pointed out that some of the surveillance provisions would force companies to reveal information about kids to their parents — potentially including things that might “out” LGBTQ kids to their parents. That should be seen as problematic for obvious reasons. The bill was amended to effectively say “but don’t do that,” but still leaves things vague enough that companies are caught in an impossible position.
Now the end result is basically “don’t have anyone on your platform end up doing something bad.” But, how does that work in practice?
Advocates for the bill keep saying “it just imposes a ‘duty of care'” on platforms. But that misunderstands basically everything about everything. A “duty of care” is one of those things that sounds good to people who have no idea how anything works. As we’ve noted, a duty of care is the “friendly sounding way” to threaten free speech and innovation. That’s because whether or not you met your obligations is determined after something bad happened. And it will involve a long and costly legal battle to determine (in heightened circumstances, often involving a horrible incident) whether or not a website could have magically prevented a bad thing from happening. But, of course, in that context, the bad thing will have already happened, making it difficult to separate the website from the bad thing, and making it impossible to see whether or not the “bad thing” could have been reasonably foreseen.
But, at the very least, it means that any time anything bad happens that is even remotely connected to a website, the website gets sued and has to convince a court that it took appropriate measures. What that means in practice is that websites get ridiculously restrictive to avoid any possible bad thing from happening — in the process limiting tons of good stuff as well.
The whole bill is designed to do two very silly things: make it nearly impossible for websites to offer something new and, even worse, offload blame for any bad thing onto those websites. It especially seeks to remove blame from parents for failing to do their jobs as parents. It is the ultimate “let’s just blame the internet for anything bad” bill.
As I noted a couple months ago, the internet is not Disneyland. We shouldn’t want to make it Disneyland, because if we do, we lose a lot. Bad things happen in the world. And sometimes there’s nothing to blame for the bad thing happening.
I don’t talk about it much, but in high school a friend died by suicide. It’s not worth getting into the details, but the suicide was done in a manner designed to make someone else feel terrible as well (and cast a pall of “blame” on that person — which was traumatic for all involved). But, one thing that was an important lesson is that if you spend all your time looking to blame people for someone’s death by suicide, you’re not going to do much good, and, in fact, it creates this unfortunate scenario where it encourages others to consider suicide as a way to “get back” at others. That’s not helpful at all. For anyone.
Unfortunately, people do die by suicide. And we should be focusing more effort on helping people get through difficult times, and making sure that therapy and counseling are available to all who need them. But trying to retroactively hold social media companies to account for those cases, because they enabled people to talk to each other, throws out so much that is useful and good — including all of the people who were helped away from suicidal ideation by finding a community or a tribe that better understood them. Or those who found resources to help them through those difficult times.
Under a bill like KOSA all of that becomes more difficult, while actively encouraging greater surveillance and less privacy. It’s not a good approach.
And it’s especially ridiculous for such a bill to be rushed through via a must-pass bill, rather than having the kind of debate and discussion that such a serious issue not only deserves, but requires.
But, of course, almost no one wants to speak out against KOSA, because the media and politicians trot out parents who went through a truly traumatic experience, and no one wants to be seen as the person who is said to be standing in the way of that. But the simple fact is that KOSA will not magically prevent suicides. It might actually lead to more. And it will do many other damaging things in the meantime, including ramping up surveillance, limiting the ability of websites to innovate, and making it much more difficult for young people to find and connect with actual support and friends.
Disneyland can be a fun experience for kids (and potentially a frustrating one for parents), but it’s a very controlled environment in which everything is set up to bend over backwards to be welcoming to children. And that’s great for what it is, but the world would kinda suck if everything was Disneyland. I mean, some countries have tried that, and it’s… not great, especially if you believe in basic freedoms.
Here’s the thing: Disneyland’s limits are great for a place to visit occasionally. As a vacation. But it’s not the real world. And we shouldn’t be seeking to remake the real world into Disneyland. I think it’s especially true that most parents wouldn’t want to raise their kids in Disneyland and then send them out into the real world at 18, assuming they’ll be fully equipped to deal with it.
Yet that’s exactly what some busybody politicians (with support of the media) have been trying to do. They want to pass new laws that effectively demand that the internet act like Disneyland. Everything must be safe for kids. That means much greater surveillance and much less freedom… but “safe for kids.”
Except it’s not. Disneyland is fantasyland. It’s not real life. And we don’t train kids how to be thoughtful participants in society if we raise them in Disneyland.
I had a discussion recently about these bills — things like California’s Age Appropriate Design Code or Congress’s Kids Online Safety Act — where there are legitimate concerns about kids being safe online, but it seems like we ought to think about the digital world the same way we think about the real world. Parents have a role not just in limiting where kids can go when they’re young, but also in giving kids the tools, as they grow, to handle various situations.
Sometimes when I talk about this, people think I’m suggesting that parents should hover like a helicopter over their children when they’re online, or spy on everything they do online, but that’s not the answer either. That’s normalizing surveillance, and teaching kids that you don’t trust them. Instead, parents (and school teachers) can help kids learn how to use the internet appropriately for their age. That means giving them guidance on where it’s safe to go, but also teaching them that they may sometimes come across unsafe areas online, with content that is not meant for them, and how to deal with that appropriately.
We already do this in the outside world, where we try to teach children how to handle various situations: when to be careful around strangers, when to seek help from trustworthy adults, and, of course, when it is and isn’t appropriate for kids to be somewhere with or without supervision. That’s called being a parent.
What we don’t do is insist that we need to turn every shopping center into Disneyland. We rely on parents to teach kids how to deal with the real world at an age when they, the parents, decide what’s appropriate.
We can (and should) do this with the internet as well. Let kids know that not everything online is appropriate for them, and teach them how to alert parents or other trusted adults if things are clearly not right.
Nothing is perfect, obviously, and everyone can point to this or that horror story, but on the whole this system has worked well in the outside world, and it can and should work well on the internet. We don’t need to turn the internet into Disneyland. We can and should teach our kids how to appropriately use the internet, including how to deal with it when they come across questionable situations. That’s actually training kids how to become proper adults and how to deal with things, rather than raising them in Disneyland and expecting that it teaches them enough to handle the outside world on their own.
When a proposed new law is sold as “protecting kids online,” regulators and commenters often accept the sponsors’ claims uncritically (because… kids). This is unfortunate because those bills can harbor ill-advised policy ideas. The California Age-Appropriate Design Code (AADC / AB2273, just signed by Gov. Newsom) is an example of such a bill. Despite its purported goal of helping children, the AADC delivers a “hidden” payload of several radical policy ideas that sailed through the legislature without proper scrutiny. Given the bill’s highly experimental nature, there’s a high chance it won’t work the way its supporters think–with potentially significant detrimental consequences for all of us, including the California children that the bill purports to protect.
In no particular order, here are five radical policy ideas baked into the AADC:
Permissioned innovation. American business regulation generally encourages “permissionless” innovation. The idea is that society benefits from more, and better, innovation if innovators don’t need the government’s approval.
The AADC turns this concept on its head. It requires businesses to prepare “impact assessments” before launching new features that kids are likely to access. Those impact assessments will be freely available to government enforcers at their request, which means the regulators and judges are the real audience for those impact assessments. As a practical matter, given the litigation risks associated with the impact assessments, a business’ lawyers will control those processes–with associated delays, expenses, and prioritization of risk management instead of improving consumer experiences.
While the impact assessments don’t expressly require government permission to proceed, they have some of the same consequences. They put the government enforcer’s concerns squarely in the room during the innovation development (usually as voiced by the lawyers), they encourage self-censorship by the business if they aren’t confident that their decisions will please the enforcers, and they force businesses to make the cost-benefit calculus before the business has gathered any market feedback through beta or A/B tests. Obviously, these hurdles will suppress innovations of all types, not just those that might affect children. Alternatively, businesses will simply route around this by ensuring their features aren’t available at all to children–one of several ways the AADC will shrink the Internet for California children.
Also, to the extent that businesses are self-censoring their speech (and my position is that all online “features” are “speech”) because of the regulatory intervention, then permissioned innovation raises serious First Amendment concerns.
Disempowering parents. A foundational principle among regulators is that parents know their children best, so most child protection laws center on parental decision-making (e.g., COPPA). The AADC turns that principle on its head and takes parents completely out of the equation. Even if parents know their children best, per the AADC, parents have no say at all in the interaction between a business and their child. In other words, despite the imbalance in expertise, the law obligates businesses, not parents, to figure out what’s in the best interest of children. Ironically, the bill cites evidence that “In 2019, 81 percent of voters said they wanted to prohibit companies from collecting personal information about children without parental consent” (emphasis added), but then the bill drafters ignored this evidence and stripped out the parental consent piece that voters assumed. It’s a radical policy for the AADC to essentially tell parents “tough luck” if they don’t like the Internet that the government is forcing on their children.
Fiduciary obligations to a mass audience. The bill requires businesses to prioritize the best interests of children above all else. For example: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” Although the AADC doesn’t use the term “fiduciary” obligations, that’s functionally what the law creates. However, fiduciary obligations are typically imposed in 1:1 circumstances, like a lawyer representing a client, where the professional can carefully consider and advise about an individual’s unique needs. It’s a radical move to impose fiduciary obligations towards millions of individuals simultaneously, where there is no individualized consideration at all.
The problems with this approach should be immediately apparent. The law treats children as if they all have the same needs and face the same risks, but “children” are too heterogeneous to support such stereotyping. Most obviously, the law lumps together 17-year-olds and 2-year-olds, even though their risks and needs are completely different. More generally, consumer subpopulations often have conflicting needs. For example, it’s been repeatedly shown that some social media features provide net benefits to a majority or plurality of users, while other subcommunities of minors don’t benefit from those features. Now what? The business is supposed to prioritize the best interests of “children,” but the presence of some children who don’t benefit indicates that the business has violated its fiduciary obligation towards that subpopulation, and that creates unmanageable legal risk–despite the many other children who would benefit. Effectively, if businesses owe fiduciary obligations to diverse populations with conflicting needs, it’s impossible to serve those populations at all. To avoid this paralyzing effect, services will screen out children entirely.
Normalizing face scans. Privacy advocates actively combat the proliferation of face scanning because of the potentially lifelong privacy and security risks created by those scans (i.e., you can’t change your face if the scan is misused or stolen). Counterproductively, this law threatens to make face scans a routine and everyday occurrence. Every time you go to a new site, you may have to scan your face–even at services you don’t yet know if you can trust. What are the long-term privacy and security implications of routinized and widespread face scanning? What does that do to people’s long-term privacy expectations (especially kids, who will infer that face scans are just what you do)? Can governments use the face scanning infrastructure to advance interests that aren’t in the interests of their constituents? It’s radical to motivate businesses to turn face scanning of children into a routine activity–especially in a privacy bill.
(Speaking of which–I’ve been baffled by the low-key response of the privacy community to the AADC. Many of their efforts to protect consumer privacy won’t likely matter in the long run if face scans are routine).
Frictioned Internet navigation. The Internet thrives in part because of the “seamless” nature of navigating between unrelated services. Consumers are so conditioned to expect frictionless navigation that they respond poorly when modest barriers are erected. The Ninth Circuit just explained:
The time it takes for a site to load, sometimes referred to as a site’s “latency,” is critical to a website’s success. For one, swift loading is essential to getting users in the door…Swift loading is also crucial to keeping potential site visitors engaged. Research shows that sites lose up to 10% of potential visitors for every additional second a site takes to load, and that 53% of visitors will simply navigate away from a page that takes longer than three seconds to load. Even tiny differences in load time can matter. Amazon recently found that every 100 milliseconds of latency cost it 1% in sales.
After the AADC, before you can go to a new site, you will have to do either face scanning or upload age-authenticating documents. This adds many seconds or minutes to the navigation process, plus there are the overall inhibiting effects of concerns about privacy and security. How will these barriers change people’s web “surfing”? I expect it will fundamentally change people’s willingness to click on links to new services. That will benefit incumbents–and hurt new market entrants, who have to convince users to do age assurance before users trust them. It’s radical for the legislature to make such a profound and structural change to how people use and enjoy an essential resource like the Internet.
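Just to put rough numbers on that, here’s a back-of-the-envelope sketch using the Ninth Circuit’s figures quoted above. Treating the “up to 10% of potential visitors per additional second” as a compounding loss is my own simplifying assumption, and the delay values are guesses, but even crude math shows the scale of the problem.

```python
# Back-of-the-envelope cost of an added age-assurance step, using the
# figures the Ninth Circuit cites above. Treating "up to 10% of potential
# visitors lost per additional second" as compounding is my simplifying
# assumption, and the delay values are guesses.

def visitors_remaining(initial: int, extra_seconds: float,
                       loss_per_second: float = 0.10) -> float:
    """Visitors left after an added delay, compounding the per-second loss."""
    return initial * (1 - loss_per_second) ** extra_seconds

# A face scan or document upload plausibly adds tens of seconds on a
# first visit to an unfamiliar site.
for delay in (1, 3, 10, 30):
    remaining = visitors_remaining(1000, delay)
    print(f"{delay:>2}s added -> ~{remaining:.0f} of 1,000 first-time visitors remain")

# 30 extra seconds leaves roughly 42 of 1,000 -- exactly the kind of
# friction that favors trusted incumbents over new market entrants.
```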
A final irony. All new laws are essentially policy experiments, and the AADC is no exception. But to be clear, the AADC is expressly conducting these experiments on children. So what diligence did the legislature do to ensure the “best interest of children,” just like it expects businesses to do post-AADC? Did the legislature do its own impact assessment like it expects businesses to do? Nope. Instead, the AADC deploys multiple radical policy experiments without proper diligence and basically hopes for the best for children. Isn’t it ironic?
I’ll end with a shoutout to the legislators who voted for this bill: if you didn’t realize how the bill was packed with radical policy ideas when you voted yes, did you even do your job?
We’ve already highlighted the many, many problems with the Online Safety Bill in the UK, which will be a massive attack on free speech, in that (among many other problems) it seeks to force websites to remove content even if it’s “lawful,” meaning that they will massively overcensor. As I’ve pointed out, this is exactly how the original Great Firewall of China began, with instructions from the government to remove “harmful” content or face consequences. The reaction, of course, was to remove anything that the government might consider to be harmful.
It should be no surprise, then, that some of the people backing the bill have literally cited China as an example of how this regulation can work.
Hey, UK policymakers, when you’re using China’s censorship regime as a positive example of what you’re trying to do, perhaps you’ve gone just a bit off track?
Anyway, the Online Safety Bill was briefly put on hold, following the Brexit of Boris Johnson, but it was quite clear that the leading candidate to replace him, Liz Truss, also supported this nonsense. While some of the others vying for the Prime Minister slot were much less welcoming of the Online Safety Bill, it was Truss who won out in the end (for now, at least), and while everyone’s distracted by the fact that someone else died in the UK, Truss is ready to move forward with the Online Safety Bill again.
“We will be proceeding with the Online Safety Bill,” Truss said. “There are some issues that we need to deal with. What I want to make sure is that we protect the under-18s from harm and that we also make sure free speech is allowed, so there may be some tweaks required, but certainly he is right that we need to protect people’s safety online.”
This is just so ridiculously ignorant and uninformed. The Online Safety Bill is a disaster in waiting and I wouldn’t be surprised if some websites chose to exit the UK entirely rather than continue to deal with the law.
It won’t actually protect the children, of course. It will create many problems for them. It won’t do much at all, except make internet companies question whether it’s even worth doing business in the UK.
This isn’t a surprise, but it’s still frustrating. Gavin Newsom, who wants to be President some day, and thus couldn’t risk misleading headlines that he didn’t “protect the children,” has now signed AB 2273 into law (this follows on yesterday’s decision to sign the bad, but slightly less destructive, AB 587 into law). At this point there’s not much more I can say about why AB 2273 is so bad. I’ve explained why it’s literally impossible to comply with (and why many sites will just ignore it). I’ve explained how it’s pretty clearly unconstitutional. I’ve explained how the whole idea was pushed for and literally sponsored by a Hollywood director / British baroness who wants to destroy the internet. I’ve explained how it won’t do much, if anything, to protect children, but will likely put them at much greater risk. I’ve explained how the company it will likely benefit most is the world’s largest porn company — not to mention COVID disinfo peddlers and privacy lawyers. I’ve explained how the companies supporting the law insist that we shouldn’t worry because websites will just start scanning your face when you visit.
None of that matters, though.
Because, in this nonsense political climate where moral panics and culture wars are all that matter in politics, politicians are going to back laws that claim to “protect the children,” no matter how much of a lie that is.
Newsom, ever the politician, did the political thing here. He gets his headlines pretending he’s protecting kids.
“We’re taking aggressive action in California to protect the health and wellbeing of our kids,” said Governor Newsom. “As a father of four, I’m familiar with the real issues our children are experiencing online, and I’m thankful to Assemblymembers Wicks and Cunningham and the tech industry for pushing these protections and putting the wellbeing of our kids first.”
The press release includes a quote from Newsom’s wife, who is also a Hollywood documentary filmmaker, similar to the baroness.
“As a parent, I am terrified of the effects technology addiction and saturation are having on our children and their mental health. While social media and the internet are integral to the way we as a global community connect and communicate, our children still deserve real safeguards like AB 2273 to protect their wellbeing as they grow and develop,” said First Partner Jennifer Siebel Newsom. “I am so appreciative of the Governor, Assemblymember Cunningham, and Assemblymember Wicks’ leadership and partnership to ensure tech companies are held accountable for the online spaces they design and the way those spaces affect California’s children.”
Except that the bill does not create “real safeguards” for children. It creates a massive amount of busywork to try to force companies to dumb down the internet, while also forcing intrusive age verification technologies on tons of websites.
It puts tremendous power in the hands of the Attorney General.
The bill doesn’t go into effect until the middle of 2024, and I assume someone will go to court to challenge it, meaning that all this bill will accomplish in the short run is California wasting a ton of taxpayer dollars (just as Texas and Florida did) to pretend it has the power to tell companies how to design their products.
It’s all nonsense grandstanding and Governor Newsom knows it, because I know that people have explained all this to him. But getting the headlines is more important than doing the right thing.
Hany Farid is a computer science professor at Berkeley. He insists that his students should all delete Facebook and YouTube because those services often recommend things you might like (the horror, the horror).
Farid once did something quite useful: he helped Microsoft develop PhotoDNA, a tool that has been used to help websites find and stop child sexual abuse material (CSAM) and report it to NCMEC. Unfortunately, he now seems to view much of the world through that lens. A few years back he insisted that we could also tackle terrorism videos with a PhotoDNA-style tool — despite the fact that such videos are not at all like the CSAM content PhotoDNA can identify, which carries strict liability under the law. Terrorism videos, by contrast, are often not actually illegal, and can actually provide useful information, including evidence of war crimes.
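PhotoDNA itself is proprietary, but the basic shape of any known-image hash-matching system is easy to sketch, and the sketch also shows why the approach doesn’t transfer to novel content. Below is a minimal illustration, not PhotoDNA’s actual algorithm: the hashes, threshold, and function names are all made up. Known illegal images are reduced to perceptual fingerprints, and uploads are flagged only when they nearly match a fingerprint already in the database.

```python
# A minimal sketch of how known-image hash matching works conceptually.
# PhotoDNA's actual algorithm is proprietary; this uses a generic
# "perceptual hash" stand-in (a 64-bit integer). The hashes and the
# threshold below are made up for illustration.

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of *known* illegal images
# (in practice, hash lists are shared by clearinghouses like NCMEC).
KNOWN_HASHES = {0x9F3A_44C1_0B77_E2D5, 0x1C08_FFA2_7731_9B04}

MAX_DISTANCE = 10  # tolerance for crops/resizes; arbitrary here

def is_known_match(upload_hash: int) -> bool:
    """Flag an upload only if it nearly matches a known fingerprint."""
    return any(hamming_distance(upload_hash, h) <= MAX_DISTANCE
               for h in KNOWN_HASHES)

# The key limitation: the system only recognizes images already in the
# database. Novel content -- like a newly filmed video whose legality
# depends on context -- has no fingerprint to match against.
print(is_known_match(0x9F3A_44C1_0B77_E2D4))  # True: 1 bit off a known hash
print(is_known_match(0x0000_0000_0000_0000))  # False: nothing like it known
```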
Anyway, over the years, his views have tended towards what appears to be hating the entire internet because there are some people who use the internet for bad things. He’s become a vocal supporter of the EARN IT Act, despite its many, many problems. Indeed, he’s so committed to it that he appeared at a “Congressional briefing” on EARN IT organized by NCOSE, the group of religious fundamentalist prudes formerly known as “Morality in Media” who believe that all pornography should be illegal because nekked people scare them. NCOSE has been a driving force behind both FOSTA and EARN IT, and they celebrate how FOSTA has made life more difficult for sex workers. At some point, when you’re appearing on behalf of NCOSE, you probably want to examine some of the choices that got you there.
Last week, Farid took to the pages of Gizmodo to accuse me and Professor Eric Goldman of “fearmongering” on AB 2273, the California “Age Appropriate Design Code,” which he insists is a perfectly fine law that won’t cause any problems at all. California Governor Gavin Newsom is still expected to sign 2273 into law, perhaps sometime this week, even though that would be a huge mistake.
Before I get into some of the many problems with Farid’s article, I’ll just note that both Goldman and I have gone through the bill and explained in great detail the many problems with it, and even highlighted some fairly straightforward ways that the California legislature could have, but chose not to, limit many of its most problematic aspects (though probably not fix them, since the core of the bill makes it unfixable). Farid’s piece does not cite anything in the law (it literally quotes not a single line in the bill) and makes a bunch of blanket statements without much willingness to back them up (and where it does back up the statements, it does so badly). Instead, he accuses Goldman of not substantiating his arguments, which is hilarious.
The article starts off with his “evidence” that the internet is bad for kids.
Leaders have rightly taken notice of the growing mental health crisis among young people. Surgeon General Vivek Murthy has called out social media’s role in the crisis, and, earlier this year, President Biden addressed these concerns in his State of the Union address.
Of course, saying that “there is no longer any question” about the “nature of the harm to children” displays a profound sense of hubris and ignorance. There are in fact many, many questions about the actual harm. As we noted, just recently, there was a big effort to sort through all of the research on the “harms” associated with social media… and it basically came up empty. That’s not to say there’s no harm, because I don’t think anyone believes that. But the actual research and actual data (which Hany apparently doesn’t want to talk about) is incredibly inconclusive.
For each study claiming one thing, there are equally compelling studies claiming the opposite. To claim that “there is no longer any question” is, empirically, false. It is also fearmongering, the very thing Farid accuses me and Prof. Goldman of doing.
Just for fun, let’s look at each of the studies or stories Farid points to in the two paragraphs above, which open the article. The study about “body image issues” that was the centerpiece of the WSJ’s “Facebook Files” reporting left out an awful lot of context. The actual study was, fundamentally, an attempt by Meta to better understand these issues and look for ways to mitigate the negative (which, you know, seems like a good thing, and actually the kind of thing that the AADC would require). But, more importantly, the very survey that is highlighted around body image impact looked at 12 different issues regarding mental health, of which “body image” was just one, and notably it was the only issue out of 12 where teen girls said Instagram made them feel worse, not better (teen boys felt better, not worse, on all 12). The slide was headlined with “but, we make body image issues worse for 1 in 3 teen girls” because that was the only one of the categories where that was true.
And, notably, even as Farid claims that it’s “no longer a question” that Facebook “heightened body image issues,” it also made many of them feel better about body image. And, again, many more felt better on every other issue, including eating, loneliness, anxiety, and family stress. That doesn’t sound quite as damning when you put it that way.
The “TikTok challenges” thing is just stupid, and it’s kind of embarrassing. First of all, it’s been shown that a bunch of the moral panics about “TikTok challenges” have actually been about parents freaking out over challenges that didn’t exist. Even the few cases where someone doing a “TikTok challenge” has come to harm — including the one Farid links to above — involved challenges that kids have done for decades, including before the internet. To magically blame that on the internet is the height of ridiculousness.
I mean, here’s the CDC warning about it in 2008, where they note it goes back to at least 1995 (with some suggestion that it might actually go back decades earlier).
But, yeah, sure, it’s TikTok that’s to blame for it.
The link on the “sexualization of children on YouTube” appears to show that there have been pedophiles trying to game YouTube comments through a variety of sneaky moves, which is something YouTube has been trying to fight. But it’s not exactly an example of something that is widespread or mainstream.
As for the last two, fearmongering and moral panics by politicians are kind of standard and hardly proof of anything. Again, the actual data is conflicting and inconclusive. I’m almost surprised that Farid didn’t also toss in claims about suicide, but maybe even he has read the research suggesting you can’t actually blame youth suicide on social media.
So, already we’re off to a bad start, full of questionable fearmongering and moral-panic cherry-picking of data.
From there, he gives his full-throated support to the Age Appropriate Design Code, and notes that “nine-in-ten California voters” say they support the bill. But, again, that’s meaningless. I’m surprised it’s not 10-in-10. Because if you ask people “do you want the internet to be safe for children” most will say yes. But no one answering this survey actually understands what this bill does.
Then we get to his criticisms of myself and Professor Goldman:
In a piece published by Capitol Weekly on August 18, for example, Eric Goldman incorrectly claims that the AADC will require mandatory age verification on the internet. The following week, Mike Masnick made the bizarre and unsubstantiated claim in TechDirt that facial scans will be required to navigate to any website.
So, let’s deal with his false claim about me first. He says that I made the “bizarre and unsubstantiated claim” that facial scans will be required. But, that’s wrong. As anyone who actually read the article can see quite clearly, it’s what the trade association for age verification providers told me. The quote literally came from the very companies who provide age verification. So, the only “bizarre and unsubstantiated” claims here are from Farid.
As for Goldman’s claims: unlike Farid, Goldman actually supports them with an explanation using the language of the bill. AB 2273 flat out says that “a business that provides an online service, product, or feature likely to be accessed by children shall… estimate the age of child users with a reasonable level of certainty.” I’ve talked to probably half a dozen actual privacy lawyers about this, and basically all of them say they would recommend that clients who wish to abide by this invest in some sort of age verification technology. Because, otherwise, how would they show that they had achieved the “reasonable level of certainty” the law requires?
Anyone who’s ever paid attention to how lawsuits around these kinds of laws play out knows that this will lead to lawsuits in which the Attorney General of California will insist that websites have not complied unless they’ve implemented age verification technology. That’s because sites like Facebook will implement that, and the courts will note that’s a “best practice” and assume anyone doing less than that fails to abide by the law.
Even should that not happen, the prudent decision by any company will be to invest in such technology to avoid even having to make that argument in court.
Farid insists that sites can do age verification by much less intrusive means, including simple age “estimation.”
Age estimation can be done in a multitude of ways that are not invasive. In fact, businesses have been using age estimation for years – not to keep children safe – but rather for targeted marketing. The AADC will ensure that the age-estimation practices are the least invasive possible, will require that any personal information collected for the purposes of age estimation is not used for any other purpose, and, contrary to Goldman’s claim that age-authentication processes are generally privacy invasive, require that any collected information is deleted after its intended use.
Except, the bill doesn’t just call for “age estimation,” it requires “a reasonable level of certainty” which is not defined in the bill. And getting age estimation for targeted ads wrong means basically nothing to a company. They target an ad wrong, big deal. But under the AADC, a false estimation is now a legal liability. That, by itself, means that many sites will have strong incentives to move to true age verification, which is absolutely invasive.
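To see why the incentives point toward hard verification, here’s a toy model. It is entirely my own construction: the bill never defines “reasonable level of certainty,” and real estimation vendors report different error measures. But the logic is simple: any estimate whose error band straddles the age of majority becomes a legal risk, and the risk-averse response is to escalate to document checks or face scans.

```python
# Toy model of the estimation-vs-verification tradeoff under AB 2273.
# Everything here is hypothetical: the bill never defines what a
# "reasonable level of certainty" is, and real vendors report different
# confidence measures. The point is the shape of the incentive.

AGE_OF_MAJORITY = 18

def gate(estimated_age: float, error_margin: float) -> str:
    """Decide how to treat a visitor given a fuzzy age estimate."""
    if estimated_age - error_margin >= AGE_OF_MAJORITY:
        return "treat as adult"          # confidently over the line
    if estimated_age + error_margin < AGE_OF_MAJORITY:
        return "apply child rules"       # confidently under it
    # Near the boundary, a wrong guess is now a legal liability, so the
    # risk-averse choice is to escalate to document-based verification.
    return "require hard age verification (ID upload / face scan)"

# An estimator with a +/- 3 year margin forces escalation for everyone
# who looks roughly 15 to 21 -- a huge share of real users.
for age in (12, 16, 19, 25):
    print(age, "->", gate(age, error_margin=3.0))
```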
And, also, not all sites engage in age estimation. Techdirt does not. I don’t want to know how old you are. I don’t care. But under this bill, I might need to.
Also, it’s absolutely hilarious that Farid, who has spent many years trashing all of these companies, insisting that they’re pure evil, that you should delete their apps, and insisting that they have “little incentive” to ever protect their users… thinks they can then be trusted to “delete” the age verification information after it’s been used for its “intended use.”
On that, he’s way more trusting of the tech companies than I would be.
Goldman also claims – without any substantiation – that these regulations will force online businesses to close their doors to children altogether. This argument is, at best, disingenuous, and at worst fear-mongering. The bill comes after negotiations with diverse stakeholders to ensure it is practically feasible and effective. None of the hundreds of California businesses engaged in negotiations are saying they fear having to close their doors. Where companies are not engaging in risky practices, the risks are minimal. The bill also includes a “right to cure” for businesses that are in substantial compliance with its provisions, therefore limiting liability for those seeking in good faith to protect children on their service.
I mean, a bunch of website owners I’ve spoken to over the last month have asked me whether they should close off access to children altogether (or just close off access to Californians), so it’s hardly an idle thought.
Also, the idea that there were “negotiations with diverse stakeholders” appears to be bullshit. Again, I keep talking to website owners who were not contacted, and the few I’ve spoken to who have been in contact with legislators who worked on this bill have told me that the legislators told them, in effect, to pound sand when they pointed out the flaws in the bill.
I mean, Prof. Goldman pointed out tons of flaws in the bill, and it appears that the legislators made zero effort to fix them or to engage with him. No one in the California legislature spoke to me about my concerns either.
Exactly who are these “hundreds of California businesses engaged in negotiations”? I went through the list of organizations that officially supported the bill, and there are not “hundreds” there. I mean, there is the guy who spread COVID disinfo. Is that who Farid is talking about? Or the organizations pushing moral panics about the internet? There are the California privacy lawyers. But where are the hundreds of businesses who are happy with the law?
We should celebrate the fact that California is home to the giants of the technology sector. This success, however, also comes with the responsibility to ensure that California-based companies act as responsible global citizens. The arguments in favor of AADC are clear and uncontroversial: we have a responsibility to keep our youngest citizens safe. Hyperbolic and alarmist claims to the contrary are simply unfounded and unhelpful.
The only one who has made “hyperbolic and alarmist” claims here is the dude who insists that “there is no longer any question” that the internet harms children. The only one who has made “hyperbolic and alarmist” claims is the guy who tells his students that recommendations are so evil you should stop using apps. The only one who is “hyperbolic and alarmist” is the guy who insists the things that age verification providers told me directly are “bizarre and unsubstantiated.”
Farid may have built an amazing tool in PhotoDNA, but it hardly makes him an expert on the law, policy, how websites work, or social science about the supposed harms of the internet.
During the 2020 campaign, there were a few times when candidate Joe Biden insisted he wanted to get rid of Section 230 entirely, though he made it clear he had no idea what Section 230 actually did. When I wrote articles highlighting all of this, I had some Biden supporters (even folks who worked on his campaign) reach out to me to say not to worry about it, that Biden wasn’t fully briefed on 230, and that if he became President, more knowledgeable people would be tasked to work on stuff, and the 230 stuff wouldn’t be an issue. I didn’t believe it at the time, and it turns out I was correct.
The White House has released a truly bizarre set of “Principles for Enhancing Competition and Tech Platform Accountability” that are so poorly thought out that I’m confused as to how anyone in the White House thought these were good ideas. First of all, they’re mostly silly, simplistic platitudes that don’t take into account the complexities of each of these issues. They’re perhaps red meat for the “big tech bad!” crowd, but not even in a coherent way.
Some of them are simply incoherent. Some of them buy into disinformation (which is depressingly ironic, given that the White House argues that part of this effort is about fighting disinformation).
It’s a really weird list in that it just… isn’t that sophisticated or well thought out at all. It looks kinda like no one seriously worked on this issue or spoke to many actual experts, and something got scrambled together at the last minute so the administration would have a “we’re taking on big tech” platform to roll out before the mid-terms.
Let’s go through them, though out of order, to start with the most egregious nonsense here: removing Section 230.
Remove special legal protections for large tech platforms. Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials. The President has long called for fundamental reforms to Section 230.
I mean… how the hell can the White House say this?
Section 230 is NOT SPECIAL PROTECTIONS FOR LARGE PROVIDERS. That’s a lie, one mostly made up by disgruntled Republicans who want websites forced, via must-carry provisions, to carry their propaganda and disinformation.
Section 230 is not “special legal protections.” It’s a codification of some common law liability principles. And it’s not “for large tech platforms.” It does far more to protect users’ speech and smaller companies than it does to protect large companies.
On top of that, the line that it “broadly shields them from liability even when they host or disseminate illegal, violent conduct or materials” is oddly worded and basically nonsense. First of all, you’d think the White House, of all places, would be aware that Section 230 includes subsection (e)(1), which makes clear it has no effect on federal criminal law. So, um, if it’s “illegal,” Section 230 does not help. Second, Section 230 protects companies from being held liable for someone else’s speech. I’m not sure what “violent conduct or materials” has to do with any of that.
More to the point: if there is “illegal, violent conduct or materials,” then, um, isn’t it law enforcement’s job to go after those actually breaking the law? In the end, all 230 is really doing is saying “don’t blame the tool, blame the person actually violating the law.”
Also, as we’ll get to, much of this document talks about enabling more competition. Removing Section 230 does the exact opposite. The big tech companies literally have buildings full of lawyers and massive content moderation teams. They’re better positioned than others to handle the burden that removing Section 230 would create.
Startups? Mid-sized companies? Removing Section 230 would kill them.
It’s such a nonsensical position for the White House to take. There certainly must be some people in the White House who understand Section 230. Why weren’t they invited to weigh in on this?
Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services. Children, adolescents, and teens are especially vulnerable to harm. Platforms and other interactive digital service providers should be required to prioritize the safety and wellbeing of young people above profit and revenue in their product design, including by restricting excessive data collection and targeted advertising to young people.
Ah, yes, the ever-present “but think of the children” issue. Again, this is vague and unclear, but it sounds an awful lot like the “age appropriate design code” that California just passed, which has all sorts of problems (including constitutional ones). As we recently explained, back in 1996 Congress tried to pass sweeping “but think of the kids online” legislation, and the Supreme Court rightly threw it in the trash. We don’t need to go through that again, no matter how politically popular it seems to whichever political consultants insisted this be included.
Either way, the devil’s in the details here, and this vague statement has none. The fact that the language sounds so similar to the California Kids Code suggests it’s likely exactly that. And that’s a problem. As we’ve seen with the muted opposition to the California bill, these bills are politically popular because no one wants to be branded as being “against protecting the kids,” even though they don’t do anything to actually protect kids. They do, however, help rich people with savior complexes think they’re helping.
Increase transparency about platform’s algorithms and content moderation decisions. Despite their central role in American life, tech platforms are notoriously opaque. Their decisions about what content to display to a given user and when and how to remove content from their sites affect Americans’ lives and American society in profound ways. However, platforms are failing to provide sufficient transparency to allow the public and researchers to understand how and why such decisions are made, their potential effects on users, and the very real dangers these decisions may pose.
Here’s another one that sounds good as a platitude, but the reality is much different. As we’ve said over and over again, transparency is good, but mandated transparency creates all sorts of problems. Again, we can look to the terrible, terrible problems with California’s transparency bill: demanding this kind of transparency often serves mainly to help bad actors learn how to game your systems.
People pushing for these kinds of transparency mandates have clearly never actually run a website that hosts user content. It’s a constant, dynamic struggle, where bad actors are always, always, always trying to game your system. And if you’re forced to publish clear rules on how you moderate, it does two terribly dangerous things. First, it gives those bad actors a road map for how to game your system. Second, it limits your ability to change on the fly to deal with the shifting nature of the attacks.
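To make that first danger concrete, here’s a minimal sketch in Python. The blocklist and the spam are made up, and no real site’s filter is this naive, but the dynamic is the one at issue: anything you’re forced to publish precisely, bad actors can test against precisely.

```python
import re

# Imagine a "transparency" mandate forces a site to publish its exact
# moderation rules. (These rules are invented for illustration.)
PUBLISHED_RULES = [
    r"\bbuy cheap pills\b",
    r"\bcrypto giveaway\b",
]

def violates_published_rules(post: str) -> bool:
    """Return True if a post matches any of the published rules."""
    return any(re.search(rule, post, re.IGNORECASE) for rule in PUBLISHED_RULES)

spam = "crypto giveaway! click now"
# A spammer who can read the rules doesn't have to guess: trivial
# obfuscation (here, zero-width spaces) defeats every rule on the list.
evasive_spam = "cry\u200bpto give\u200baway! click now"

print(violates_published_rules(spam))          # True  -- caught
print(violates_published_rules(evasive_spam))  # False -- sails right through
```

Real moderation systems are far more sophisticated than a keyword list, but the asymmetry holds at any level of sophistication: the defender has to publish a fixed target, while the attacker gets to iterate against it for free.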
And let’s not even get into how this same policy is being pushed by Republicans as a tool to block websites from moderating disinformation. Texas and Florida have already passed content moderation bills that have (so far…) been found to be unconstitutional, and parts of those bills were pitched using this exact same language: how they were really about “transparency” regarding moderation, and how they just wanted the companies to be “less opaque” about how they made their decisions. Except that those laws also came with the stick of liability.
It’s so weird to see GOP nonsense talking points that have already been deemed unconstitutional showing up in an official White House policy document coming out of the Biden administration.
Stop discriminatory algorithmic decision-making. We need strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.
Again, this is one of those things that sounds good, but tends to be problematic in practice. Last year I wrote about a bill that attempted to do this, where I noted that it seemed entirely mistargeted, and (see a pattern here?) seemed based on a near total lack of understanding of how things work. The issue, again, is that the people most vocally claiming “algorithmic discrimination” are actually… disinformation peddlers, insisting that they’re being discriminated against not for peddling disinformation, but because they’re Christian white male conservatives.
So, uh, yeah, be careful what you wish for.
There are, of course, legitimate concerns about algorithms that use historically biased data to perpetuate bias against marginalized communities, but there are ways to deal with that without broadly outlawing “discrimination” via algorithms. Because a broad ban is exactly the kind of thing that will be weaponized.
Also, as we noted in that post, it’s often quite difficult to separate out “discriminatory algorithmic decision-making” from more traditional discriminatory human decision-making, and there’s a real risk here that a bill of this nature starts holding tech companies responsible for bigotry by humans making decisions, rather than actual problems in the algorithm.
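To illustrate the biased-data problem (and why the human/algorithm line is so blurry), here’s a toy Python example. The numbers, the loan scenario, and the “model” are all fabricated for illustration; no real system is this simple.

```python
# Hypothetical loan decisions made by biased human reviewers:
# (group, approved) pairs, where group B was approved far less often.
historical_decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_approval_rates(decisions):
    """'Train' by computing the historical approval rate per group."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

model = learn_approval_rates(historical_decisions)
print(model)  # {'A': 0.75, 'B': 0.25}

# The "algorithm" now recommends group A three times as often as group B,
# not because anyone wrote it to discriminate, but because the humans
# whose decisions it learned from did.
```

The discrimination here lives in the human decisions the system learned from, which is exactly why a law aimed only at “algorithmic decision-making” risks mistargeting the problem.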
Provide robust federal protections for Americans’ privacy. There should be clear limits on the ability to collect, use, transfer, and maintain our personal data, including limits on targeted advertising. These limits should put the burden on platforms to minimize how much information they collect, rather than burdening Americans with reading fine print. We especially need strong protections for particularly sensitive data such as geolocation and health information, including information related to reproductive health. We are encouraged to see bipartisan interest in Congress in passing legislation to protect privacy.
So, yeah, sure. We need a federal privacy law. But the details here matter quite a lot, and this vague paragraph suggests that whoever put it together… hasn’t actually thought through those details or the associated tradeoffs. For example, the final item we’ll go over in these principles is the one about competition: how do privacy laws and competition interact? The fact is that many of the proposed privacy bills would mostly help the largest companies, since they’ll be able to put in place the necessary compliance regimes, while smaller competitors will be overwhelmed by the costs.
Also, it’s slightly weird to limit targeted advertising. I get that people hate advertising, but… I’d also kinda rather have advertising be better targeted so that it’s actually more useful to me than not? As I keep saying over and over again, privacy is about a set of trade-offs: how much am I willing to give up to get what kind of benefit. And the problem tends to come in not when I’m just handing off information, but when there’s a mismatch (or lack of clarity) between how much information I’m giving up and what benefit I’m getting in return. But if I had more visibility and control over that, for example the ability to better target useful ads to myself by seeing what advertisers see about me, and having some control over what info is included in whatever “profile” they have on me, then that’s no longer a privacy violation to me. That allows me to customize things in a way where I’m comfortable, and I even get relevant and useful ads.
But, again, so many in the privacy realm refuse to even consider that world a possibility, and simply want to cut off my ability to opt into it. Instead, they want to stop targeted ads entirely. Even for people who want them. And that seems… not all that helpful?
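For what it’s worth, here’s a rough sketch, in Python, of the kind of user-controlled targeting profile I have in mind. It’s entirely hypothetical: no ad platform exposes anything like this today, and every name in it is made up.

```python
from dataclasses import dataclass, field

@dataclass
class AdProfile:
    """What advertisers are allowed to know about me, as edited by me."""
    interests: set = field(default_factory=set)
    blocked_topics: set = field(default_factory=set)

    def add_interest(self, topic: str) -> None:
        """Opt in to a topic I actually want relevant ads about."""
        if topic not in self.blocked_topics:
            self.interests.add(topic)

    def block_topic(self, topic: str) -> None:
        """Topics that may never be used to target me, period."""
        self.blocked_topics.add(topic)
        self.interests.discard(topic)

    def visible_to_advertisers(self) -> set:
        return self.interests - self.blocked_topics

me = AdProfile()
me.add_interest("woodworking")
me.add_interest("synthesizers")
me.block_topic("health conditions")

print(me.visible_to_advertisers())  # {'woodworking', 'synthesizers'} (set order may vary)
```

The design point is simply that targeting stops feeling like a privacy violation once the subject can see the profile, edit it, and veto categories outright; much of the objection to targeted ads is really an objection to opacity and lack of control.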
Promote competition in the technology sector. The American information technology sector has long been an engine of innovation and growth, and the U.S. has led the world in the development of the Internet economy. Today, however, a small number of dominant Internet platforms use their power to exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage. We need clear rules of the road to ensure small and mid-size businesses and entrepreneurs can compete on a level playing field, which will promote innovation for American consumers and ensure continued U.S. leadership in global technology. We are encouraged to see bipartisan interest in Congress in passing legislation to address the power of tech platforms through antitrust legislation.
This is the first one on the list, and it’s probably the one I have the least complaints about, except that, again, the devil is in the details. So far the bill that has gotten the farthest on this front, AICOA, is so poorly drafted that it could effectively function as a content moderation bill in disguise, where disinformation peddlers would be able to use provisions in the law to claim, disingenuously, that moderation of, say, disinformation was actually being done in an anti-competitive manner.
Indeed, as noted above, many of the other provisions in this platform are, themselves, anti-competitive, in that they would create massive compliance costs that the biggest providers could shoulder, but everyone else would be left out in the cold.
It is increasingly difficult for me to take any policymakers seriously when they refuse to look at how competition, privacy, content moderation, and much, much more are interconnected, and how the moves you make on one impact the others.
All of these proposals (and the bills they likely refer to) are half-baked, performative ideas that make for great headlines, but show a real lack of understanding of how the world actually works and how these changes would flow through the internet ecosystem.
That this is the best the Biden White House can put out after 20 months in office is kind of a condemnation of the administration’s tech policy chops. They seem to have very few actual experts on board who could better inform these discussions. And thus… we get this.
It’s performative. It creates headlines that maybe sound good. But it doesn’t solve any of the real problems.