Ah, the good old days of the internet – a utopian paradise where everyone was kind, respectful, and definitely not arguing about Hitler. Or was it? A recent study published in Nature has some surprising findings that might just shatter your rose-tinted glasses about this past internet that never actually existed. Brace yourself for a shocking revelation: the internet has always been a bit of a dumpster fire.
There is a tendency to assume that everything is progressively getting worse, falling apart in ways that are uniquely new. And yet, history keeps telling us that it’s not true. Violent crime rates? They’re hitting historic lows, despite what you may have heard. The wave of shoplifting? Probably didn’t happen.
And how about the internet? Is the internet awash in hate, disinfo, and toxicity way more than in the good old days?
Well, nope.
Not according to the study. It exists, certainly, but it’s no worse than in the past.
The researchers went deep:
To obtain a comprehensive picture of online social media conversations, we analysed a dataset of about 500 million comments from Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat and YouTube, covering diverse topics and spanning over three decades
Three decades, 500 million comments, eight platforms. Seems like a good place to start.
The team used Google’s Perspective API for classifying toxicity. Some may quibble with this, but the Perspective API has a history of being pretty reliable. Nothing is perfect, but when dealing with this much data, it seems like a reasonable approach. On top of that, they spot checked the results as well.
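For a sense of what that classification looks like in practice, here’s a minimal sketch of the Perspective API’s request/response shape, per Google’s public docs. The 0.6 cutoff is a common choice for flagging a comment as toxic, not necessarily the exact threshold the study used:

```python
def build_request(text: str) -> dict:
    """Payload for POST .../v1alpha1/comments:analyze?key=API_KEY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # ask Google not to retain the comment
    }

def toxicity_score(response: dict) -> float:
    """Pull the overall TOXICITY probability out of an API response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def is_toxic(response: dict, threshold: float = 0.6) -> bool:
    return toxicity_score(response) >= threshold

# Trimmed-down example of what the API returns for a nasty comment:
sample = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.92}}}}
print(is_toxic(sample))  # True
```

At 500 million comments, the interesting engineering problem is less the call itself than batching and rate limits, but the scoring logic is this simple.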
The researchers found: Godwin’s Law is legit. If you’ll recall, the original formulation is: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” Godwin himself admits he wrote it in the form of a statistical law as a joke, to make it seem more scientific. And the researchers determined that, well, yeah, pretty much:
The toxicity of threads follows a similar pattern. To understand the association between the size and toxicity of a conversation, we start by grouping conversations according to their length to analyse their structural differences. The grouping is implemented by means of logarithmic binning (see the ‘Logarithmic binning and conversation size’ section of the Methods) and the evolution of the average fraction of toxic comments in threads versus the thread size intervals is reported in Fig. 2. Notably, the resulting trends are almost all increasing, showing that, independently of the platform and topic, the longer the conversation, the more toxic it tends to be.
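The binning the researchers describe is straightforward: sort threads into size buckets that grow geometrically, then average the toxic fraction within each bucket. A rough sketch using base-2 bins (the paper’s actual bin edges may differ):

```python
import math
from collections import defaultdict

def mean_toxicity_by_log_bin(threads):
    """threads: iterable of (thread_length, toxic_comment_count) pairs.
    Returns the average fraction of toxic comments per base-2 size bin,
    keyed by each bin's lower edge."""
    bins = defaultdict(list)
    for length, n_toxic in threads:
        b = int(math.log2(length))  # bin b covers lengths [2**b, 2**(b+1))
        bins[b].append(n_toxic / length)
    return {2 ** b: sum(fracs) / len(fracs) for b, fracs in sorted(bins.items())}

# Toy data: short threads with few toxic comments, longer ones with more.
print(mean_toxicity_by_log_bin([(4, 0), (6, 1), (40, 6), (50, 10)]))
```

Logarithmic binning matters here because thread lengths are heavy-tailed: linear bins would leave the long-thread buckets nearly empty and the averages noisy.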
That said, the research also shows that when a thread gets toxic, that doesn’t necessarily stop the conversation.
The common beliefs that (1) online interactions inevitably devolve into toxic exchanges over time and (2) once a conversation reaches a certain toxicity threshold, it would naturally conclude, are not modern notions but they were also prevalent in the early days of the World Wide Web. Assumption 2 aligns with the Perspective API’s definition of toxic language, suggesting that increased toxicity reduces the likelihood of continued participation in a conversation. However, this observation should be reconsidered, as it is not only the peak levels of toxicity that might influence a conversation but, for example, also a consistent rate of toxic content. To test these common assumptions, we used a method similar to that used for measuring participation; we select sufficiently long threads, divide each of them into a fixed number of equal intervals, compute the fraction of toxic comments for each of these intervals, average it over all threads and plot the toxicity trend through the unfolding of the conversations. We find that the average toxicity level remains mostly stable throughout, without showing a distinctive increase around the final part of threads
I would suggest that seems consistent with Techdirt’s experience…
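The interval method described in that quoted passage is easy to picture in code. A minimal sketch (the interval count and data shapes here are illustrative, not the paper’s):

```python
def toxicity_profile(flags, n_intervals=10):
    """flags: chronological booleans, one per comment (is it toxic?).
    Splits the thread into n_intervals equal chunks and returns the
    fraction of toxic comments in each chunk."""
    n = len(flags)
    profile = []
    for i in range(n_intervals):
        lo, hi = i * n // n_intervals, (i + 1) * n // n_intervals
        chunk = flags[lo:hi]
        profile.append(sum(chunk) / len(chunk))
    return profile

def average_profile(threads, n_intervals=10):
    """Average the per-interval toxicity across all sufficiently long threads."""
    profiles = [toxicity_profile(t, n_intervals)
                for t in threads if len(t) >= n_intervals]
    return [sum(col) / len(col) for col in zip(*profiles)]

# A thread that turns toxic halfway through:
print(toxicity_profile([False] * 5 + [True] * 5, n_intervals=2))  # [0.0, 1.0]
```

A flat averaged profile, which is what the researchers report, means toxicity neither ramps up toward the end of threads nor kills them off.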
But, the study also found no particular evidence that conversations today are more toxic than in the past when looking over this historical data. The key factor, as always, is just the length of the conversation. Average toxicity over time remains pretty constant. However, toxicity increases with the length of any conversation (though at different rates on different platforms).
As Dewey’s report notes, the approaches of different platforms can matter, but it doesn’t appear as if the world is somehow getting worse. It’s just that people suck. And some platforms maybe attract more of the worst people.
That finding held true across seven of the eight platforms the team researched. By and large, those platforms also exhibited similar shares of toxic comments. On Facebook, for instance, roughly 4 to 6% of the sampled comments failed Perspective API’s toxicity test, depending on the community/subject matter. On YouTube, by comparison, it’s 4 to 7%. On Usenet, 5 to 9%.
Even infamously lawless, undermoderated communities like Gab and Voat didn’t fall so far from the norm for more mainstream platforms: About 13% of Gab’s comments were toxic, the researchers found, and between 10 and 19% were toxic on Voat.
There’s something deeply unfashionable and counterintuitive about all of this. The suggestion that online platforms have not single-handedly poisoned public life is entirely out of step with the very political discourse the internet is said to have polluted.
Dewey also quotes one of the study’s authors, Walter Quattrociocchi, pointing out that this isn’t an argument for giving up moderating.
Quattrociocchi said it would be a mistake to assume his team’s findings suggest that moderation policies or other platform dynamics don’t matter — they absolutely “influence the visibility and spread of toxic content,” he said. But if “the root behaviors driving toxicity are more deeply ingrained in human interaction,” then effective moderation might involve both removing toxic content and implementing larger strategies to “encourage positive discourse,” he added.
Interventions do matter, but the internet isn’t inherently making people terrible. And, I guess that’s a bit of good news these days?
404 Media’s latest effort digs deep into dubious claims made by up-and-coming surveillance tech company, Flock. Flock made its entry into the market by pitching its automatic license plate readers (ALPRs) to the biggest assholes in the private sector: homeowners associations and gated communities.
Pitching to private individuals and firms made it easy to dodge constitutional concerns. And it made it very easy for their customers to regulate traffic in their neighborhoods by engaging in always-on surveillance of everyone traveling “their” streets.
Flock then moved on to the next biggest set of assholes in the nation: law enforcement. Citing its success in the private surveillance market, Flock started partnering with cop shops to place ALPR cameras wherever law enforcement felt they might be useful. Generally speaking, this worked out the way it has for any surveillance tech product pitched to cops: neighborhoods with large minority presences were the ones immediately blanketed by this tech.
Flock is a business. Therefore, it makes sense it might play a little fast and loose with facts to secure sales. But once you become a preferred government contractor, you’re expected to adhere to the truth a bit better because it’s the public’s money that’s being spent.
No problem, said Flock. Here’s a study that says our ALPRs are helping drive crime down.
In a typical agency, one additional Flock Safety License Plate Recognition (LPR) camera per sworn officer correlates with a 9.1% increase in clearance rate.
20 additional Flock customers within 50 kilometers of the original agency leads to a 1% increase in clearance rates.
Broad access to Flock technology within an agency leads to improved case clearance outcomes.
That’s what Flock says its tech can do. To buttress its claims, Flock claims the study is based on input from “independent criminology research experts” from Texas Christian University (TCU) and the Tyler campus of the University of Texas.
Sounds great. But it isn’t. As Jason Koebler’s extensive debunking for 404 Media notes, Flock’s claims were immediately criticized by others in the criminology field, which apparently only extends Flock’s streak of dubious law enforcement effectiveness claims.
The most outrageous claim is that Flock’s ALPRs are “instrumental in solving 10 percent of reported crime in America.” Whoa, if true. But it definitely isn’t. And Flock’s willingness to massage stats and reject findings it doesn’t care for has led TCU’s Johnny Nhan — one of the TCU researchers involved in the latest Flock study — to publicly state he would have handled things very differently if he had known how Flock was abusing his research and preventing him from finding facts that didn’t support Flock’s marketing narrative.
Communications between Flock, researchers, and law enforcement agencies obtained by 404 Media — along with Nhan’s own statements — make it clear Flock chose to go to press as soon as it had the data it wanted, even if the data did not meaningfully depict Flock’s contribution to public safety.
In an email exchange with 404 Media, Nhan said that he and Helfers were brought into the study late in the process, that he has concerns with how the research was framed, and that he and Helfers are working on future research with Flock that is more qualitative in nature and will focus on case studies rather than quantitative analysis.
“Dr. Helfers and I are working on a paper that is still in the development stages and is still evolving as we’re looking at what data is available to us,” Nhan said. “As of right now, the plan is a peer-reviewed paper that looks at the uses of flock ALPR technologies, how agencies are ensuring privacy, and what policies they have in place for that. The information that is collected by the police departments are too varied and incomplete for us to do any type of meaningful statistical analysis on them so we’re pivoting to interviews with a sample of agencies.”
Nhan was told by Flock that the data collected so far was only meant to be a “starting point” for further research. But it liked what it saw enough to push forward with press releases selectively quoting Nhan and his research — claims that included its supposed “ten percent” contribution to the nation’s case clearance rates.
404 Media’s Koebler didn’t find any of this believable and went searching for background info. Flock is a private company, but it’s in contact with plenty of public entities, including the two universities whose services it engaged to give its sales pitch a science-y backdrop. Most of Koebler’s public records requests were denied, mainly due to Flock’s interference in the public records process.
But Koebler did come across some interesting communications, which suggest even researchers have been compromised by Flock’s insistence that the research it’s funding deliver the results it’s seeking.
[T]he records I did get back show that Flock has been recommending which specific police departments Nhan and Helfers should talk to for a future research paper, and an email Nhan sent to one police department states that he would like to see data that shows a “big swing” in the data from before Flock’s adoption to after Flock’s adoption.
While it’s great to see Nhan distancing himself from this study, it also shows researchers feel at least some pressure to deliver the results Flock clearly wants. Other emails show Flock — via Andrea Korb (Flock’s director of policy) — informing the researchers that the company will provide a list of agencies they can approach for data. That included Flock steering researchers towards small towns with already-low crime rates, where any reduction would produce the “big swing” so clearly sought by the company for its “research.”
The entire article is worth a read. The communications Koebler managed to secure show a company willing to toss a lab coat on top of sensationalist stats gleaned from its preferred law enforcement agencies in order to sell its products to other law enforcement agencies. It also shows the company is willing to place itself between researchers and their research, as well as between public entities and the public by crafting contracts that give it the “right” to redact information and withhold documents requested by records requesters.
If Flock’s tech was really that shit hot, it wouldn’t need to do this. But it’s apparently no better than anything else on the market. Not that Flock is the first — or only — surveillance tech firm to engage in this sort of chicanery. Just because you read it at a site created for the sole purpose of reprinting press releases doesn’t mean it’s true. If only law enforcement agencies were willing to perform the sort of due diligence being done by this very small journalistic outfit before spending public money, we might all be a whole lot better off.
We’ve been covering, at great length, the moral panic around the claims that social media is what’s making kids depressed. The problem with this narrative is that there’s basically no real evidence to support it. As the American Psychological Association found when it reviewed all the literature, despite many, many dozens of studies done on the impact of social media on kids, no one was able to establish a causal relationship.
As that report noted, the research seemed to show no inherent benefit or harm for most kids. For some, it showed a real benefit (often around kids being able to find like-minded people online to communicate with). For a very small percentage, it appeared to potentially exacerbate existing issues. And those are really the cases that we should be focused on.
But, instead, the narrative that continues to make the rounds is that social media is inherently bad for kids. That leads to various bills around age verification and age gating to keep kids off of social media.
Supporters of these bills will point to charts like this one, regarding teen suicide rates, noting the uptick correlates with the rise of social media.
Of course, they seem to cherry pick the start date of that chart, because if you go back further, you realize that while the uptick is a concern, it’s still way below what it had been in the 1990s (pre-social media).
Obviously, the increase in suicides is a concern. But, considering that every single study that tries to link it to social media ends up failing to do so, that suggests that there might be some other factor at play here.
One recent research paper points to such a factor, summarizing the decline in “independent mobility” for kids over the last few decades:
Considerable research, mostly in Europe, has focused on children’s independent mobility (CIM), defined as children’s freedom to travel in their neighborhood or city without adult accompaniment. That research has revealed significant declines in CIM, especially between 1970 and 1990, but also some large national differences. For example, surveys regarding the “licenses” (permissions) parents grant to their elementary school children revealed that in England, license to walk home alone from school dropped from 86% in 1971 to 35% in 1990 and 25% in 2010; and license to use public buses alone dropped from 48% in 1971 to 15% in 1990 to 12% in 2010.11 In another study, comparing CIM in 16 different countries (US not included), conducted from 2010 to 2012, Finland stood out as allowing children the greatest freedom of movement. The authors wrote: “At age 7, a majority of Finnish children can already travel to places within walking distance or cycle to places alone; by age 8 a majority can cross main roads, travel home from school and go out after dark alone, by age 9 a majority can cycle on main roads alone, and by age 10 a majority can travel on local buses alone.” Although we have found no similar studies of parental permissions for US children, other data indicate that the US is more like the UK concerning children’s independent mobility than like Finland. For example, National Personal Transportation Surveys revealed that only 12.7% walked or biked to school in 2009 compared with 47.7% in 1969.
And then it notes the general decline in mental health as well, which they highlight started long before social media existed:
Perhaps the most compelling and disturbing evidence comes from studies of suicide and suicidal thoughts. Data compiled by the CDC indicate that the rate of suicide among children under age 15 rose 3.5-fold between 1950 and 2005 and by another 2.4-fold between 2005 and 2020. No other age group showed increases nearly this large. By 2019, suicide was the second leading cause of death for children from age 10 through 15, behind only unintentional injury. Moreover, the 2019 YRBS survey revealed that during the previous year 18.8% of US high school students seriously considered attempting suicide, 15.7% made a suicide plan, 8.9% attempted suicide one or more times, and 2.5% made a suicide attempt requiring medical treatment. We are clearly experiencing an epidemic of psychopathology among young people.
But, unlike those who assume correlation is causation with regards to social media, the researchers here admit that correlation alone isn’t enough. And they bring the goods, pointing to multiple studies that suggest a pretty clear causal relationship, rather than just correlation.
Several studies have examined relationships between the amount of time young children have for self-directed activities at home and psychological characteristics predictive of future wellbeing. These have revealed significant positive correlations between amount of self-structured time (largely involving free play) and (a) scores on two different measures of executive functioning; (b) indices of emotional control and social ability; and (c) scores, two years later, on a measure of self-regulation. There is also evidence that risky play, where children deliberately put themselves in moderately frightening situations (such as climbing high into a tree) helps protect against the development of phobias and reduces future anxiety by increasing the person’s confidence that they can deal effectively with emergencies.
Studies with adults involving retrospections about their childhood experiences provide another avenue of support for the idea that early independent activity promotes later wellbeing. In one such study, those who reported much free and adventurous play in their elementary school years were assessed as having more social success, higher self-esteem, and better overall psychological and physical health in adulthood than those who reported less such play. In another very similar study, amount of reported free play in childhood correlated positively with measures of social success and goal flexibility (ability to adapt successfully to changes in life conditions) in adulthood. Also relevant here are studies in which adults (usually college students) rated the degree to which their parents were overprotective and overcontrolling (a style that would reduce opportunity for independent activity) and were also assessed for their current levels of anxiety and depression. A systematic review of such studies revealed, overall, positive correlations between the controlling, overprotective parenting style and the measures of anxiety and depression.
They also note that they are not claiming (of course) that this is the sole reason for the declines in mental health. Just that there is strong evidence that it is a key component. They explore a few other options that may contribute, including increased pressure at schools and societal changes. They also consider the impact of social media and digital technologies and note (as we have many times) that there just is no real evidence to support the claims:
Much recent discussion of young people’s mental health has focused on the role of increased use of digital technologies, especially involvement with social media. However, systematic reviews of research into this have provided little support for the contention that either total screen time or time involved with social media is a major cause of, or even correlate of, declining mental health. One systematic review concluded that research on links between digital technology use and teens’ mental health “has generated a mix of often conflicting small positive, negative and null associations” (Odgers & Jensen, 2020). Another, a “review of reviews” concluded that “the association between digital technology use, or social media use in particular, and psychological well-being is, on average, negative but very small” and noted some evidence, from longitudinal research, that negative correlations may result from declining mental health leading to more social media use rather than the reverse (Orben, 2020)
Indeed, if this theory is true, that the lack of spaces for kids to explore and play and experiment without adult supervision is a leading cause of mental health decline, you could easily see how those who are depressed are more likely to seek out those private spaces, and turn to social media, given the lack of any such spaces they can go to physically.
And, if that’s the case, then all of these efforts to ban social media for kids, or to make social media more like Disneyland, could likely end up doing a lot more harm than good by cutting off one of the last remaining places where kids can communicate with their peers without adults watching over their every move. Indeed, the various proposals to give parents more access to what their kids are doing online could worsen the problem as well, taking away yet another independent space for kids.
Over the last few years, there’s been a push to bring back more “dangerous” play for kids, as people have begun to realize that things may have gone too far in the other direction. Perhaps it’s time we realize that social media fits into that category as well.
I know that we’ve already pointed to a whole bunch of studies, using a variety of different methods that all show no evidence of any link at all between social media and teen depression, but it’s time to highlight another one.
I mean, we just interviewed Professor Andy Przybylski, who published a study showing no evidence that Facebook and Facebook Messenger increase depression. That one comes four years after another study by him that showed no evidence that social media was making kids unhappy. In between, there have been a bunch of other studies. By 2020, the academic consensus appeared to be that social media wasn’t actually bad for kids. More recently, the American Psychological Association did a meta-analysis of all the studies on this topic and said there was no evidence of a harmful link between social media and teen depression. Pew Research did a study showing that most kids found social media to be tremendously beneficial.
Even the Surgeon General’s recent report on social media — which has been widely reported by the media to say that social media was bad for kids — actually admitted that studies could find no link between mental health problems and social media. It still recommended acting as if there was a link just in case, but that seemed at odds with what the report admitted the science actually said.
Here’s the key finding from the new study:
Within-person changes in self- and other oriented social media behavior were unrelated to within-person changes in symptoms of depression or anxiety two years later, and vice versa. This null finding was evident across all timepoints and for both sexes. Conclusions: The frequency of posting, liking, and commenting is unrelated to future symptoms of depression and anxiety. This is true also when gold standard measures of depression and anxiety are applied.
As the authors of the paper note, many other papers trying to make this link have used perhaps questionable proxies for mental health:
A major shortcoming of existing research is that studies have conceptualized mental health problems in a variety of ways (e.g., reduced well-being, psychological distress, poor self-esteem, depressive symptoms). Because social media use may relate differently to different mental health problems (e.g., social anxiety versus overall well-being), these inconsistent findings may be due to studies not assessing the same phenomenon. Studies have also typically relied on self-reports of both social media use and mental health, thereby running the risk of inflating relations due to a common methods bias. Studies assessing more strictly defined mental health problems and measuring such problems by other means than self-report are needed.
The paper goes through a number of earlier papers that attempt to explore this subject, and notes that it’s trying to add to the literature with more concrete information regarding mental health. They did so by making use of a dataset which explored the mental health of children in Norway in great detail over an extended period of time:
The present inquiry is based on data from the Trondheim Early Secure Study (TESS), a longitudinal study of children’s mental health and psychosocial development starting at age 4 years. In 2007/2008, all children born four years earlier in Trondheim, Norway (N = 3,456) were invited to participate in the study. Their parents received an invitation letter together with the Strengths and Difficulties Questionnaire (SDQ) version 4–16) (Goodman, 1997), a mental health screening assessment, which they brought to the child’s 4-year health check at a community health center. At the check up, they were informed about the study by the health care nurse and gave their written consent to participate. Nearly all children attended the check-up (97.2%) and 82.2% of those who were asked to participate consented. To increase variance and thus statistical power, participants with higher scores on the SDQ were oversampled, which is accounted for in the analyses (i.e., weighting back to the population estimates). More specifically, children were allocated to four strata according to their SDQ scores (cut-offs: 0–4, 5–8, 9–11, and 12–40), and the probability of selection increased with increasing SDQ scores (0.37, 0.48, 0.70, and 0.89 in the four strata, respectively). Based on this procedure, 1,250 were selected to participate, and among these, 1007 (79.8%) met at the first assessment at the university clinic. Since then, biennial assessments have been conducted.
While it covers just Norway, that’s still a pretty detailed and useful dataset. The researchers note that it also included social media usage, making it even more useful and relevant.
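As a side note, the “weighting back to the population estimates” the study mentions is just inverse-probability weighting: participants from oversampled strata count for less. Here’s a sketch using the selection probabilities from the quoted passage (the weighted-mean estimator is illustrative; the study’s actual models are more involved):

```python
# SDQ strata cutoffs and selection probabilities from the quoted passage.
SDQ_STRATA = [(4, 0.37), (8, 0.48), (11, 0.70), (40, 0.89)]

def selection_prob(sdq: int) -> float:
    """Probability a child with this SDQ score was selected into the sample."""
    for upper, prob in SDQ_STRATA:
        if sdq <= upper:
            return prob
    raise ValueError("SDQ score out of range (0-40)")

def weighted_mean(values, sdq_scores):
    """Population estimate of a mean: each participant is weighted by the
    inverse of their stratum's selection probability."""
    weights = [1.0 / selection_prob(s) for s in sdq_scores]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

Oversampling the high-SDQ strata buys statistical power where the interesting variation is, and the weights undo the distortion when estimating population-level quantities.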
With all that data, it would be a great dataset to explore if greater usage of social media resulted in greater incidents of mental health issues. But the data… does not show that:
As can be seen, changes in self- and other oriented social media use did not predict changes in participants’ level of symptoms for depression, social anxiety, or generalized anxiety. There were also no significant effects in the opposite direction: changes in depression and anxiety symptoms did not forecast future levels of self- and other oriented social media behavior.
The researchers point to other research to suggest that, hey, maybe social media is just one form of communication, and that doesn’t change anything having to do with mental health:
Given the presumed mechanisms linking social media use and symptoms of depression and anxiety presented above, how come no within-person relations are revealed? As noted, according to the displacement theory (Kraut et al., 1998), increased social media use may decrease face-to-face interactions, potentially impairing mental health. However, a recent review concludes that social media is more likely to replace time spent on other media activities, rather than off-line interaction (Hall & Liu, 2022). In many cases, social media seems to complement rather than displace in-person interactions (Hall & Liu, 2022; Kushlev & Leitao, 2020; Requena & Ayuso, 2019); prospectively predicting more face-to-face interactions (Dienlin, Masur, & Trepte, 2017) and social capital (Hooghe & Oser, 2015). Adolescents even report feeling closer to their friends after using social media (Dredge & Schreurs, 2020; Pouwels, Valkenburg, Beyens, van Driel, & Keijsers, 2021).
Of course, this study got almost no press at all, as compared to the Surgeon General’s report, which got a ridiculous amount of press, almost all of it falsely reporting that the Surgeon General had found that social media is linked to depression.
And, this matters. We already have politicians repeating over and over again that it’s “proven beyond a doubt” that social media causes depression and that they have to regulate it. Even worse, you have a bunch of lawsuits from school districts, claiming that social media has destroyed the brains of kids in their schools.
It would be nice, just once, for any of the media or politicians to admit that the data doesn’t support these moral panic claims.
Professor Andrew Przybylski from the Oxford Internet Institute is one of the best, most important researchers out there providing thorough, comprehensive, empirical evidence that every tech moral panic is not supported by the data. We’ve covered his work before, including the complete lack of evidence that social media makes kids unhappy, how there’s actually some positive correlation between people playing video games and feeling better (the opposite of what most seemed to believe), and how mandatory internet filters to stop porn don’t work.
He’s now back with a new study (with Professor Matti Vuorre), and the scale of it is astounding:
The independent Oxford study used well-being data from nearly a million people across 72 countries over 12 years and harnessed actual individual usage data from millions of Facebook users worldwide to investigate the impact of Facebook on well-being.
I don’t think we’re going to have a small sample-size issue with this study. Indeed, the global nature of the study is useful as it gets beyond what many studies do, just looking at western college students who are readily accessible to academic researchers.
Overall, a country’s per capita daily active Facebook users predicted that nation’s demography-aggregated levels of positive experiences positively, and negative experiences negatively. In addition, the associations between countries were similar, but the uncertainty cutoff of 97.5% for posterior probabilities of direction was strictly only met for positive experiences (table 1). Associations between Facebook adoption and life satisfaction were less certain within countries, but stronger when comparing countries to each other. While these descriptive results do not speak to causal effects, they align with other findings suggesting that technology use has not become increasingly associated with negative psychological outcomes over time [8], and that the increased adoption of Internet technologies in general is not, overall, associated with widespread psychological harms [24]. We also found that Facebook adoption predicted young demographics’ positive well-being more strongly than it did older demographics’, and that sex differences in this dataset were very small and not credibly different from zero. These demography-based differences, and lack therein, were notable in light of previous literature that has reported young girls to be more at-risk of screen- and technology-based effects than young males (e.g. [27]; but see [28]). However, those studies focused on younger individuals (from 10 to 15 years old), which likely partly explains the different findings.
The authors are clear not to overstate what their paper is saying. They’re not arguing that “Facebook makes you happy” or anything like that. But they are saying that the evidence does not support the common refrain that it makes people unhappy.
And, in case you’re wondering, the authors are also clear that while they did get data from Facebook, it was not funded in any way by Facebook, nor did Facebook have any idea what their report would show until it was published.
Again, I know the narrative you hear about all the time insists otherwise, but it’s nice to see more data suggesting we’re living through quite a ridiculous moral panic about the new new thing, one we’ll likely look back on as just as silly as the moral panics about comic books, pinball, rock n’ roll, radio, chess, and the waltz (all of which faced them).
Open access has been discussed many times here on Techdirt. There are several strands to its story. It’s about allowing the public to access research they have paid for through tax-funded grants, without needing to take out often expensive subscriptions to academic titles. It’s about saving educational institutions money that they are currently spending on over-priced academic journals, and which could be better spent elsewhere. It’s about helping to spread knowledge without the friction that traditional publishing introduces, ideally moving to licenses that allow academic research papers to be distributed freely and without restrictions.
But there’s another aspect that receives less attention, revealed here by a new paper that looks at how open access articles are used in a particular and important context – that of Wikipedia. There is a natural synergy between the two, which both aim to make access to knowledge easier. The paper seeks to quantify that:
we analyze a large dataset of citations from Wikipedia and model the role of open access in Wikipedia’s citation patterns. We find that open-access articles are extensively and increasingly more cited in Wikipedia. What is more, they show a 15% higher likelihood of being cited in Wikipedia when compared to closed-access articles, after controlling for confounding factors. This open-access citation effect is particularly strong for articles with low citation counts, including recently published ones. Our results show that open access plays a key role in the dissemination of scientific knowledge, including by providing Wikipedia editors timely access to novel results. These findings have important implications for researchers, policymakers, and practitioners in the field of information science and technology.
What this means in practice is that, for the general public, open access articles are even more beneficial than those published in traditional titles, since they frequently turn up as Wikipedia sources that can be consulted directly. They are also advantageous for the researchers who write them, since their work is more likely to be cited on the widely read and influential Wikipedia than if the papers were not open access. As the research notes, this effect is even more pronounced for “articles with low citation counts” – basically, academic work that may be important but is rather obscure. This new paper provides yet another compelling reason why researchers should be publishing their work as open access as a matter of course: out of pure self-interest.
We’ve written a lot about AB 2273, California’s Age Appropriate Design Code (AADC) that requires websites with users in California to try to determine the ages of all their visitors, write up dozens of reports on potential harms, and then seek to mitigate those harms. I’ve written about why it’s literally impossible to comply with the law. We’ve had posts on how it conflicts with privacy laws and how it’s a radical experimentation on children (ironically, the drafters of the bill insist that they’re trying to stop experimentation on children).
We’ve also written about how NetChoice, an internet company trade group, has sued to block the law as unconstitutional, and how I filed a declaration explaining how the law would violate the rights of both us at Techdirt and our users.
That lawsuit has continued to move forward, with California filing a pretty laughable reply claiming that the law doesn’t regulate speech at all. NetChoice has filed its own reply as well, highlighting how ridiculous that is:
The State claims that AB 2273 regulates data management—“nonexpressive conduct,” Opp. 11—not speech. Nonsense. AB 2273’s text expressly requires services to “mitigate or eliminate” risks that a child “could” encounter “potentially harmful … content” online. Content was the through-line in the legislative process: Defendant Attorney General Bonta praised the Act precisely because it would “protect children from … harmful material” and “dangerous online content”—in other words, speech—and Governor Newsom lauded the law for “protect[ing] kids” from harmful “content.” The State’s own expert, who mentions “content” in her declaration 71 times, derides preexisting laws specifically because they “only” cover data management, not content. Radesky Decl. ¶ 98. The State cannot evade the Constitution by pretending the Act regulates only “business practices … related to the collection and use of children’s personal information,” Opp. 11, when the law’s text, purpose, and effect are to regulate and shape online content. Like California’s last attempt to “restrict the ideas to which children may be exposed,” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 792, 794 (2011), AB 2273 violates the First Amendment
It appears that Governor Newsom may have realized how badly this case is going to go for him. Days after NetChoice filed that reply, Newsom sent NetChoice an angry letter demanding that it drop the case.
The text is quite remarkable… and bizarre. Newsom sounds… angry. Perhaps because he realizes (per the above) that his own words in support of the bill and how it should be used to block “content” are going to make him lose this case.
Enough is enough. In light of new action and findings released by the U.S. Surgeon General, I urge you to drop your lawsuit challenging California’s children’s online safety law.
Except, as we just detailed, the Surgeon General’s report does not find that the internet harms kids, and actually makes it clear that most kids benefit from social media. Straight from the report that it appears Newsom did not read:
A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.
But, Newsom appears to have only read the headlines that misconstrue what’s in the actual report. His letter then goes into full-on moral panic mode:
Every day as our children browse the internet to connect with one another, build community, and learn, they are also pushed to horrific content and exposed to data mining and location tracking. This reality is dangerous to their safety, mental health, and well-being. That’s why, last September, I was proud to sign the California Age-Appropriate Design Code Act — a bipartisan, first-in-the-nation law that protects the health and privacy of children using online platforms and prohibits online services from encouraging children to provide personal information.
Except, nearly everything in that paragraph is wrong. Embarrassingly so. There is no evidence that children are “pushed to horrific content.” It is true that there may be horrific content online, but the idea that companies are pushing kids to that content is not supported by the evidence. Furthermore, it’s rich that he’s complaining about “data mining and location tracking” while saying that this bill prohibits companies from seeking “personal information” from kids when the law’s “age assurance” requirements suggest the exact opposite. To comply with the law, websites will be effectively required to demand information from users to determine a likely age.
As I explained in my own declaration in the lawsuit, at Techdirt we have bent over backwards to learn as little about the folks who read our site as possible. But under the law, we will likely be compelled to institute a program in which we are required to determine the age of everyone who visits. In other words, the law requires more data mining, not less, and explicitly requires it for children.
Newsom continues the nonsense:
Rather than join California in protecting our children, your association, which represents major tech companies including Google, Meta, TikTok, and Twitter, chose to sue over this commonsense law. In your lawsuit, you have gone so far as to make light of the real harms our children face on the internet, trivializing this law as just being about teenagers who “say unkind things, insufficiently ‘like’ another’s posts,” or are unhappy about “the omission of a ‘trigger warning.’”
Again, nothing in this law actually protects children. Instead, it puts them at much greater risk of having information exposed, as we’ve noted. It will also make it next to impossible for children to research important information regarding mental health, or to find out the information they need to help them deal with things like eating disorders, since it will drive basically all of that content offline (at least where kids can reach it).
As for the claim that NetChoice is “trivializing this law,” that’s obviously bullshit to anyone who has read the filings in context (a group that apparently does not include the angry Governor Newsom). The references in that paragraph come from NetChoice’s motion for a preliminary injunction, taken completely out of context. NetChoice is not trivializing the issues children face: it is pointing out that the way the law is drafted (i.e., very, very badly), it also applies to those more “trivial” situations. From the preliminary injunction filing:
AB 2273 also adopts a boundless conception of what speech must be restricted, including speech that cannot constitutionally be restricted even for minors. The requirement that services enforce their own policies, id. § 1798.99.31(a)(9), will lead them to suppress swaths of protected speech that the State could not restrict directly. See supra § IV.A.1.b. The bar on using algorithms and user information to recommend or promote content will restrict a provider’s ability to transmit protected speech based on the user’s expressed interests. And the law’s restrictions on content that might be “detrimental” or “harmful” to a child’s “well-being,” id. § 1798.99.31(a)(1)(b), (b)(1), (3)-(4), (7), could restrict expression on any topic that happens to distress any child or teen. This would include a range of important information children are constitutionally entitled to receive, such as commentary or news about the war in Ukraine, the January 6, 2021 insurrection at the United States Capitol, the 2017 “Unite the Right” rally in Charlottesville, school shootings, and countless other controversial, significant events.
More fundamentally, the “harm” the law seeks to address—that content might damage someone’s “well-being”—is a function of human communication itself. AB 2273 applies to, among other things, communications by teenagers on social media, who may say unkind things, insufficiently “like” another’s posts, or complain harshly about events at school; the use of language acceptable to some but not others; the omission of a “trigger warning”; and any other manner of discourse online. See, e.g., Mahanoy Area Sch. Dist. v. B. L., 141 S. Ct. 2038 (2021) (Snapchat post “fuck cheer” made high school students “visibly upset”)
So, no. The lawsuit is not trivializing harms children face by saying that it’s nothing more than kids saying unkind things. NetChoice is (accurately) pointing out that the broad language of the law means it could be applied to those situations, rather than only to ones dealing with actual harm.
It’s pathetic and embarrassing that Newsom would imply that this paragraph was trivializing harms. If anything, his complete and total misread of what’s in the lawsuit trivializes the seriousness of his own state’s law, which violates 1st Amendment rights.
Anyway, Newsom goes on:
Yet at the same time you are in court callously mocking this law, experts are confirming the known dangers of online platforms for kids and teens: Just days ago, the U.S. Surgeon General issued an advisory on the profound toll that social media takes on kids’ and teens’ mental health without adequate safety and privacy standards. Your association and its members may be interested to learn of the Surgeon General’s urgent findings about the sexual extortion of our children, and the alarming links between youth social media and cyberbullying, depression, suicide, and unhealthy and dangerous outcomes and behaviors.
Honestly, this is making me wonder if Newsom ever reads anything. Because, as we discussed, that is not what the Surgeon General’s report says at all. It literally says that there are widespread benefits to social media and then says “we do not have enough evidence” regarding whether or not it’s harmful. It notes there are concerns, and some “correlational” studies, but nothing proving a causal link. It notes that we need more research on that point.
So how the hell is Newsom claiming that the report finds a “profound toll” from social media? It says no such thing.
As for the “Surgeon General’s urgent findings about the sexual extortion of our children,” again Newsom is blatantly misstating what the report says. It notes that the internet has been used for sexual extortion, which is a fact, but nothing in the AADC will stop bad people from being terrible. The report does not say anything about this fact being “urgent” or requiring social media companies to magically make people stop being bad. It just mentions such things as the kind of problematic content that exists online.
As for the “alarming links between youth social media and cyberbullying, depression, suicide, and unhealthy and dangerous outcomes and behaviors,” that is AGAIN a misreading of the Surgeon General’s report. The report does mention those things, but it does not describe “alarming links.” It highlights correlational concerns, and suggests further research and caution, but it does not claim any sort of causal link, alarming or otherwise.
In fact, with regards to cyberbullying, the Surgeon General’s recommendations talk about better educating teachers, parents, and children on how to deal with such things. And its one policy recommendation around cyberbullying is not to force websites to censor content, as the AADC does, but rather to “support the development, implementation, and evaluation of digital and media literacy curricula in schools and within academic standards.”
In other words, what the Surgeon General is kinda saying is that our policy makers are the ones who have failed our kids by not teaching them how to be good digital citizens.
Governor Newsom, that one’s on you.
So, so far we have Newsom lying about the law, lying about the filings from NetChoice, and now lying about the Surgeon General’s report. I know it’s a post-truth political world we live in, but I expect better from California’s governor.
But he’s not done yet:
The harms of unregulated social media are established and clear.
The Surgeon General’s report — not to mention the even more thorough report from the American Psychological Association — literally says the opposite. Both reports say it is not clear, and that much more research needs to be done.
Governor Newsom, you should stop lying.
It is time for the tech industry to stop standing in the way of important protections for our kids and teens, and to start working with us to keep our kids safe.
Stomping on 1st Amendment rights and lying about everything is not “keeping our kids safe” Governor.
Utah, as a state, has a pretty long history of terrible policy proposals regarding the internet. And now it’s getting dumber. On Monday, the state’s Attorney General Sean Reyes and Governor Spencer Cox hosted a very weird press conference. They billed it as an announcement that Utah is suing all the social media companies for not “protecting kids.” Which is already pretty ridiculous. Even more ridiculous is that Governor Cox’s office eagerly announced that people should watch the livestream… on social media.
Even more ridiculous: I kept expecting them to announce the details of the actual lawsuit, but it turns out that they haven’t even hired lawyers, let alone planned out the lawsuit. The official announcement notes that they’re putting out a request for proposal to find the most ridiculous law firm possible to file the suit.
Specifics of any legal action are not being released at this time. A Request for Proposal (RFP) document will be submitted this week to prepare for hiring outside counsel to assist with any litigation that could soon occur.
Can I reply to the RFP with a document that just says: “this is not how any of this works, and it makes Utah look like a clueless, anti-tech, anti-innovation backwater?” Cox has actually been surprisingly good on internet issues in the past, and seemed like he understood this stuff, but this kind of nonsense grandstanding makes him look really bad.
Again, the actual evidence regarding social media and children is at best inconclusive, and more likely shows that most kids actually get real value out of it as a way to keep in touch with more people, and get more access to valuable, useful information and people. A big look at basically all of the research on the “harm” of social media on kids found… no evidence to support the narrative.
And looking at the actual research, we see the same thing again and again. Oxford did a massive study, looking at over 12,000 kids, and found that social media had effectively zero impact on the health and well-being of children. A few years ago, a review (again, looking at multiple studies) noted that the emerging consensus view was that social media didn’t harm kids.
Just recently, we covered a pretty massive Pew Research Center study that surveyed over 1,300 teenagers, and found that, not only was social media not causing harm, it appeared to be providing real value to many of them.
And, whether or not you trust Facebook’s own internal research, the leaked study the company did on whether Facebook and Instagram made kids feel worse about themselves found that, on nearly all issues, they actually made kids feel better about themselves:
So, just starting out, the entire premise of this lawsuit appears to be based on a moral panic myth that is not supported by any actual evidence, which seems like a pretty dumb reason to file a lawsuit.
The reasons given in the announcement in Utah are the usual moral panic list of things that basically all teenagers face, and faced before the internet existed as well:
“Depression, eating disorders, suicide ideation, cutting, addictions, mass violence, cyberbullying, and other dangers among young people may be initiated or amplified by negative influences and traumatic experiences online through social media.
Except, it’s one thing to say that people using social media experience these things, because basically everyone is on social media these days. The real question is whether or not social media is somehow causing these things, and again, pretty much all of the actual studies say the answer is “no.” And, expecting anyone to be able to sort out which harms are caused by social media, let alone in a way that has legal liability, is ridiculous.
Also, many of these topics are way more complex than the simple analyses suggest. We’ve talked before about the studies on eating disorders, for example. Multiple studies have shown that when social media companies tried to crack down on online discussions about eating disorders, it actually made the problem worse, not better. That’s because the eating disorders aren’t caused by social media. The kids are dealing with them no matter what. So when the content is banned, kids find ways around the bans. They always do. And, in doing so, they make it harder for others to monitor those discussions, and the crackdowns often destroyed more open communities where people were helping those with eating disorders get the help they needed. So demands that websites “crack down” on such content actually make things worse, doing more harm to the kids than the websites were doing in the first place.
There’s evidence to suggest the same is true of suicide discussions as well.
All that is to say, this is complicated stuff, and a bunch of grandstanding politicians ignoring what the actual research says in order to generate misleading headlines for themselves are not helping. At all.
And that’s not even getting into what any possible lawsuit could claim. What legal violation is there here? The answer is that there’s none. It doesn’t mean that AG Reyes can’t hassle and annoy companies. But, there’s no actual legal, factual, or moral reason to do any of this. There are only bad reasons, based around Reyes and Cox wanting headlines playing off the moral panics of today.
Right after the 2016 election that saw Donald Trump elected President, there was this collective wail among many who were unable to comprehend how this could have happened, searching for someone to blame. Two targets quickly emerged: social media and Russia. Often the two were combined into “Russian trolls on social media.” As we’ve noted, those Russian trolls certainly existed, and certainly were trying to influence the election, but it seemed dubious to us that they had any real effect. As we noted the day after the election, it was silly to claim that social media magically made people vote for Trump.
In the time since then, we’ve seen more and more evidence showing that the impact of social media was really not at all what many people seem to believe. We’ve talked about the studies that have, repeatedly, shown that cable news had way more of an impact than anything that came out of social media, not just for the election, but also for COVID disinfo.
Now there’s a very interesting new study, published in Nature Communications by a long list of researchers (Gregory Eady, Tom Paskhalis, Jan Zilinsky, Richard Bonneau, Jonathan Nagler, and Joshua Tucker), looking at whether or not Russian trolls on social media had any real impact on the 2016 election. The summary: no, they did not.
There is widespread concern that foreign actors are using social media to interfere in elections worldwide. Yet data have been unavailable to investigate links between exposure to foreign influence campaigns and political behavior. Using longitudinal survey data from US respondents linked to their Twitter feeds, we quantify the relationship between exposure to the Russian foreign influence campaign and attitudes and voting behavior in the 2016 US election. We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures. Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior. The results have implications for understanding the limits of election interference campaigns on social media.
Basically, yes, the trolls showed up and tried to sow discontent. But, the people who interacted with it were always going to vote for Trump anyway, and again, existing media was way, way, way more influential than the Russian trolls on social media.
The full report is all sorts of fascinating, and again shows how little impact the Russian trolls actually had. Especially compared to existing news media and US politicians.
The research does show that those who identified as “strongly Republican” were way more likely to encounter or interact with Russian propaganda, but that’s little surprise, since they were a key (though not the only) target of Russian propaganda. But, again, those individuals were never going to vote for Hillary Clinton in the first place. The study used various models to determine the impact on voting and found it basically negligible.
As estimates in the first panel indicate, the relationship between the number of posts from Russian foreign influence accounts that users are exposed to and voting for Donald Trump is near zero (and not statistically significant). This is the case whether the outcome is measured as vote choice in the election itself; the ranking of Clinton and Trump on equivalent survey questions across survey waves; and with the broader measure capturing whether voting behavior more generally favored Trump or Clinton through voting abstentions, changes in vote choice, or voting for a third party. The signs on the coefficients in each case are also negative, both for the count and binary measure, a result that would be inconsistent with a relationship of exposure being favorable to Trump. It is also worth noting that none of the other explanatory variables (with the exception of sex in some models) used as controls appear to be statistically significant predictors of the change in voting preferences
As the researchers conclude:
Taking our analyses together, it would appear unlikely that the Russian foreign influence campaign on Twitter could have had much more than a relatively minor influence on individual-level attitudes and voting behavior for four related reasons. First, we find that exposure to posts from Russian foreign influence accounts was concentrated among a small group of users, with only 1% of users accounting for 70% of all exposures. Second, exposure to Russian foreign influence tweets was overshadowed by the amount of exposure to traditional news media and US political candidates. Third, respondents with the highest levels of exposure to posts from Russian foreign influence accounts were those arguably least likely to need influencing: those who identified themselves as highly partisan Republicans, who were already likely favorable to Donald Trump. Fourth, we did not detect any meaningful relationships between exposure to posts from Russian foreign influence accounts and changes in respondents’ attitudes on the issues, political polarization, or voting behavior. Each of these findings is not independently dispositive. Jointly, however, we find concordant evidence between exposure to Russian disinformation—which is both lower and more concentrated than one might expect to be impactful—and the absence of a relationship to changes in attitudes and voting behavior.
The researchers do note that there are some limitations to their research (focused just on tweets, and just on identified Russia influence campaigns), but it does seem noteworthy.
This is a really useful addition to the research out there, though it’s not going to stop the, ahem, disinformation that social media magically impacted the election from continuing to spread. Even if that’s disinformation about disinformation.
Hany Farid is a computer science professor at Berkeley. Here he is insisting that his students should all delete Facebook and YouTube because they often recommend to you things you might like (the horror, the horror):
Farid once did something quite useful: he helped Microsoft develop PhotoDNA, a tool that websites have used to find and stop child sexual abuse material (CSAM) and report it to NCMEC. Unfortunately, though, he now seems to view much of the world through that lens. A few years back he insisted that a PhotoDNA-like tool could also tackle terrorism videos — despite the fact that such videos are not at all like the CSAM content PhotoDNA can identify, which is subject to strict liability under the law. Terrorism videos, by contrast, are often not actually illegal, and can provide useful information, including evidence of war crimes.
Anyway, over the years, his views have tended towards what appears to be hating the entire internet because there are some people who use the internet for bad things. He’s become a vocal supporter of the EARN IT Act, despite its many, many problems. Indeed, he’s so committed to it that he appeared at a “Congressional briefing” on EARN IT organized by NCOSE, the group of religious fundamentalist prudes formerly known as “Morality in Media” who believe that all pornography should be illegal because nekked people scare them. NCOSE has been a driving force behind both FOSTA and EARN IT, and they celebrate how FOSTA has made life more difficult for sex workers. At some point, when you’re appearing on behalf of NCOSE, you probably want to examine some of the choices that got you there.
Last week, Farid took to the pages of Gizmodo to accuse me and professor Eric Goldman of “fearmongering” on AB 2273, the California “Age Appropriate Design Code” which he insists is a perfectly fine law that won’t cause any problems at all. California Governor Gavin Newsom is still expected to sign 2273 into law, perhaps sometime this week, even though that would be a huge mistake.
Before I get into some of the many problems with Farid’s article, I’ll just note that both Goldman and I have gone through the bill and explained in great detail the many problems with it, and even highlighted some fairly straightforward ways that the California legislature could have, but chose not to, limit many of its most problematic aspects (though probably not fix them, since the core of the bill makes it unfixable). Farid’s piece does not cite anything in the law (it literally quotes not a single line in the bill) and makes a bunch of blanket statements without much willingness to back them up (and where it does back up the statements, it does so badly). Instead, he accuses Goldman of not substantiating his arguments, which is hilarious.
The article starts off with his “evidence” that the internet is bad for kids.
Leaders have rightly taken notice of the growing mental health crisis among young people. Surgeon General Vivek Murthy has called out social media’s role in the crisis, and, earlier this year, President Biden addressed these concerns in his State of the Union address.
Of course, saying that “there is no longer any question” about the “nature of the harm to children” displays a profound sense of hubris and ignorance. There are in fact many, many questions about the actual harm. As we noted, just recently, there was a big effort to sort through all of the research on the “harms” associated with social media… and it basically came up empty. That’s not to say there’s no harm, because I don’t think anyone believes that. But the actual research and actual data (which Hany apparently doesn’t want to talk about) is incredibly inconclusive.
For each study claiming one thing, there are equally compelling studies claiming the opposite. To claim that “there is no longer any question” is, empirically, false. It is also fearmongering, the very thing Farid accuses me and Prof. Goldman of doing.
Just for fun, let’s look at each of the studies or stories Farid points to in the two paragraphs above, which open the article. The study about “body image issues” that was the centerpiece of the WSJ’s “Facebook Files” reporting left out an awful lot of context. The actual study was, fundamentally, an attempt by Meta to better understand these issues and look for ways to mitigate the negatives (which, you know, seems like a good thing, and actually the kind of thing that the AADC would require). But, more importantly, the very survey that is highlighted around body image impact looked at 12 different issues regarding mental health, of which “body image” was just one, and notably it was the only issue out of 12 where teen girls said Instagram made them feel worse, not better (teen boys felt better, not worse, on all 12). The slide was headlined with “but, we make body image issues worse for 1 in 3 teen girls” because that was the only one of the categories where that was true.
And, notably, even as Farid claims that it’s “no longer a question” that Facebook “heightened body image issues,” it also made many of them feel better about body image. And, again, many more felt better on every other issue, including eating, loneliness, anxiety, and family stress. That doesn’t sound quite as damning when you put it that way.
The “TikTok challenges” thing is just stupid, and it’s kind of embarrassing. First of all, it’s been shown that a bunch of the moral panics about “TikTok challenges” have actually been about parents freaking out over challenges that didn’t exist. Even the few cases where someone doing a “TikTok challenge” has come to harm — including the one Farid links to above — involved challenges that kids have done for decades, including before the internet. To magically blame that on the internet is the height of ridiculousness.
I mean, here’s the CDC warning about it in 2008, where they note it goes back to at least 1995 (with some suggestion that it might actually go back decades earlier).
But, yeah, sure, it’s TikTok that’s to blame for it.
The link on the “sexualization of children on YouTube” appears to show that there have been pedophiles trying to game YouTube comments, through a variety of sneaky moves, which is something that YouTube has been trying to fight. But it’s not exactly an example of something that is widespread or mainstream.
As for the last two, fearmongering and moral panics by politicians are kind of standard and hardly proof of anything. Again, the actual data is conflicting and inconclusive. I’m almost surprised that Farid didn’t also toss in claims about suicide, but maybe even he has read the research suggesting you can’t actually blame youth suicide on social media.
So, already we’re off to a bad start, full of questionable fearmongering and cherry-picking of data to feed a moral panic.
From there, he gives his full-throated support to the Age Appropriate Design Code, and notes that “nine-in-ten California voters” say they support the bill. But, again, that’s meaningless. I’m surprised it’s not 10-in-10. Because if you ask people “do you want the internet to be safe for children” most will say yes. But no one answering this survey actually understands what this bill does.
Then we get to his criticisms of myself and Professor Goldman:
In a piece published by Capitol Weekly on August 18, for example, Eric Goldman incorrectly claims that the AADC will require mandatory age verification on the internet. The following week, Mike Masnick made the bizarre and unsubstantiated claim in TechDirt that facial scans will be required to navigate to any website.
So, let’s deal with his false claim about me first. He says that I made the “bizarre and unsubstantiated claim” that facial scans will be required. But, that’s wrong. As anyone who actually read the article can see quite clearly, it’s what the trade association for age verification providers told me. The quote literally came from the very companies who provide age verification. So, the only “bizarre and unsubstantiated” claims here are from Farid.
As for Goldman’s claims, unlike Farid, Goldman actually supports them with an explanation using the language from the bill. AB 2273 flat out says that “a business that provides an online service, product, or feature likely to be accessed by children shall… estimate the age of child users with a reasonable level of certainty.” I’ve talked to probably half a dozen actual privacy lawyers about this, and basically all of them say that they would recommend to clients who wish to abide by this that they invest in some sort of age verification technology. Because, otherwise, how would they show that they had achieved the “reasonable level of certainty” required by the law?
Anyone who’s ever paid attention to how lawsuits around these kinds of laws play out knows that this will lead to lawsuits in which the Attorney General of California will insist that websites have not complied unless they’ve implemented age verification technology. That’s because sites like Facebook will implement that, and the courts will note that’s a “best practice” and assume anyone doing less than that fails to abide by the law.
Even should that not happen, the prudent decision by any company will be to invest in such technology to avoid even having to make that argument in court.
Farid insists that sites can do age verification by much less intrusive means, including simple age “estimation.”
Age estimation can be done in a multitude of ways that are not invasive. In fact, businesses have been using age estimation for years – not to keep children safe – but rather for targeted marketing. The AADC will ensure that the age-estimation practices are the least invasive possible, will require that any personal information collected for the purposes of age estimation is not used for any other purpose, and, contrary to Goldman’s claim that age-authentication processes are generally privacy invasive, require that any collected information is deleted after its intended use.
Except, the bill doesn’t just call for “age estimation,” it requires “a reasonable level of certainty” which is not defined in the bill. And getting age estimation for targeted ads wrong means basically nothing to a company. They target an ad wrong, big deal. But under the AADC, a false estimation is now a legal liability. That, by itself, means that many sites will have strong incentives to move to true age verification, which is absolutely invasive.
And, also, not all sites engage in age estimation. Techdirt does not. I don’t want to know how old you are. I don’t care. But under this bill, I might need to.
Also, it’s absolutely hilarious that Farid, who has spent many years trashing all of these companies, insisting that they’re pure evil, that you should delete their apps, and insisting that they have “little incentive” to ever protect their users… thinks they can then be trusted to “delete” the age verification information after its “intended use.”
On that, he’s way more trusting of the tech companies than I would be.
Goldman also claims – without any substantiation – that these regulations will force online businesses to close their doors to children altogether. This argument is, at best, disingenuous, and at worst fear-mongering. The bill comes after negotiations with diverse stakeholders to ensure it is practically feasible and effective. None of the hundreds of California businesses engaged in negotiations are saying they fear having to close their doors. Where companies are not engaging in risky practices, the risks are minimal. The bill also includes a “right to cure” for businesses that are in substantial compliance with its provisions, therefore limiting liability for those seeking in good faith to protect children on their service.
I mean, a bunch of website owners I’ve spoken to over the last month have asked me about whether or not they should close off access to children altogether (or just close off access to Californians), so it’s hardly an idle thought.
Also, the idea that there were “negotiations with diverse stakeholders” appears to be bullshit. Again, I keep talking to website owners who were not contacted, and the few I’ve spoken to who have been in contact with legislators who worked on this bill have told me that the legislators told them, in effect, to pound sand when they pointed out the flaws in the bill.
I mean, Prof. Goldman pointed out tons of flaws in the bill, and it appears that the legislators made zero effort to fix them or to engage with him. No one in the California legislature spoke to me about my concerns either.
Exactly who are these “hundreds of California businesses engaged in negotiations”? I went through the list of organizations that officially supported the bill, and there are not “hundreds” there. I mean, there is the guy who spread COVID disinfo. Is that who Farid is talking about? Or the organizations pushing moral panics about the internet? There are the California privacy lawyers. But where are the hundreds of businesses who are happy with the law?
We should celebrate the fact that California is home to the giants of the technology sector. This success, however, also comes with the responsibility to ensure that California-based companies act as responsible global citizens. The arguments in favor of AADC are clear and uncontroversial: we have a responsibility to keep our youngest citizens safe. Hyperbolic and alarmist claims to the contrary are simply unfounded and unhelpful.
The only one who has made “hyperbolic and alarmist” claims here is the dude who insists that “there is no longer any question” that the internet harms children. The only one who has made “hyperbolic and alarmist” claims is the guy who tells his students that recommendations are so evil you should stop using apps. The only one who is “hyperbolic and alarmist” is the guy who insists the things that age verification providers told me directly are “bizarre and unsubstantiated.”
Farid may have built an amazing tool in PhotoDNA, but it hardly makes him an expert on the law, policy, how websites work, or social science about the supposed harms of the internet.