California Seems To Be Taking The Exact Wrong Lessons From Texas And Florida’s Social Media Censorship Laws
This post analyzes California AB 587, self-described as “Content Moderation Requirements for Internet Terms of Service.” I believe the bill will get a legislative hearing later this month.
A note about the draft I’m analyzing, posted here. It’s dated June 6, and it’s different from the version publicly posted on the legislature’s website (dated April 28). I’m not sure what the June 6 draft’s redlines compare to–maybe the bill as introduced? I’m also not sure if the June 6 draft will be the basis of the hearing, or if there will be more iterations between now and then. It’s exceptionally difficult for me to analyze bills that are changing rapidly in secret. When bill drafters secretly solicit feedback, every other constituency cannot follow along or share timely or helpful feedback. It’s especially ironic to see non-public activity for a bill that’s all about mandating transparency. ¯\_(ツ)_/¯
Who’s Covered by the Bill?
The bill applies to “social media platforms” that: “(A) Construct a public or semipublic profile within a bounded system created by the service. (B) Populate a list of other users with whom an individual shares a connection within the system. [and] (C) View and navigate a list of connections made by other individuals within the system.”
This definition of “social media” has been around for about a decade, and it’s awful. Critiques I made 8 years ago:
First, what is a “semi-public” profile, and how does it differ from a public or non-public profile? Is there even such a thing as a “semi-private” or “non-public” profile?…
Second, what does “a bounded system” mean?…The “bounded system” phrase sounds like a walled garden of some sort, but most walled gardens aren’t impervious. So what delimits the boundaries the statute refers to, and what does an “unbounded” system look like?
I also don’t understand what constitutes a “connection,” what a “list of connections” means, or what it means to “populate” the connection list. This definition of social media was never meant to be used as a statutory definition, and every word invites litigation.
Further, the legislature should–but surely has not–run this definition through a test suite to make sure it fits the legislature's intent. In particular, which, if any, services offering user-generated content (UGC) functionality do NOT satisfy this definition? Though decades of litigation might ultimately answer the question, I expect the language likely covers all UGC services.
[Note: based on a quick Lexis search, I saw similar statutory language in about 20 laws, but I did not see any caselaw interpreting the language because I believe those laws are largely unused.]
The bill then excludes some UGC services:
- Companies with less than $100M of gross revenue in the prior calendar year. There are many obvious problems with this standard, such as the fact that the revenue is measured enterprise-wide (so bigger businesses with small UGC components will be covered unless they turn off the UGC functionality), the lack of a phase-in period, the lack of a nexus to revenues derived from California, and the absence of any explanation for why $100M was selected instead of $50M, $500M, or whatever. Every legislator really ought to read this article about how to draft size metrics for Internet services.
- Email service providers, “direct messaging” services, and “cloud storage or shared document or file collaboration.” All social media services are, in a sense, “cloud storage,” so what does this exclusion mean? ¯\_(ツ)_/¯
- “A section for user-generated comments on a digital news internet website that otherwise exclusively hosts content published by” entities enumerated in the California Constitution, Article I(2)(b). Entities referenced in the Constitution: a “publisher, editor, reporter, or other person connected with or employed upon a newspaper, magazine, or other periodical publication, or by a press association or wire service” and “a radio or television news reporter or other person connected with or employed by a radio or television station.” I don't know that any service can take advantage of this exclusion because every traditional publisher publishes content from freelancers and other non-employees, so the “exclusively hosts” requirement creates a null set. Also, this exclusion opts into the confusion about the statutory differences between traditional and new media. See some cases discussing that issue.
- “Consumer reviews of products or services on an internet website that serves the exclusive purpose of facilitating online commerce.” Ha ha. Should we call this the “Amazon exclusion”? If so, I’m not sure they are getting their money’s worth. Does Amazon.com EXCLUSIVELY facilitate online commerce? 🤔 And if this exclusion doesn’t benefit Yelp and TripAdvisor–because they have reviews on things that don’t support e-commerce (like free-to-visit parks)–I can’t wait to see how the state explains why non-commercial consumer reviews need transparency while commercial ones do not.
- “An internet-based subscription streaming service that is offered to consumers for the exclusive purpose of transmitting licensed media, including audio or video files, in a continuous flow from the internet-based service to the end user, and does not host user-generated content.” Should we call this the “Netflix exclusion”? I’d be grateful if someone could explain to me the differences between “licensed media” and “UGC.” 🤔
The Law’s Requirements
Publish the “TOS”
The bill requires social media platforms to post their terms of service (TOS), translated into every language they offer product features in. It defines “TOS” as:
a policy or set of policies adopted by a social media company that specifies, at least, the user behavior and activities that are permitted on the internet-based service owned or operated by the social media company, and the user behavior and activities that may subject the user or an item of content to being actioned. This may include, but is not limited to, a terms of service document or agreement, rules or content moderation guidelines, community guidelines, acceptable uses, and other policies and established practices that outline these policies.
To start, I need to address the ambiguity of what constitutes the “TOS,” because it's the most dangerous and censorial trap in the bill. Every service publishes public-facing “editorial rules,” but the published versions can never capture ALL of the service's editorial rules. Exceptions include: private interpretations that are not shared in order to protect against gaming, private interpretations that are too detailed for public consumption, private interpretations that governments ask/demand the services not tell the public about, private interpretations that are made on the fly in response to exigencies, one-off exceptions, and more.
According to the bill’s definition, failing to publish all of these non-public “policies and practices” before taking action based on them could mean noncompliance with the bill’s requirements. Given the inevitability of such undisclosed editorial policies, it seems like every service always will be noncompliant.
Furthermore, to the extent the bill inhibits services from making an editorial decision based on a policy/practice that hasn't been pre-announced, the bill would control and skew the services' editorial decisions. This pre-announcement requirement would have the same effect as Florida's restriction barring services from updating their TOSes more than once every 30 days (a restriction the 11th Circuit held unconstitutional).
Finally, imagine trying to impose a similar editorial policy disclosure requirement on a traditional publisher like a newspaper or book publisher. They currently aren’t required to disclose ANY editorial policies, let alone ALL of them, and I believe any such effort to require such disclosures would obviously be struck down as an unconstitutional intrusion into the freedom of speech and press.
In addition to requiring the TOS's publication, the bill says the TOS must include (1) a way to contact the platform to ask questions about the TOS, (2) descriptions of how users can complain about content and “the social media company's commitments on response and resolution time” (drafting suggestion for regulated services: “We do not promise to respond ever”), and (3) “A list of potential actions the social media company may take against an item of content or a user, including, but not limited to, removal, demonetization, deprioritization, or banning.” I identified 3 dozen potential actions in my Content Moderation Remedies article, and I'm sure more exist or will be developed, so any compliant remedies list would be long, and I'm not sure how a platform could pre-announce the full universe of possible remedies.
Information Disclosures to the CA AG
Once a quarter, the bill would require platforms to deliver to the CA AG the current TOS, a “complete and detailed description” of changes to the TOS in the prior quarter, and a statement of whether the TOS defines any of the following five terms and what the definitions are: “Hate speech or racism,” “Extremism or radicalization,” “Disinformation or misinformation,” “Harassment,” and “Foreign political interference.” [If the definitions are from the TOS, can’t the AG just read that?]. I’ll call the enumerated five content categories the “Targeted Constitutionally Protected Content.”
In addition, the platforms would need to provide a “detailed description of content moderation practices used by the social media.” This seems to contemplate more disclosures than just the “TOS,” but that definition seemingly already captured all of the service’s content moderation rules. I assume the bill wants to know how the service’s editorial policies are operationalized, but it doesn’t make that clear. Plus, like Texas’ open-ended disclosure requirements, the unbounded disclosure obligation ensures litigation over (unavoidable) omissions.
Beyond the open-ended requirement, the bill enumerates an overwhelmingly complex list of required disclosures, which are far more invasive and burdensome than Texas’ plenty-burdensome demands:
- “Any existing policies intended to address” the Targeted Constitutionally Protected Content. Wasn’t this already addressed in the “TOS” definition?
- “How automated content moderation systems enforce terms of service of the social media platform and when these systems involve human review.” As discussed more below, this is a fine example of a disclosure where any investigation into its accuracy would be overly invasive.
- “How the social media company responds to user reports of violations of the terms of service.” Does this mean respond to the user or respond to notices through internal processes? At large services, the latter involves a complicated and constantly changing flowchart with lots of exceptions, so this would become another disclosure trap.
- “How the social media company would remove individual pieces of content, users, or groups that violate the terms of service, or take broader action against individual users or against groups of users that violate the terms of service.” What does “broader action” mean? Does that refer to account-level interventions instead of item-level interventions? As my Content Moderation Remedies paper showed, this topic is way more complicated than a binary remove/leave up dichotomy.
- “The languages in which the social media platform does not make terms of service available, but does offer product features, including, but not limited to, menus and prompts.” Given the earlier requirement to translate the TOS into these languages, this disclosure would be an admission of legal violations, no?
- With respect to the Targeted Constitutionally Protected Content, the following data:
- “The total number of flagged items of content.”
- Number of items “actioned.”
- “The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content.” I assume this means account-level actions based on the Targeted Constitutionally Protected Content?
- Number of items “removed, demonetized, or deprioritized.” Is this just a subset of the number reported in the second bullet above?
- “The number of times actioned items of content were viewed by users.”
- “The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned.” How is the second half of this requirement different from the prior bullet?
- “The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal disaggregated by each type of action.”
- All of the data disclosed in response to the prior bullet points must be broken down further by:
- Each of the five categories of the Targeted Constitutionally Protected Content.
- The type of content (posts vs. profile pages, etc.)
- The type of media (video vs. text, etc.)
- How the items were flagged (employees/contractors, “AI software,” “community moderators,” “civil society partners” and “users”–third party non-users aren’t enumerated but they are another obvious source of “flags”)
- “How the content was actioned” (same list of entities as the prior bullet)
All told, there are 7 categories of data disclosures, and the bill indicates that the five breakdown dimensions offer, respectively, 5 options, at least 5 options, at least 3 options, at least 5 options, and at least 5 options. So I believe each service's report must include no fewer than 161 different categories of disclosures (7×5 + 7×5 + 7×3 + 7×5 + 7×5).
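To make that arithmetic concrete, here's a minimal sketch (in Python; the dimension names are my own shorthand, not the bill's terminology, and the option counts are the statutory floors described above):

```python
# Reproduce the disclosure-category count: 7 data categories, each
# disaggregated separately along 5 breakdown dimensions.

DATA_CATEGORIES = 7  # flagged, actioned, user-level actions, removals,
                     # views, shares/pre-action views, appeals/reversals

BREAKDOWN_OPTIONS = {
    "targeted_content_category": 5,  # hate speech/racism, extremism, etc.
    "content_type": 5,               # posts, profile pages, ... (floor)
    "media_type": 3,                 # video, text, ... (floor)
    "flag_source": 5,                # staff, AI, community mods, partners, users
    "action_source": 5,              # same entity list as flag_source
}

# The breakdowns apply dimension-by-dimension, so the totals add rather
# than multiply: 7*5 + 7*5 + 7*3 + 7*5 + 7*5 = 161.
total = sum(DATA_CATEGORIES * n for n in BREAKDOWN_OPTIONS.values())
print(total)  # 161
```

Note that if the bill were read to require cross-tabulation across all five dimensions simultaneously, the counts would multiply instead of add (7 × 5 × 5 × 3 × 5 × 5 = 13,125 cells), so 161 is a conservative reading.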
Who will benefit from these disclosures? At minimum, unlike the purported justification cited by the 11th Circuit for Florida's disclosure requirements, the bill's required statistics cannot help consumers make better marketplace choices. By definition, each service can define each category of Targeted Constitutionally Protected Content differently, so consumers cannot compare the reported numbers across services. Furthermore, because services can change how they define each content category from time to time, it won't even be possible to compare a service's new numbers against prior numbers to determine if they are getting “better” or “worse” at managing the Targeted Constitutionally Protected Content. Services could even change their definitions so they don't have to report anything. For example, a service could create an omnibus category of “uncivil content/activity” that includes some or all of the Targeted Constitutionally Protected Content categories, in which case they wouldn't have to disclose anything. (Note also that this countermove would represent a change in the service's editorial practices impelled by the bill, which exacerbates the constitutional problem discussed below.) So who is the audience for the statistics, and what, exactly, will they learn from the required disclosures? Without clear and persuasive answers to these questions, it looks like the state is demanding the info purely as a raw exercise of power, not to benefit any constituency.
Remedies
Violations can trigger penalties of up to $15k/violation/day, and the penalties should at minimum be “sufficient to induce compliance with this act” but should be mitigated if the service “made a reasonable, good faith attempt to comply.” The AG can enforce the law, but so can county counsel and city DAs in some circumstances. The bill provides those non-AG enforcers with some financial incentives to chase the penalty money as a bounty.
An earlier draft of the bill expressly authorized private rights of action via B&P 17200. Fortunately, that provision got struck…but, unfortunately, in its place there's a provision saying that this bill is cumulative with any other law. As a result, I think the 17200 PRA is still available. If so, this bill will be a perpetual litigation machine. I would expect every lawsuit against a regulated service to add AB 587 claims for alleged omissions, misrepresentations, etc. Like the CCPA/CPRA, the bill should clearly eliminate all PRAs–unless the legislature wants Californians suing each other into oblivion.
Some Structural Problems with the Bill
Although the prior section identified some obvious drafting errors, fixing those errors won't make this a good bill. Here are some structural problems with the bill that can't be readily fixed:
The overall problem with mandatory editorial transparency. I just wrote a whole paper explaining why mandatory editorial transparency laws like AB 587 are categorically unconstitutional, so you should start with that if you haven’t already read it. To summarize, the disclosure requirements about editorial policies and practices functionally control speech by inducing publishers to make editorial decisions that will placate regulators rather than best serve the publisher’s audience. Furthermore, any investigation of the mandated disclosures puts the government in the position of supervising the editorial process, an “unhealthy entanglement.” I already mentioned one such example where regulators try to validate if the service properly described when it does manual vs. automated content moderation. Such an investigation would necessarily scrutinize and second-guess every aspect of the service’s editorial function.
Because of these inevitable speech restrictions, I believe strict scrutiny should apply to AB 587 without relying on the confused caselaw involving compelled commercial disclosures. In other words, I don't think Zauderer–a recent darling of the pro-censorship crowd–is the right test (I will have more to say on this topic). Further, Zauderer only applies when the disclosures are “uncontroversial” and “purely factual,” but the AB 587 disclosures are neither. The Targeted Constitutionally Protected Content categories all involve highly political topics, not the pricing terms at issue in Zauderer; and the disclosures require substantial and highly debatable exercises of judgment to make the classifications, so they are not “purely factual.” And even if Zauderer does apply, I think the disclosure requirements impose an undue burden. For example, if 161 different prophylactic “just-in-case” disclosures don't constitute an undue burden, I don't know what would.
The TOS definition problem. As I mentioned, what constitutes part of the “TOS” creates a litigation trap easily exploited by plaintiffs. Furthermore, if it requires the publication of policies and practices that justifiably should not be published, the law intrudes into editorial processes.
The favoritism shown to the Targeted Constitutionally Protected Content. The law “privileges” the five categories in the Targeted Constitutionally Protected Content for heightened attention by services, but there are many other categories of lawful-but-awful content that are not given equal treatment. Why?
This distinction between types of lawful-but-awful speech sends the obvious message to services that they need to pay closer attention to these content categories than the others. This implicit message to reprioritize content categories distorts the services' editorial prerogative, and if services get the message that they should manage the disclosed numbers down, the bill reduces constitutionally protected speech. However, services won't know if they should be managing the numbers down. The AG is a Democrat, so he's likely to prefer less lawful-but-awful content. However, many county prosecutors in red counties (yes, California has them) may prefer less content moderation of constitutionally protected speech and would investigate if they see the numbers trending down. Given that services are trapped between these competing partisan dynamics, they will be paralyzed in their editorial decision-making. This reiterates why the bill doesn't satisfy Zauderer's “uncontroversial” prong.
The problem classifying the Targeted Constitutionally Protected Content. Determining what fits into each category of the Targeted Constitutionally Protected Content is an editorial judgment that always will be subject to substantial debate. Consider, for example, how often the Oversight Board has reversed Facebook on similar topics. The plaintiffs can always disagree with the service’s classifications, and that puts them in the role of second-guessing the service’s editorial decisions.
Social media exceptionalism. As Benkler et al.'s book Network Propaganda showed, Fox News injects misinformation into the conversation, which then propagates to social media. So why does the bill target social media and not Fox News? More generally, the bill doesn't explain why social media needs this intervention compared to traditional publishers or even other types of online publishers (say, Breitbart?). Or is the state's position that it could impose equally invasive transparency obligations on the editorial decisions of other publishers, like newspapers and book publishers?
The favoritism shown to the excluded services. I think the state will have a difficult time justifying why some UGC services get a free pass from the requirements. It sure looks arbitrary.
The Dormant Commerce Clause. The bill does not restrict its reach to California. This creates several potential DCC problems:
- The bill reaches extraterritorially.
- It requires disclosures involving activity outside of California, including countries where the Targeted Constitutionally Protected Content is illegal. This makes it impossible to properly contextualize the numbers because the legislative restrictions may vary by country. It also leaves the services vulnerable to enforcement actions that their numbers are too high/low based on dynamics the services cannot control.
- If the bill reaches services not located in California, then it is regulating activity between a non-California service and non-California residents.
- The bill sets up potential conflicts with other states’ laws. For example, a recent NY law defines “hateful conduct” and provides specific requirements for dealing with it. This may or may not coincide with California’s requirements.
- The cumulative effect of different states’ disclosure requirements will surely become overly burdensome. For example, Texas’ disclosure requirements are structured differently than California’s. A service would have to build different reporting schemes to comply with the different laws. Multiply this times many other states, and the reporting burden becomes overwhelming.
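As a rough illustration of that engineering burden, consider a hypothetical sketch of per-jurisdiction report builders. Neither schema below is drawn from statutory text, and all field names are invented; the point is structural:

```python
# Hypothetical illustration of per-state reporting divergence.
# Neither schema tracks actual statutory text; the point is that
# differently structured laws force separate builders per jurisdiction.

CA_CATEGORIES = [
    "hate_speech_or_racism", "extremism_or_radicalization",
    "disinformation_or_misinformation", "harassment",
    "foreign_political_interference",
]

def build_ca_report(counts: dict[str, int]) -> dict:
    # AB 587-style: quarterly, keyed to the five targeted categories.
    return {"period": "quarterly",
            "by_category": {c: counts.get(c, 0) for c in CA_CATEGORIES}}

def build_tx_report(counts: dict[str, int]) -> dict:
    # Texas structures its disclosures differently (different cadence
    # and groupings), so the California builder can't be reused as-is.
    return {"period": "biannual", "total_actioned": sum(counts.values())}

REPORT_BUILDERS = {"CA": build_ca_report, "TX": build_tx_report}
# ...plus one more entry for every state that adopts its own format.
```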
Conclusion
Stepping back from the details, the bill can be roughly divided into two components: (1) the TOS publication and delivery component, and (2) the operational disclosures and statistics component. Abstracting the bill at this level highlights its pure cynicism.
The TOS publication and delivery component is obviously pointless. Any regulated platform already posts its TOS and likely addresses the specified topics, at least in some level of generality (and an obvious countermove to this bill will be for services to make their public-facing disclosures more general and less specific than they currently are). Consumers can already read those onsite TOSes if they care; and the AG’s office can already access those TOSes any time it wants. (Heck, the AG can even set up bots to download copies quarterly, or even more frequently, and I wonder if the AG’s office has ever used the Wayback Machine?). So if this provision isn’t really generating any new disclosures to consumers, it’s just creating technical traps that platforms might trip over.
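For what it's worth, such a bot would be trivial to build. A minimal sketch (the URL is a placeholder, and this assumes the TOS is served as an ordinary web page):

```python
# Fetch a platform's terms-of-service page and save a timestamped copy.
# The URL is a placeholder, not a real endpoint; schedule this with cron
# (or similar) to run quarterly, or as often as desired.
import datetime
import urllib.request

TOS_URL = "https://example-platform.example/terms"  # placeholder

def archive_tos() -> str:
    with urllib.request.urlopen(TOS_URL) as response:
        body = response.read()
    filename = f"tos-{datetime.date.today().isoformat()}.html"
    with open(filename, "wb") as f:
        f.write(body)
    return filename

if __name__ == "__main__":
    print(f"Saved {archive_tos()}")
```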
The operational disclosures and statistics component would likely create new public data, but as explained above, it’s data that is worthless to consumers. Like the TOS publication and delivery provision, it feels more like a trap for technical enforcements than a provision that benefits California residents. It’s also almost certainly unconstitutional. The emphasis on Targeted Constitutionally Protected Content categories seems designed to change the editorial decision-making of the regulated services, which is a flat-out form of censorship; and even if Zauderer is the applicable test, it seems likely to fail that test as well.
So if this provision gets struck and the TOS publication and delivery provision doesn’t do anything helpful, it leaves the obvious question: why is the California legislature working on this and not the many other social problems in our state? The answer to that question is surely dispiriting to every California resident.
Reposted, with permission, from Eric Goldman’s Technology & Marketing Law Blog.
Another Thought on Fair Use
Setting up a paywall around the most viral third-party content might hinder Twitter's fair use and Section 512 defenses if Twitter ever gets sued for copyright infringement over those tweets. But I'm sure Musk has thought all of this through.
With Friends Like This...
This is another example of "the enemy of my enemy is my friend" principle. But now that it's clear that Sen. Hawley foments sedition, it would be prudent for Hawley's former "friends" to question every point of agreement with him. Instead, some Democrats will keep pursuing the same bad ideas that Hawley supported and still supports--rationalizing to themselves that by cutting Hawley out of their game, NOW they are on the path of righteousness.
Will Congress ever learn?
Five years ago, Congress enacted the BOTS Act to target event ticket sniping. The FTC has brought a grand total of 1 BOTS Act enforcement action since then, yet I doubt anyone feels like event ticket sniping is fixed. So why would Congress think an anti-toy sniping law will fare any better?
Nice And Funny Too
Nice one
"While some companies view applicants' social media posts when considering them for employment, very few are demanding social media account information as part of the application process"
Just to clarify, more than 2 dozen states have enacted laws banning employers from demanding that employees or prospective employees give them the login credentials to their social media. http://www.ncsl.org/research/telecommunications-and-information-technology/state-laws-prohibiting-access-to-social-media-usernames-and-passwords.aspx There is also a proposed uniform law to that effect. http://www.uniformlaws.org/Act.aspx?title=Employee%20and%20Student%20Online%20Privacy%20Protection%20Act Eric.
UPS Settlement
As an earlier commenter mentioned, UPS paid the DOJ $40M to settle virtually identical charges. http://www.reuters.com/article/net-us-ups-pharmacies-settlement-idUSBRE92S0DX20130329 So even "novel" theories can generate tens of millions of dollars of ill-gotten gains when pushed by the DOJ and the resources of the U.S. government.
True, but...
Unfortunately, even if this ruling didn't do further violence to the DMCA, a different GrooveShark ruling did undermine the DMCA substantially. http://www.forbes.com/sites/ericgoldman/2013/04/24/more-evidence-that-congress-misaligned-its-online-copyright-safe-harbors-umg-v-grooveshark/
On the plus side...
...now we can legitimately tell the people of Afghanistan that the US government trusts its own citizens less than it trusts them. Eric.
YOLO = You Only Lose Once?
Anyone willing to lay odds that both Berman and Mack will land softly into a position funded by Hollywood, one way or another? Opening line is 100% chance.
Of course, unblocking the sites would only solve part of the problem. There may not be much interest among USPTO employees in reading websites so antithetical to their existing views. Eric.
The government will almost certainly abandon any case they are going to lose. That way, they will avoid accountability indefinitely. Meanwhile, the government will keep grabbing new domain names using the same BS theories. Eric.
I did a similar thought experiment regarding search engines: http://blog.ericgoldman.org/archives/2011/06/a_thought_exper.htm
The idea that a government-run website would improve a website's privacy and trustworthiness is beyond laughable. It completely ignores the massive abuses of our trust and privacy our government commits every day.
Eric.
OK, but Facebook is notoriously quick to block legitimate URLs as spammy. http://blog.ericgoldman.org/personal/archives/2010/12/distrust_in_the.html Maybe they need a more systemic fix to their manual spam-blocking procedures.
But did he get paid the minimum guild scale?
I've argued that these situations would be better handled through restorative justice. http://blog.ericgoldman.org/archives/2011/05/cyberbullying_a.htm Eric.
Of course, Alfred Perry is the same guy who reached out to law schools because "we still have much to learn." http://arstechnica.com/tech-policy/news/2012/02/paramount-humbled-by-sopa-protests-even-as-ceo-blasts-mob-mentality.ars Sounds like his hoped-for academic exchange isn't exactly improving the discourse. Eric.
I am a bit skeptical that any event organized along these lines would actually advance audience understanding very much. When I've organized or attended panels that featured both Hollywood insiders and vigorous opponents, my experience has been that the conversation tends to gravitate toward one or two irresolvable debates:
1) High-level philosophical debates, like "Speaker 1: Piracy is theft! Speaker 2: No, piracy is a form of marketing!"
2) Bogus statistics debates, like "Speaker 1: File sharing reduces music album sales. Speaker 2: No, album sales declined due to unbundling."
I haven't found these so-called discourses very illuminating. My observation is that this back-and-forth usually results only in reinforcing the audience's pre-existing beliefs as they give in to their confirmation biases.
I think more productive discussions could be had by developing a clear and concise statement of "the problem." In my opinion, the SOPA/PIPA advocates never gave us clean statements of "the problem" because they figured they could slam home their overzealous proposals without much defense of their merits. Advocates vaguely alluded to problems like "foreign rogue websites" (although not much empirical proof that foreign rogue websites were costing rightsowners money, combined with the statutory drafting defect that it's difficult to separate the foreign website goats from the domestic website sheep) or "piracy costs American jobs" (which at best really means that only certain industries are losing jobs, and even that's debatable). Perhaps there is value to digging into these problems in more abstract terms--if we're concerned about American jobs, under what circumstances does copyright or trademark infringement cost net jobs, and how best to remediate that; or if we're concerned about the difficulty reaching offshore actors, should we instantiate geographic borders into a borderless electronic network, and if so, what tools are best to do so. When the problems are reframed that way, the proffered speakers from Paramount may not have the requisite expertise; or at best their perspective would be only one of several that would be valuable to the discussion.
More generally, one recurring problem I see with lunchtime events in law schools--especially with student-organized events--is when in-house counsel give a talk and there's no counterbalance. By definition, in-house counsel will espouse the corporate line (and indeed, they usually have professional responsibility duties not to make public statements against their client's interests), so these speakers are almost never "neutral" or "balanced." As a result, it doesn't matter whether the in-house counsel are IP maximalists or minimalists; if they are presented on a standalone basis, the audience is almost certainly getting only one side of the story, unrebutted--usually to the audience's detriment.
But as Hollywood routinely insists, they only go after "the worst of the worst."
This is indeed good news and a success story for Internet advocacy, but the battles are hardly over. The pro-statute forces remain powerful and determined, and the actual rulings being issued by courts today without any statutory changes are very, very troubling. Eric.