California Seems To Be Taking The Exact Wrong Lessons From Texas And Florida’s Social Media Censorship Laws

from the who-does-this-help? dept

This post analyzes California AB 587, self-described as “Content Moderation Requirements for Internet Terms of Service.” I believe the bill will get a legislative hearing later this month.

A note about the draft I’m analyzing, posted here. It’s dated June 6, and it’s different from the version publicly posted on the legislature’s website (dated April 28). I’m not sure what the June 6 draft’s redlines compare to–maybe the bill as introduced? I’m also not sure if the June 6 draft will be the basis of the hearing, or if there will be more iterations between now and then. It’s exceptionally difficult for me to analyze bills that are changing rapidly in secret. When bill drafters secretly solicit feedback, every other constituency cannot follow along or share timely or helpful feedback. It’s especially ironic to see non-public activity for a bill that’s all about mandating transparency. ¯\_(ツ)_/¯

Who’s Covered by the Bill?

The bill applies to “social media platforms” that: “(A) Construct a public or semipublic profile within a bounded system created by the service. (B) Populate a list of other users with whom an individual shares a connection within the system. [and] (C) View and navigate a list of connections made by other individuals within the system.”

This definition of “social media” has been around for about a decade, and it’s awful. Critiques I made 8 years ago:

First, what is a “semi-public” profile, and how does it differ from a public or non-public profile? Is there even such a thing as a “semi-private” or “non-public” profile?…

Second, what does “a bounded system” mean?…The “bounded system” phrase sounds like a walled garden of some sort, but most walled gardens aren’t impervious. So what delimits the boundaries the statute refers to, and what does an “unbounded” system look like?

I also don’t understand what constitutes a “connection,” what a “list of connections” means, or what it means to “populate” the connection list. This definition of social media was never meant to be used as a statutory definition, and every word invites litigation.

Further, the legislature should–but surely has not–run this definition through a test suite to make sure it fits the legislature’s intent. In particular, which, if any, services offering user-generated content (UGC) functionality do NOT satisfy this definition? Though decades of litigation might ultimately answer the question, I expect the language likely covers all UGC services.

[Note: based on a quick Lexis search, I saw similar statutory language in about 20 laws, but I did not see any caselaw interpreting the language because I believe those laws are largely unused.]

The bill then excludes some UGC services:

  • Companies with less than $100M of gross revenue in the prior calendar year. There are many obvious problems with this standard, such as the fact that the revenue is enterprise-wide (so bigger businesses with small UGC components will be covered if they don’t turn off the UGC functionality), the lack of a phase-in period, the lack of a nexus for revenues derived from California, and the absence of any explanation for why $100M was selected instead of $50M, $500M, or whatever. Every legislator really ought to read this article about how to draft size metrics for Internet services.
  • Email service providers, “direct messaging” services, and “cloud storage or shared document or file collaboration.” All social media services are, in a sense, “cloud storage,” so what does this exclusion mean? ¯\_(ツ)_/¯
  • “A section for user-generated comments on a digital news internet website that otherwise exclusively hosts content published by” entities enumerated in the California Constitution, Article I(2)(b). Entities referenced in the Constitution: a “publisher, editor, reporter, or other person connected with or employed upon a newspaper, magazine, or other periodical publication, or by a press association or wire service” and “a radio or television news reporter or other person connected with or employed by a radio or television station.” I don’t know that any service can take advantage of this exclusion because every traditional publisher publishes content from freelancers and other non-employees, so the “exclusively hosts” requirement creates a null set. Also, this exclusion opts into the confusion about the statutory differences between traditional and new media. See some cases discussing that issue.
  • “Consumer reviews of products or services on an internet website that serves the exclusive purpose of facilitating online commerce.” Ha ha. Should we call this the “Amazon exclusion”? If so, I’m not sure they are getting their money’s worth. Does Amazon.com EXCLUSIVELY facilitate online commerce? 🤔  And if this exclusion doesn’t benefit Yelp and TripAdvisor–because they have reviews on things that don’t support e-commerce (like free-to-visit parks)–I can’t wait to see how the state explains why non-commercial consumer reviews need transparency while commercial ones do not.
  • “An internet-based subscription streaming service that is offered to consumers for the exclusive purpose of transmitting licensed media, including audio or video files, in a continuous flow from the internet-based service to the end user, and does not host user-generated content.” Should we call this the “Netflix exclusion”? I’d be grateful if someone could explain to me the differences between “licensed media” and “UGC.” 🤔

The Law’s Requirements

Publish the “TOS”

The bill requires social media platforms to post their terms of service (TOS), translated into every language they offer product features in. It defines “TOS” as:

a policy or set of policies adopted by a social media company that specifies, at least, the user behavior and activities that are permitted on the internet-based service owned or operated by the social media company, and the user behavior and activities that may subject the user or an item of content to being actioned. This may include, but is not limited to, a terms of service document or agreement, rules or content moderation guidelines, community guidelines, acceptable uses, and other policies and established practices that outline these policies.

To start, I need to address the ambiguity of what constitutes the “TOS” because it’s the most dangerous and censorial trap of the bill. Every service publishes public-facing “editorial rules,” but the published versions never can capture ALL of the service’s editorial rules. Exceptions include: private interpretations that are not shared to protect against gaming, private interpretations that are too detailed for public consumption, private interpretations that governments ask/demand the services don’t tell the public about, private interpretations that are made on the fly in response to exigencies, one-off exceptions, and more.

According to the bill’s definition, failing to publish all of these non-public “policies and practices” before taking action based on them could mean noncompliance with the bill’s requirements. Given the inevitability of such undisclosed editorial policies, it seems like every service always will be noncompliant.

Furthermore, to the extent the bill inhibits services from making an editorial decision using a policy/practice that hasn’t been pre-announced, the bill would control and skew the services’ editorial decisions. This pre-announcement requirement would have the same effect as Florida’s restriction barring services from updating their TOSes more than once every 30 days (a restriction the 11th Circuit held unconstitutional).

Finally, imagine trying to impose a similar editorial policy disclosure requirement on a traditional publisher like a newspaper or book publisher. They currently aren’t required to disclose ANY editorial policies, let alone ALL of them, and I believe any such effort to require such disclosures would obviously be struck down as an unconstitutional intrusion into the freedom of speech and press.

In addition to requiring the TOS’s publication, the bill says the TOS must include (1) a way to contact the platform to ask questions about the TOS, (2) descriptions of how users can complain about content and “the social media company’s commitments on response and resolution time” (drafting suggestion for regulated services: “We do not promise to respond ever”), and (3) “A list of potential actions the social media company may take against an item of content or a user, including, but not limited to, removal, demonetization, deprioritization, or banning.” I identified 3 dozen potential actions in my Content Moderation Remedies article, and I’m sure more exist or will be developed, so the remedies list would need to be long, and I’m not sure how a platform could pre-announce the full universe of possible remedies.
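To make the point concrete, here’s a minimal sketch (in Python) of what a pre-announced remedies taxonomy might look like. The action names are illustrative assumptions on my part, not drawn from the bill or from any service’s actual taxonomy:

    from enum import Enum

    # An illustrative, necessarily incomplete sample of remedies. My
    # Content Moderation Remedies article catalogs roughly three dozen,
    # and services keep inventing new ones, so no statutory enumeration
    # will stay complete for long.
    class ModerationAction(Enum):
        REMOVE = "remove"
        DEMONETIZE = "demonetize"
        DEPRIORITIZE = "deprioritize"
        BAN = "ban"
        LABEL = "label"                        # e.g., warning/context labels
        AGE_GATE = "age_gate"
        DISABLE_COMMENTS = "disable_comments"
        SUSPEND = "suspend"                    # temporary account action
        # ...the real universe keeps growing as practices evolve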

Information Disclosures to the CA AG

Once a quarter, the bill would require platforms to deliver to the CA AG the current TOS, a “complete and detailed description” of changes to the TOS in the prior quarter, and a statement of whether the TOS defines any of the following five terms and what the definitions are: “Hate speech or racism,” “Extremism or radicalization,” “Disinformation or misinformation,” “Harassment,” and “Foreign political interference.” [If the definitions are from the TOS, can’t the AG just read that?]. I’ll call the enumerated five content categories the “Targeted Constitutionally Protected Content.”

In addition, the platforms would need to provide a “detailed description of content moderation practices used by the social media.” This seems to contemplate more disclosures than just the “TOS,” but that definition seemingly already captured all of the service’s content moderation rules. I assume the bill wants to know how the service’s editorial policies are operationalized, but it doesn’t make that clear. Plus, like Texas’ open-ended disclosure requirements, the unbounded disclosure obligation ensures litigation over (unavoidable) omissions.

Beyond the open-ended requirement, the bill enumerates an overwhelmingly complex list of required disclosures, which are far more invasive and burdensome than Texas’ plenty-burdensome demands:

  • “Any existing policies intended to address” the Targeted Constitutionally Protected Content. Wasn’t this already addressed in the “TOS” definition?
  • “How automated content moderation systems enforce terms of service of the social media platform and when these systems involve human review.” As discussed more below, this is a fine example of a disclosure where any investigation into its accuracy would be overly invasive.
  • “How the social media company responds to user reports of violations of the terms of service.” Does this mean respond to the user or respond to notices through internal processes? At large services, the latter involves a complicated and constantly changing flowchart with lots of exceptions, so this would become another disclosure trap.
  • “How the social media company would remove individual pieces of content, users, or groups that violate the terms of service, or take broader action against individual users or against groups of users that violate the terms of service.” What does “broader action” mean? Does that refer to account-level interventions instead of item-level interventions? As my Content Moderation Remedies paper showed, this topic is way more complicated than a binary remove/leave up dichotomy.
  • “The languages in which the social media platform does not make terms of service available, but does offer product features, including, but not limited to, menus and prompts.” Given the earlier requirement to translate the TOS into these languages, this disclosure would be an admission of legal violations, no?
  • With respect to the Targeted Constitutionally Protected Content, the following data:
    • “The total number of flagged items of content.”
    • Number of items “actioned.”
    • “The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content.” I assume this means account-level actions based on the Targeted Constitutionally Protected Content?
    • Number of items “removed, demonetized, or deprioritized.” Is this just a subset of the number reported in the second bullet above?
    • “The number of times actioned items of content were viewed by users.”
    • “The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned.” How is the second half of this requirement different from the prior bullet?
    • “The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal disaggregated by each type of action.”
    • All of the data disclosed in response to the prior bullet points must be broken down further by:
      • Each of the five categories of the Targeted Constitutionally Protected Content.
      • The type of content (posts vs. profile pages, etc.)
      • The type of media (video vs. text, etc.)
      • How the items were flagged (employees/contractors, “AI software,” “community moderators,” “civil society partners” and “users”–third party non-users aren’t enumerated but they are another obvious source of “flags”)
      • “How the content was actioned” (same list of entities as the prior bullet)

All told, there are 7 categories of disclosures, each of which must be broken down along five dimensions with, respectively, 5 options, at least 5 options, at least 3 options, at least 5 options, and at least 5 options. So I believe each service’s reports must include no fewer than 161 different categories of disclosures (7×5 + 7×5 + 7×3 + 7×5 + 7×5).
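For readers who want to check my math, here’s the computation as a short Python sketch, treating the bill’s “at least” figures as floors (so the true number only goes up):

    # Seven categories of disclosed data (flagged; actioned; user-level
    # actions; removed/demonetized/deprioritized; views; shares plus
    # pre-action views; appeals and reversals), each broken down along
    # five dimensions.
    DATA_CATEGORIES = 7
    BREAKDOWN_OPTIONS = [
        5,  # the five Targeted Constitutionally Protected Content categories
        5,  # type of content (posts, profile pages, ...): "at least"
        3,  # type of media (video, text, ...): "at least"
        5,  # how flagged (employees, AI, community mods, partners, users)
        5,  # how actioned (same list of entities)
    ]
    total = DATA_CATEGORIES * sum(BREAKDOWN_OPTIONS)
    print(total)  # 7*(5+5+3+5+5) = 161 disclosure cells, at a minimum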

Who will benefit from these disclosures? At minimum, unlike the purported justification cited by the 11th Circuit for Florida’s disclosure requirements, the bill’s required statistics cannot help consumers make better marketplace choices. By definition, each service can define each category of Targeted Constitutionally Protected Content differently, so consumers cannot compare the reported numbers across services. Furthermore, because services can change how they define each content category from time to time, it won’t even be possible to compare a service’s new numbers against prior numbers to determine if they are getting “better” or “worse” at managing the Targeted Constitutionally Protected Content. Services could even change their definitions so they don’t have to report anything. For example, a service could create an omnibus category of “uncivil content/activity” that includes some or all of the Targeted Constitutionally Protected Content categories, in which case they wouldn’t have to disclose anything. (Note also that this countermove would represent a change in the service’s editorial practices impelled by the bill, which exacerbates the constitutional problem discussed below.) So who is the audience for the statistics and what, exactly, will they learn from the required disclosures? Without clear and persuasive answers to these questions, it looks like the state is demanding the info purely as a raw exercise of power, not to benefit any constituency.

Remedies

Violations can trigger penalties of up to $15k/violation/day, and the penalties should at minimum be “sufficient to induce compliance with this act” but should be mitigated if the service “made a reasonable, good faith attempt to comply.” The AG can enforce the law, but so can county counsel and city DAs in some circumstances. The bill provides those non-AG enforcers with some financial incentives to chase the penalty money as a bounty.

An earlier draft of the bill expressly authorized private rights of action via B&P 17200. Fortunately, that provision got struck…but, unfortunately, in its place there’s a provision saying that this bill is cumulative with any other law. As a result, I think the 17200 PRA is still available. If so, this bill will be a perpetual litigation machine. I would expect every lawsuit against a regulated service to add AB 587 claims for alleged omissions, misrepresentations, etc. Like the CCPA/CPRA, the bill should clearly eliminate all PRAs–unless the legislature wants Californians suing each other into oblivion.

Some Structural Problems with the Bill

Although the prior section identified some obvious drafting errors, fixing those errors won’t make this a good bill. Here are some structural problems with the bill that can’t be readily fixed.

The overall problem with mandatory editorial transparency. I just wrote a whole paper explaining why mandatory editorial transparency laws like AB 587 are categorically unconstitutional, so you should start with that if you haven’t already read it. To summarize, the disclosure requirements about editorial policies and practices functionally control speech by inducing publishers to make editorial decisions that will placate regulators rather than best serve the publisher’s audience. Furthermore, any investigation of the mandated disclosures puts the government in the position of supervising the editorial process, an “unhealthy entanglement.” I already mentioned one such example where regulators try to validate if the service properly described when it does manual vs. automated content moderation. Such an investigation would necessarily scrutinize and second-guess every aspect of the service’s editorial function.

Because of these inevitable speech restrictions, I believe strict scrutiny should apply to AB 587 without relying on the confused caselaw involving compelled commercial disclosures. In other words, I don’t think Zauderer–a recent darling of the pro-censorship crowd–is the right test (I will have more to say on this topic). Further, Zauderer only applies when the disclosures are “uncontroversial” and “purely factual,” but the AB 587 disclosures are neither. The Targeted Constitutionally Protected Content categories all involve highly political topics, not the pricing terms at issue in Zauderer; and the disclosures require substantial and highly debatable exercises of judgment to make the classifications, so they are not “purely factual.” And even if Zauderer does apply, I think the disclosure requirements impose an undue burden. For example, if 161 different prophylactic “just-in-case” disclosures don’t constitute an undue burden, I don’t know what would.

The TOS definition problem. As I mentioned, what constitutes part of the “TOS” creates a litigation trap easily exploited by plaintiffs. Furthermore, if it requires the publication of policies and practices that justifiably should not be published, the law intrudes into editorial processes.

The favoritism shown to the Targeted Constitutionally Protected Content. The law “privileges” the five categories in the Targeted Constitutionally Protected Content for heightened attention by services, but there are many other categories of lawful-but-awful content that are not given equal treatment. Why?

This distinction between types of lawful-but-awful speech sends the obvious message to services that they need to pay closer attention to these content categories over the others. This implicit message to reprioritize content categories distorts the services’ editorial prerogative, and if services get the message that they should manage the disclosed numbers down, the bill reduces constitutionally protected speech. However, services won’t know if they should be managing the numbers down. The AG is a Democrat, so he’s likely to prefer less lawful-but-awful content. But many county prosecutors in red counties (yes, California has them) may prefer less content moderation of constitutionally protected speech and would investigate if they see the numbers trending down. Given that services are trapped between these competing partisan dynamics, they will be paralyzed in their editorial decision-making. This reiterates why the bill doesn’t satisfy Zauderer’s “uncontroversial” prong.

The problem classifying the Targeted Constitutionally Protected Content. Determining what fits into each category of the Targeted Constitutionally Protected Content is an editorial judgment that always will be subject to substantial debate. Consider, for example, how often the Oversight Board has reversed Facebook on similar topics. The plaintiffs can always disagree with the service’s classifications, and that puts them in the role of second-guessing the service’s editorial decisions.

Social media exceptionalism. As Benkler et al’s book Network Propaganda showed, Fox News injects misinformation into the conversation, which then propagates to social media. So why does the bill target social media and not Fox News? More generally, the bill doesn’t explain why social media needs this intervention compared to traditional publishers or even other types of online publishers (say, Breitbart?). Or is the state’s position that it could impose equally invasive transparency obligations on the editorial decisions of other publishers, like newspapers and book publishers?

The favoritism shown to the excluded services. I think the state will have a difficult time justifying why some UGC services get a free pass from the requirements. It sure looks arbitrary.

The Dormant Commerce Clause. The bill does not restrict its reach to California. This creates several potential DCC problems:

  • The bill reaches extraterritorially.
    • It requires disclosures involving activity outside of California, including countries where the Targeted Constitutionally Protected Content is illegal. This makes it impossible to properly contextualize the numbers because the legislative restrictions may vary by country. It also leaves the services vulnerable to enforcement actions that their numbers are too high/low based on dynamics the services cannot control.
    • If the bill reaches services not located in California, then it is regulating activity between a non-California service and non-California residents.
  • The bill sets up potential conflicts with other states’ laws. For example, a recent NY law defines “hateful conduct” and provides specific requirements for dealing with it. This may or may not coincide with California’s requirements.
  • The cumulative effect of different states’ disclosure requirements will surely become overly burdensome. For example, Texas’ disclosure requirements are structured differently than California’s. A service would have to build different reporting schemes to comply with the different laws. Multiply this times many other states, and the reporting burden becomes overwhelming.

Conclusion

Stepping back from the details, the bill can be roughly divided into two components: (1) the TOS publication and delivery component, and (2) the operational disclosures and statistics component. Abstracting the bill at this level highlights the bill’s pure cynicism.

The TOS publication and delivery component is obviously pointless. Any regulated platform already posts its TOS and likely addresses the specified topics, at least in some level of generality (and an obvious countermove to this bill will be for services to make their public-facing disclosures more general and less specific than they currently are). Consumers can already read those onsite TOSes if they care; and the AG’s office can already access those TOSes any time it wants. (Heck, the AG can even set up bots to download copies quarterly, or even more frequently, and I wonder if the AG’s office has ever used the Wayback Machine?). So if this provision isn’t really generating any new disclosures to consumers, it’s just creating technical traps that platforms might trip over.
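To illustrate how trivial that bot would be, here’s a minimal sketch using only the Python standard library. The URL is hypothetical, and a real archiving effort would add error handling and storage management:

    # Archive a platform's public TOS page under a timestamped filename.
    # Schedule quarterly (or more often) with cron or any task scheduler.
    import datetime
    import urllib.request

    TOS_URL = "https://platform.example/terms"  # hypothetical URL

    def snapshot_tos() -> str:
        with urllib.request.urlopen(TOS_URL, timeout=30) as resp:
            body = resp.read()
        stamp = datetime.date.today().isoformat()
        path = f"tos-snapshot-{stamp}.html"
        with open(path, "wb") as f:
            f.write(body)
        return path

    if __name__ == "__main__":
        print("saved", snapshot_tos())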

The operational disclosures and statistics component would likely create new public data, but as explained above, it’s data that is worthless to consumers. Like the TOS publication and delivery provision, it feels more like a trap for technical enforcement than a provision that benefits California residents. It’s also almost certainly unconstitutional. The emphasis on the Targeted Constitutionally Protected Content categories seems designed to change the editorial decision-making of the regulated services, which is a flat-out form of censorship; and even if Zauderer is the applicable test, the bill seems likely to fail that test as well.

So if this provision gets struck and the TOS publication and delivery provision doesn’t do anything helpful, that leaves the obvious question: why is the California legislature working on this and not the many other social problems in our state? The answer to that question is surely dispiriting to every California resident.

Reposted, with permission, from Eric Goldman’s Technology & Marketing Law Blog.



Comments on “California Seems To Be Taking The Exact Wrong Lessons From Texas And Florida’s Social Media Censorship Laws”

Anonymous Coward says:

Maybe there needs to be a new law, e.g. Masnick’s law: any law that affects social media, apps, or services that host UGC (content made by the public) is likely to be complied with across the United States, which will have a lowest-common-denominator effect, i.e. the worst, most burdensome laws will affect all users regardless of the fact that they are legally in force only in Texas, Florida, etc.
There’s no practical way for Facebook to apply different state laws in different states.
There’s no way the public will read pages of moderation guidelines or policies; this will simply help trolls or spammers game the system and make Facebook and Google stronger, since they have the staff to moderate and display moderation policies.
Like TOS (terms of service), only lawyers, civil servants, or people looking to sue will read all this content.
It’s well known that Facebook uses programs to choose what content to display and how to rank it; posts with short video content may be shown before simple text posts.

Darkness Of Course (profile) says:

Whose nephew read DB for Dummies

The details are highly suspicious. Somebody sat down with a rum and Coke, imagined what they would do if they owned one of those businesses, and folded that together with claims about how their business was impacted by things they have never understood.

The details all add up to this: the people sponsoring the bill are trying to make every possible limitation on “their” business practices illegal.

Is this going to pass because of the rural/city mix of Senators/Representatives in California?

Lostinlodos (profile) says:

WOT, but not what you think

“what is a ‘semi-public’ profile”

One in which some portion less than the entirety is public; the remainder is private or invite-only.

bounded system

You have it right, a walled garden. Though I’m not sure the likely intended targets, Facebook or Twitter, qualify. As both have a public content existence off the platform.

connection

Contact or interaction.

“cloud storage or shared document…what does this exclusion mean?”

You know exactly what they mean. Unfortunately it’s bad for a law to be so generic.

…“‘Amazon exclusion’? If so, I’m not sure they are getting their money’s worth.”

Why does everyone forget eBay and DVD empire?

“Should we call this the ‘Netflix exclusion’?”

Well, there’s also stuff like Apple Music and Tubi, or maybe it’s the real Amazon carve-out for Prime Video?
But as for differentiation from UGC: UGC may be self-created, authorised, licensed, or illegal.

“Exceptions include: private interpretations that are not shared to protect against gaming,”

We’re never going to agree on this, but I’m firmly against not listing rules and prohibitions.

“private interpretations that are too detailed for public consumption,”

Huh? Just… huh?

“private interpretations that governments ask/demand the services don’t tell the public about,”

Well, guess this law overrules that for state and sub-state jurisdictions. But government secrecy actions like that should be banned as unconstitutional; they violate due process.

“private interpretations that are made on the fly in response to exigencies,”

How hard is it to edit a page and add a word or few?

“one-off exceptions,”

Shouldn’t be made. If your rule is detrimental enough to require exceptions, then your rule is flawed and should be reversed.

“and more.”

Uh oh.

“could mean noncompliance with the bill’s requirements”

That’s a good thing. There should never be non-public rules.

“bill would control and skew the services’ editorial decisions”

Or, change the TOS and then act on the new rule.

“They currently aren’t required to disclose ANY”

They don’t host user content, beyond op-eds and commentary. But I believe op-ed and commentary rules should be publicly published.

TOS must include

All sounds good here.

“How automated content moderation systems enforce terms of service”

That’s a real problem! Given that “automated” includes AI, that’s not actually possible, unless we jettisoned self-modifying moderation tools. Which I don’t consider a bad idea.

“How the social media”

Looks like all the good things at the beginning of the bill, noted above, are for naught, as now the bill is going off the cliff!

“it looks like the state is demanding the info purely as a raw exercise of power”

See, here they have crammed together a good beginning, with good ideas, then shoved a bunch of bad ideas in the second half.

As far as I can tell there’s one quick, easy solution, one that Democrats hate: completely stop the censorious act of deletion and switch to user-level flag-and-hide. A method I support, but totally against the Democratic push.
And a method that generally makes much of the second half of the bill a non-issue.

…requires the publication of policies and practices that justifiably should not be published…

Well, I reject that premise: adding defined rules as they are dodged is not an undue burden, but the actual job of moderation…

In a more realistic view, we have a problem here. The targeted content suppliers aren’t detailed, and this will go far beyond the likes of Facebook and Twitter (social media).

The NYT hosts UGC! And falls into the monetary range. So do WaPo, the NYP, CNN, and Fox News.
Many news sites have forums, chats, or other interactive features.

And here’s a nice conspiracy moment. At first glance it appears to be an attempt to make services show the public exactly how they moderate, completely.
But I question if this is more a Dem attack on “conservative” media. Forcing them to display what they cover allows directed attacks on anything not included. Or on alternative media: I can think of dozens of alt ideas that trigger the left and the right.
And how fast will this be used to trash a site that actually doesn’t delete “except as required by law”?

Aside from my preference for simply allowing community moderation alone, EARBL, which removes any coverage via this bill…

A company could simply black-hole California IP addresses. Nice. Then what?
I’ve stated such an option about FL and TX as well.
It’s not likely; much of a law like this could be destroyed in court by an affected company.
But I wouldn’t disregard some company’s choice to simply say FUCAL and, bam, gone.

Laws like this are “look at me” laws. ‘Look, I doed some stuffs’
The sole goal in the real world is to point out you tried in the next election cycle.

Like Florida and Texas, I think the goal is to pass a law and have it shut down later. For the moment you can say I DID, vs didn’t.

Tanner Andrews (profile) says:

Definitions

First, what is a “semi-public” profile,

How about a profile that is only visible to “members” of the service. If wankporn.com shows its user profiles only to logged-in users, then those would be semi-public.

Second, what does “a bounded system” mean?

I will take it as a “walled garden.” For instance, if you can have wankporn.com friends, who by definition must have profiles on that site and who may post content on that site, then you have a bounded system. Even if you can link out from wankporn.com to other websites, the idea is that there is a collection of material on wankporn.com, and you can browse within that “walled garden” secure in the knowledge that you will have only appropriate content so long as you stay on that website.

None of this is to suggest that regulation, even with sound definitions for semi-public profiles and bounded systems, could possibly comport with the U.S. First Amendment.
