ExTwitter Unfortunately Loses Round One In Challenging Problematic Content Moderation Law

from the well,-that's-unfortunate dept

Back in September we praised Elon Musk for deciding to challenge California’s new social media transparency law, AB 587. As we discussed while the bill was being debated, although it’s framed as a transparency bill, it has all sorts of problems. It would (1) enable California government officials (including local officials) to effectively pressure social media companies over how they moderate, by enabling litigation for somehow failing to live up to their terms of service, (2) make it far more difficult for social media companies to deal with bad actors by limiting how often they can change their terms of service, and (3) hand bad and malicious actors a road map for claiming they’re respecting the rules while clearly abusing them.

Yet the largest social media companies (including Meta and Google) are apparently happy with the law, because they know it creates another moat for them. They can handle the law’s compliance requirements, but they know that smaller competitors cannot. And, because of that, it wasn’t clear if anyone would actually challenge the law.

A few Twitter users sued last year, but with a very silly lawyer, and had the case thrown out because none of the plaintiffs had standing. But in the fall, ExTwitter filed suit to block the law from going into effect, using esteemed 1st Amendment lawyer Floyd Abrams (though Abrams has had a series of really bad takes on the 1st Amendment and tech over the past decade or so).

The complaint still seemed solid, and Elon deserved kudos for standing up for the 1st Amendment here, especially given the larger tech companies’ unwillingness to challenge the law.

Unfortunately, though, the initial part of the lawsuit, seeking a preliminary injunction barring the law from going into effect, has failed. Judge William Shubb has sided with California against ExTwitter, finding that Elon’s company failed to show a likelihood of success in the case.

The ruling relies heavily on a near total misreading of the Zauderer case, which addresses when compelled commercial speech is allowed under the 1st Amendment. As we discussed with Professor Eric Goldman a while back, Zauderer was decided on narrow grounds: the government could mandate transparency if the mandate was about the text in advertisements, required disclosure of purely factual information, the disclosed information was uncontroversial, and the disclosure concerned the terms of the advertiser’s services. Even if all those conditions are met, the law might still be found unconstitutional if the disclosure requirements are not related to preventing consumer deception, or if they are unduly burdensome.

As Professor Goldman has compellingly argued, laws requiring social media companies to reveal their moderation policies to government officials meet basically none of the Zauderer conditions. They’re not about advertising. They don’t require purely factual information. The disclosures can be extremely controversial. The disclosures are not about any advertiser’s services. And, on top of that, the requirements have nothing to do with preventing consumer deception and can be unduly burdensome.

A New York court threw out a similar law, recognizing that Zauderer shouldn’t be stretched this far.

Unfortunately, Shubb goes the other way, arguing that Zauderer makes this kind of mandatory disclosure compatible with the 1st Amendment. He does so by rewriting the Zauderer test, leaving out some of its important conditions, and then misapplying what remains:

Considered as such, the terms of service requirement appears to satisfy the test set forth by the Supreme Court in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985), for determining whether governmentally compelled commercial disclosure is constitutionally permissible under the First Amendment. The information required to be contained in the terms of service appears to be (1) “purely factual and uncontroversial,” (2) “not unjustified or unduly burdensome,” and (3) “reasonably related to a substantial government interest.”

The court admits that the compelled speech here is different, but seems to think it’s okay, citing both the 5th and 11th Circuits in the NetChoice cases (both of which also applied the Zauderer test incorrectly, which is why we pointed out that this part of the otherwise strong 11th Circuit decision was going to be a problem):

The reports to the Attorney General compelled by AB 587 do not so easily fit the traditional definition of commercial speech, however. The compelled disclosures are not advertisements, and social media companies have no particular economic motivation to provide them. Nevertheless, the Fifth and Eleventh Circuits recently applied Zauderer in analyzing the constitutionality of strikingly similar statutory provisions requiring social media companies to disclose information going well beyond what is typically considered “terms of service.”

Even so, this application of the facts to the misconstrued Zauderer test… just seems wrong?

Following the lead of the Fifth and Eleventh Circuits, and applying Zauderer to AB 587’s reporting requirement as well, the court concludes that the Attorney General has met his burden of establishing that the reporting requirement also satisfies Zauderer. The reports required by AB 587 are purely factual. The reporting requirement merely requires social media companies to identify their existing content moderation policies, if any, related to the specified categories. See Cal. Bus. & Prof. Code § 22677. The statistics required if a company does choose to utilize the listed categories are factual, as they constitute objective data concerning the company’s actions. The required disclosures are also uncontroversial. The mere fact that the reports may be “tied in some way to a controversial issue” does not make the reports themselves controversial.

But… that’s not even remotely accurate, on multiple counts. It is not “purely factual information” that is required to be disclosed. The disclosure is about the highly subjective and constantly changing processes by which social media sites choose to moderate. And beyond covering far more than merely factual information, the required disclosures are also extraordinarily controversial.

And that’s not just because they’re often tied to controversial issues, but rather because users of social media are constantly “rules litigating” moderation decisions, and insisting that websites should or should not moderate in certain ways. The entire point of this law is to try to pressure websites to moderate in a certain way (which alone should show the Constitutional infirmities in the law). In this case, it’s California trying to force websites to remove “hate speech” by demanding they reveal their hate speech policies.

Now, assuming most of you don’t like hate speech, you might not see this as all that controversial. But if this is allowed, what’s to stop other states from requiring the same thing regarding how companies deal with other issues, like LGBTQ content or criticism of the police?

But, the court here insists that this is all uncontroversial.

And worse, the ruling ignores that the Zauderer test is limited to disclosures aimed at preventing consumer deception.

The California bill has fuck all to do with consumer deception. It is entirely about pressuring websites over how they moderate.

Also, Shubb shrugs off the idea that this law might be unduly burdensome:

While the reporting requirement does appear to place a substantial compliance burden on social media companies, it does not appear that the requirement is unjustified or unduly burdensome within the context of First Amendment law.

The court also (again, incorrectly in my opinion) rejects ExTwitter’s reasonable argument that Section 230 preempts the law. Section 230 explicitly preempts any state law that seeks to limit a website’s independence in making moderation decisions, and thus this law should be preempted as such. Not so, says the court:

AB 587 is not preempted. Plaintiff argues that “[i]f X Corp. takes actions in good faith to moderate content that is ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,’ without making the disclosures required by AB 587, it will be subject to liability,” thereby contravening section 230. (Pl.’s Mem. (Docket No. 20) at 72.) This interpretation is unsupported by the plain language of the statute. AB 587 only contemplates liability for failing to make the required disclosures about a company’s terms of service and statistics about content moderation activities, or materially omitting or misrepresenting the required information. See Cal. Bus. & Prof. Code § 22678(2). It does not provide for any potential liability stemming from a company’s content moderation activities per se. The law therefore is not inconsistent with section 230(c) and does not interfere with companies’ ability to “self-regulate offensive third party content without fear of liability.” See Doe, 824 F.3d at 852. Accordingly, section 230 does not preempt AB 587.

Again, this strikes me as fundamentally wrong. The whole point of the law is to force websites to moderate in a certain way, and to limit how they can moderate in many scenarios, thus creating liability for moderation decisions based on whether or not those decisions match the policies disclosed to government officials under the law. That seems squarely within the preemption provisions of Section 230.

This is a disappointing ruling, though it comes at only the first stage of the case. One hopes that Elon will appeal the decision, and that the 9th Circuit will have a better take on the matter.

Indeed, I’d almost hope this case is one that makes it to the Supreme Court, given the makeup of the Justices today and the (false, but whatever) belief that Elon has enabled “more free speech” on ExTwitter. This might be a case where the conservative Justices finally understand why these kinds of transparency laws are problematic, by seeing how California is using them (as opposed to the Florida and Texas laws the Court is currently reviewing, where that wing is more likely to side with those states and their goals).

Companies: twitter, x


Comments on “ExTwitter Unfortunately Loses Round One In Challenging Problematic Content Moderation Law”

nerdrage (profile) says:

Re: in that case...

Unmoderated content is not popular with advertisers and they pay the bills.

Maybe the state of California should just step back and let the free market solve this conundrum. Content moderation is the cost of doing business for ad-based business models. Or to put it another way: if you don’t want to pay mods, then put up a paywall in lieu of advertising revenue.

blakestacey (profile) says:

This chapter shall not apply to any of the following:
(a) A social media company that generated less than one hundred million dollars ($100,000,000) in gross revenue during the preceding calendar year.
(b) A service that exclusively conveys email.

Well, the good news is that Mastodon instances are safe, because ActivityPub is just like email!*

*source: every attempt to explain ActivityPub

Anonymous Coward says:

A judge who misreads/misinterprets a Supreme Court decision this badly is either incompetent or pursuing a personal objective. In either case, we need a way to remove these judges from their position. The judicial system acts at a glacial pace and is far too expensive for both sides of a case. Idiotic rulings such as this one just add to the expense, because an appeal will surely override this nonsense.

urza9814 says:

Is it a contract or not?

I think the real question here is whether the terms of service are a contract or just a vague, non-binding promise.

If it’s a contract, then they have to provide precise terms of that contract to all parties. And if they can provide that contract to all of their millions of users then they can surely provide it to the government as well. And limiting how often that contract can be unilaterally changed seems pretty reasonable; I certainly don’t have the time to be re-reading a new ToS every goddamn day! Although it is pretty silly to only apply such laws to social media…

These documents are certainly written as though they’re contracts. I have to agree to one as though it’s a contract; you don’t have to check a box to agree to some vague advertised promise, the way you usually do with a ToS. If they’re going to make me “sign a document” saying I agree to their terms, then yeah, they should be legally required to comply with those terms themselves, and they shouldn’t be able to change those terms on me every single day or every single hour. I don’t see the problem here. Contracts go both ways.

Software companies have spent DECADES trying to get these clickwrap agreements treated like contracts; now that governments are actually doing that, they’re complaining that it’s too burdensome?

Stephen T. Stone (profile) says:

Re:

You can’t treat a TOS like a contract because then you’ll have rules-lawyering trolls looking for loopholes, which would require a bunch of additions to a TOS over time to handle all the possible edge cases, which would in turn require notifying every user of all these TOS changes every time the TOS changed, which in turn loops us back to the rules-lawyering trolls. No TOS can ever cover every possible edge case for content moderation⁠—nor should it.

Ethin Probst (profile) says:

Re: Re:

The problem with this argument is that companies do treat a TOS as a contract. There’s always (somewhere) in the TOS something like “By using x, you agree to be bound by…”. The usage of the word “bound” or similar implies legal force, which implies that the TOS is, in fact, a contract. If the companies don’t want these treated like contracts, maybe they shouldn’t be writing them like one, and should make it very clear that the terms are not legally binding or enforceable.

Rocky says:

Re:

Software companies have spent DECADES trying to get these clickwrap agreements treated like contracts; now that governments are actually doing that, they’re complaining that it’s too burdensome?

You mean a “contract” foisted upon a buyer after they bought a piece of software for money?

Compare that to being presented with “To use our service you agree to abide by our TOS/AUP” first.

There is a very big difference between the two.

Ethin Probst (profile) says:

Re: Re:

Not really. It’s still a contract either way, and companies are allowed to use these contracts as weapons of legal enforcement. It would make sense, then, that the government should treat them as such. As I said above: if these companies don’t want their TOS considered as a contract of sorts (regardless of whether it’s presented before or after the agreement is made) then they shouldn’t write them as contracts. Simple. Maybe I’m misunderstanding something, but I view TOSs as contracts because pretty much every TOS I’ve read looks exactly like a contract.

Rocky says:

Re: Re: Re:

Not really

Forcing a contract on someone after they’ve spent money on a product has legal implications, which is why click-wrap agreements are largely unenforceable in the US.

And I think you misinterpreted what I was saying, because I didn’t say a TOS/AUP isn’t a contract; I said there’s a difference in how they go about it and in which order. Whether a TOS/AUP is a real contract or not doesn’t really matter, because a service owner still has full control of their property and who they want to associate with.

The difference I’m talking about here is that a TOS/AUP tries to make sure that a user is aware of the rules before accepting them, while a click-wrap license is a take-it-or-leave-it choice after someone has spent money. The latter isn’t what contract law describes as a “meeting of the minds,” which is why such licenses are legally dubious.

Rocky says:

Re: Re: Re:3

It’s simple: those who are most affected by it don’t have the money to take the companies to court.

It’s also a way for the companies to give notice of their rights (copyright and other intellectual property) and to limit their liability in some ways. If they stopped using these types of licenses, the legal grey areas would vastly expand, since someone could argue that the companies didn’t assert their rights, or that they are suddenly liable for third-party actions using their software.

Anonymous Coward says:

A set of moderation policies does not constitute purely factual information, in the sense that the company might very well make moderation decisions which violate its own policies, if the company determines that the policies are wrong. Any statement along the lines of “we will remove content if …” is a prediction, not a fact.


Matthew M Bennett says:

No countries have free speech besides the US, including Europe

…And the blue states very much want to be closer to Europe. Including the judges. The 1A will be gone in a blink if you let it.

Of course, this is essentially just formalizing the government pressure campaigns to induce censorship that already exist (including in CA, which the 9th Circuit upheld, and which you crowed about at the time). Of course, there’s a lot more evidence now of just how bad it all was, and maybe you want to distance yourself from those claims, right?

And I really love how this is all proving Musk right, and that he’s putting principle over profit. Despite your opening sentence (which I do give you a little bit of credit for), you kinda studiously avoid saying “yeah, Musk was right about all of this shit.”

Just picture me with a cup of tea, looking smugly into the rain.

Strawb (profile) says:

Re:

No countries have free speech besides the US, including Europe

Europe is a continent, not a country, and most western countries have laws protecting freedom of speech. Stop drinking the freedom Kool-Aid.

And I really love how this is all proving Musk right

Hey, even broken clocks are right twice a day.

he’s putting principle over profit

For once in his life. He completely buckled when foreign governments wanted to control content on Twitter.

despite your opening sentence (which I do give you a little bit of credit for) you kinda studiously avoid saying “yeah, Musk was right about all of this shit”

That’s because he wasn’t right “about all this shit”. He is right that this particular law is shit, and should be challenged. That’s it.

Bob says:

Not a surprise

Judge William Shubb is not a particularly good judge, although he occasionally gets things right. Obviously, in this case he got it wrong.

I had the displeasure of sitting in his courtroom in Sacramento in 2014 in support of a friend who went before him to ask for the termination of his federal supervised release. He’d been on it five years and had no problems whatsoever. What did Shubb do? He listened to his probation officer’s lies and exaggerations, accepted them as Gospel, and denied the petition. My friend never had a chance in his court.

Fortunately, Judge Shubb is a senior judge and is 85 years old. He’ll likely be dead soon, and, assuming Biden wins a second term, it’s likely Biden will be appointing his successor, hopefully someone who will be a more impartial and rational jurist.

Jesse T (profile) says:

2 more cents

Hi all- It’s been a while since I’ve been here- Mike’s excellent piece on Substack as a nazi bar brought me back & I’m glad to be back.

I give that introduction because just reading news articles about this case and then this article reminds me that I’m jumping into Techdirt in the middle of the river & the current’s already taken me a few miles downstream before I’ve even had time to come up for air.

From my reading of the Digest of AB587, they are asking for a social media site’s terms of service aka moderation policy (if any) and a regular report of activity taken under the auspices of that policy.

I don’t see anything about ‘preferred’ moderation policies or ‘protected classes’ or ‘incitement of violence’ or anything else that’s usually included in moderation policies. In other words, there’s no suggested or compulsory language for these policies.

It’s requiring a corporation to file this policy with the state, like it files its by-laws, its corporate officers, its compliance with labor & tax laws, etc.

From what I’ve read, it is mere reporting & data collection. The state then compiles the information and the public can have access to the t&cs of social media sites. If anything, that gives us better data about what sites are & are not permitting and gives us more information about which ones to support/avoid.

The state does the same thing with all kinds of data, from non-profit by-laws to campaign donations to restaurant cleanliness inspection results.

If I am misreading this, how exactly am I misreading it? Is there an earlier Techdirt piece I should read?

Thanks!

That One Guy (profile) says:

Re:

The first paragraph gives a quick explanation as to why the bill is problematic but if you want a more extensive explanation the first and third links in that paragraph go to older articles covering Twitter’s legal challenge and the bill itself respectively.

A tl;dr version could perhaps be ‘if the bill was just about transparency that would be problematic enough, but it’s much, much worse than that as it seems to assume that everyone is and always will be acting in good faith when a major problem of moderation is dealing with people who aren’t.’

Jesse T (profile) says:

Re: Re:

“It would (1) enable California government officials (including local officials) to effectively pressure social media companies over how they moderate, by enabling litigation for somehow failing to live up to their terms of service”

AB 587 doesn’t have any enforcement mechanisms in it, correct? So this is a future law… and if a company has Terms & Conditions, shouldn’t it follow those, since it “demands” that the public follow them?

I’m not seeing a problem here, beyond the government ensuring that private companies follow their own rules, which is something they do across the for-profit & non-profit business landscape already.

That One Guy (profile) says:

Re: Re: Re: 'The rules ban saying the word 'green'.' 'I didn't, I said 'the color of a pine tree'.'

If it has no enforcement mechanism, then it’s a bad law that deserves to be scrapped, because it does nothing and is merely legislative air guitar. And the problem with your ‘well, they just need to follow their own rules’ argument is the very thing I was talking about when I said that the law and its supporters act like everyone is and always will be acting in good faith, when a major problem of moderation is dealing with those who aren’t.

Moderation rules must be flexible and constantly updated, because there will always be people rules-lawyering them to find loopholes and behavior that’s not explicitly banned just yet, on top of those who will violate the rules but claim they didn’t because the site misread/misapplied its own rules. That makes the state treating the rules as a static thing, with the ability to punish sites because it doesn’t think a site enforced its rules correctly, not only a huge boon to dishonest actors but a massive incentive for sites to moderate less, which is just as much of a first amendment violation as if the state had passed a law directly banning moderation, as some others have tried.

Anonymous Coward says:

If they are not in the United States, they don't have to obey US laws

Age verification laws in several states are one example.

The dime-a-dozen offshore pirate IPTV sites I have mentioned also have around 200 porn channels out of 20,000.

Since none of them are in the United States, age verification laws cannot be enforced upon them.

These sites, in India, China, Russia, and Singapore, are not subject to any United States laws.

In addition to a lot of movies, sports, and nearly every TV network on earth, there are porn channels with no age verification, just a credit card or Bitcoin to subscribe.

States with age verification laws have no jurisdiction over these offshore pirate IPTV sites, which include porn as well.

United States laws have no jurisdiction in Singapore, China, Holland, or Russia, so to those states I say: good luck enforcing your age verification laws on offshore IPTV sites.
