from the let-us-count-the-ways dept
So, I already had a quick post on the bizarre decision by the 5th Circuit to reinstate Texas’ social media content moderation law just two days after a bizarrely stupid hearing on it. However, I don’t think most people actually understand just how truly fucked up and obviously unconstitutional the law is. Indeed, there are so many obvious problems with it, I’m not even sure I can do them adequate justice in a single post. I’ve seen some people say that it’s easy to comply with, but that’s wrong. There is no possible way to comply with this bill. You can read the full law here, but let’s go through the details.
The law declares social media platforms as “common carriers” and this was a big part of the hearing on Monday, even though it’s not at all clear what that actually means and whether or not a state can just magically declare a website a common carrier (as we’ve explained, that’s not how any of this works). But, it’s mainly weird because it doesn’t really seem to mean anything under Texas law. The law could have been written entirely without declaring them “common carriers” and I’m not sure how it would matter.
The law applies to “social media platforms” that have more than 50 million US monthly active users (counted by whom? Dunno; the law doesn’t say), and limits it to websites where the primary purpose is users posting content to the site, not ones where things like comments and such are a secondary feature. It also excludes email and chat apps (though it’s unclear why). Companies with over 50 million users in the US probably include the following as of today (via Daphne Keller’s recent Senate testimony): Facebook, YouTube, TikTok, Snapchat, Wikipedia, and Pinterest are definitely covered. Likely, but not definitely, covered would be Twitter, LinkedIn, WordPress, Reddit, Yelp, TripAdvisor, and possibly Discord. Wouldn’t it be somewhat amusing if, after all of this, Twitter’s MAUs fell below the threshold?! Also possibly covered, though data is lacking: Glassdoor, Vimeo, Nextdoor, and Twitch.
And what would the law require of them? Well, mostly to get sued for every possible moderation decision. You only think I’m exaggerating. Litigator Ken White has a nice breakdown thread of how the law will encourage just an absolutely insane amount of wasteful litigation:
As he notes, a key provision and the crux of the bill is this bizarre “anti-censorship” part:
CENSORSHIP PROHIBITED. (a) A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on:
(1) the viewpoint of the user or another person;
(2) the viewpoint represented in the user’s expression or another person’s expression; or
(3) a user’s geographic location in this state or any part of this state.
(b) This section applies regardless of whether the viewpoint is expressed on a social media platform or through any other medium.
So, let’s break this down. It says that a website cannot “censor” (by which it clearly means moderate) based on the user’s viewpoint or geographic location. And it applies even if that viewpoint doesn’t occur on the website.
What does that mean in practice? First, even if there is a good and justifiable reason for moderating the content — say it’s spam or harassment or inciting violence — that really doesn’t matter. The user can simply claim that it’s because of their viewpoints — even those expressed elsewhere — and force the company to fight it out in court. This is every spammer’s dream. Spammers would love to be able to force websites to accept their spam. And this law basically says that if you remove spam, the spammer can take you to court.
Indeed, nearly all of the moderation that websites like Twitter and Facebook do is, contrary to the opinion of ignorant ranters, not because of any “viewpoint” but because users are breaking actual rules around harassment, abuse, spam, or the like.
The law does say that a site must clearly post its acceptable use policy, which lets supporters of the law claim that a site can still moderate as long as it follows its own policies. That’s not true. Again, all any aggrieved user has to do is claim that the real reason was viewpoint discrimination, and the litigation is on.
And let me tell you something about aggrieved users: they always insist that any moderation, no matter how reasonable, is because of their viewpoint. Always. And this is especially true of malicious actors and trolls, who are in the game of trolling just to annoy in the first place. If they can take that up a notch and drag companies into court as well? I mean, the only thing stopping them will be the cost, but you already know that a cottage industry is going to pop up of lawyers who will file these cases. I wouldn’t even be surprised if cases start getting filed today.
And, as Ken notes in his thread, the law seems deliberately designed to force as much frivolous litigation on these companies as possible. It says that even if one local court has rejected these lawsuits or blocked the Attorney General from enforcing the law, you can still sue in other districts. In other words, keep on forum shopping. It also bars nonmutual claim and issue preclusion, meaning that even if a court says these claims are bogus, each new claim must be judged anew. Again, this seems uniquely designed to force these companies into court over and over and over again.
I haven’t even gotten to the bit that says that you can’t “censor” based on geographic location. That portion can basically be read to be forcing social media companies to stay in Texas. Because if you block all of your Texas users, they can all sue you, claiming that you’re “censoring” them based on their geographic location.
So, yeah, here you have the “free market” GOP passing a law that effectively says that social media companies (1) have to operate in Texas and (2) have to be sued over every moderation decision they make, even if it’s in response to clear policy violations.
Making it even more fun, the law forbids any waivers, so social media companies can’t just put a new thing in their terms of service saying that you waive your rights to bring a claim under this law. They really, really, really just want to flood every major social media website with a ton of purely frivolous and vexatious litigation. The party that used to decry trial lawyers just made sure that Texas has full employment for trial lawyers.
And that’s not all that this law does. That’s just the part about “censorship.”
There is the whole transparency bit, requiring that a website “disclose accurate information regarding its content management, data management, and business practices.” That certainly raises some issues about trade secrets, general security, and more. But it’s also going to effectively require that websites publish all the details that spammers, trolls, and others need to be more effective.
The covered companies will also have to keep a tally of every form of moderation and post it in their transparency reports. So every time a spam posting is removed, it will need to be tracked and recorded. Even any time content is “deprioritized.” What does that mean? All of these companies recommend stuff based on algorithms, meaning that some stuff is prioritized and some stuff is not. I don’t care to see when people I follow tweet about football, because I don’t watch football. But it appears that if the algorithm learns that about me and chooses to deprioritize football tweets just for me, the company will need to include that in its transparency report.
Now, multiply that by every user, and every possible interaction. I think you could argue that these sites “deprioritize” content billions of times a day just by the natural functioning of the algorithm. How the hell do you track all the content you don’t show someone?!
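To see why that tracking requirement explodes, here’s a toy sketch of a personalized feed ranker. Everything in it is hypothetical (the scoring, the field names, the numbers); the point is just that any ranked feed necessarily “deprioritizes” almost everything it considers, and each of those non-shown items would arguably need a transparency-log entry:

```python
# Hypothetical sketch: what "log every deprioritization" would mean for a
# personalized feed ranker. All names and numbers here are illustrative,
# not from any real platform.

def rank_feed(user_interests, candidate_posts, feed_size=10):
    """Score candidates by interest overlap. Everything not shown is,
    arguably, 'deprioritized' and would need a transparency-log entry."""
    scored = sorted(
        candidate_posts,
        key=lambda post: len(user_interests & post["topics"]),
        reverse=True,
    )
    shown = scored[:feed_size]
    deprioritized = scored[feed_size:]  # every one of these needs logging
    return shown, deprioritized

interests = {"tech", "law"}
candidates = [
    {"id": i, "topics": {"football"} if i % 2 else {"tech"}}
    for i in range(1000)
]
shown, logged = rank_feed(interests, candidates)
# 1000 candidates, 10 shown: that's 990 log entries for ONE user
# on ONE feed refresh.
print(len(logged))
```

Multiply that by hundreds of millions of users refreshing their feeds many times a day, and the “transparency report” becomes a log of nearly every ranking computation the service performs.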
The law also requires detailed, impossible complaint procedures, including a full tracking system for anyone who files a complaint. That’s required as of last night. So best wishes to every single covered platform, none of which have this technology in place.
It also requires that if the website is alerted to illegal content, it has to determine whether or not the content is actually illegal within 48 hours. I’ll just note that, in most cases, even law enforcement isn’t that quick, and then there’s the whole judicial process that can take years to determine if something is illegal. Yet websites are given 48 hours?
Hilariously, the law says that you don’t have to give a user the opportunity to appeal if the platform “knows that the potentially policy-violating content relates to an ongoing law enforcement investigation.” Except, won’t this kind of tip people off? Your content gets taken down, but the site doesn’t give you the opportunity to appeal… Well, the only exemption there is if you’re subject to an ongoing law enforcement investigation, so I guess you now know there is one, because the law says that’s the only reason they can refuse to take your appeal. Great work there, Texas.
The appeal must be decided within 14 days, which sure sounds good if you have no fucking clue how long some of these investigations might take — especially once the system is flooded with the appeals required under this law.
And, that’s not all. Remember last week when I was joking about how Republicans wanted to make sure your inboxes were filled with spam? I had forgotten about the provision in this law that makes a lot of spam filtering a violation of the law. I only wish I was joking. For unclear reasons, the law also amends Texas’ existing anti-spam law. It added (and it’s already live in the law) a section saying the following:
Sec. 321.054. IMPEDING ELECTRONIC MAIL MESSAGES PROHIBITED. An electronic mail service provider may not intentionally impede the transmission of another person’s electronic mail message based on the content of the message unless:
(1) the provider is authorized to block the transmission under Section 321.114 or other applicable state or federal law; or
(2) the provider has a good faith, reasonable belief that the message contains malicious computer code, obscene material, material depicting sexual conduct, or material that violates other law.
So that literally says the only reasons you can “impede” email are that it contains malicious code, obscene material, sexual content, or content that violates other laws. Now, the reference to 321.114 alleviates some of this, since that section gives services (I kid you not) “qualified immunity” for blocking certain commercial email messages, but only under certain conditions, including enabling a dispute resolution process for spammers.
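The statutory logic here is simple enough to sketch, which is exactly the problem. A minimal sketch, assuming a hypothetical classifier has already flagged each message (the flag names and function are mine, paraphrasing the statute, not anything in the law or any real mail system):

```python
# A minimal sketch of the decision logic Sec. 321.054 appears to require
# of an email provider. The reason categories paraphrase the statute's
# exceptions; the message flags and function are hypothetical.

ALLOWED_BLOCK_REASONS = {
    "authorized_under_321_114",  # the "qualified immunity" carve-out
    "malicious_code",
    "obscene_material",
    "sexual_conduct",
    "violates_other_law",
}

def may_block(message_flags):
    """Return True only if some statutory exception applies."""
    return bool(ALLOWED_BLOCK_REASONS & message_flags)

# Ordinary spam -- annoying but legal, no malware, not obscene -- matches
# no exception, so under this reading the provider may not impede it.
print(may_block({"bulk_commercial", "spammy_subject_line"}))  # False
print(may_block({"malicious_code"}))                          # True
```

Notice what’s missing from the allowed set: plain old unwanted bulk email. The whole point of a spam filter is to impede messages that are merely unwanted, and that category simply isn’t on the list.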
There are many more problems with this law, but I am perplexed at how anyone could possibly think this is either workable or Constitutional. It’s neither. The only proper thing to do would be to shut down in Texas, but again the law treats that as a violation itself. What an utter monstrosity.
And, yes, I know, very very clueless people will comment here about how we’re just mad that we can’t “censor” people any more (even though it’s got nothing to do with me or censoring). But can you at least try to address some of the points raised above and explain how any of these services can actually operate without getting sued out of existence, or allowing all garbage all the time to fill the site?
Filed Under: 1st amendment, appeals, common carrier, content moderation, editorial discretion, email, free speech, hb20, litigation, social media, texas, transparency, viewpoint discrimination