Utah's Horrible, No Good, Very Bad, Terrible, Censorial 'Free Speech' Bill Is A Disaster In The Making

from the that's-not-how-any-of-this-works dept

A month ago, we noted that a bunch of state legislatures were pushing blatantly unconstitutional bills to try to argue that social media websites can't moderate "conservative" views any more. All of these bills are unconstitutional, and most are just silly. The latest one comes from Utah -- and, stunningly, it seems to have legs, as it's been moving through the Utah legislative process with some speed over the last week or so.

The bill, SB0228 from state Senator Michael McKell, is so bizarrely wrong on just about everything that it makes Utah look really bad. It's called the "Freedom from Biased Moderation Act" and already that's a pretty clear 1st Amendment problem. Leaving aside the question of whether or not there's any evidence of "anti-conservative bias" in social media moderation (and, just so we're clear: there is no such evidence), even if there were moderation decisions biased against particular political viewpoints, those decisions would be protected by the 1st Amendment. For good reason.

Courts have made it clear, repeatedly, that the 1st Amendment bars the government from compelling anyone to associate with speech they disagree with. Yet, that's exactly what this bill, and others like it, are seeking to do. Any law that bars the ability to moderate would violate this key part of the 1st Amendment.

But this bill is even more nefarious, in that it couches many of its proposals in ideas that sound reasonable, but they only sound reasonable to people who are totally ignorant of how content moderation works. A key part of the bill is that it requires social media companies to "clearly communicate" the "moderation practices" including "a complete list of potential moderation practices." That's ridiculous, since many cases are unique, and any company doing this stuff has to constantly be responding to changing context, different circumstances, new types of attacks and abuse, and a lot more. This bill seems to presume that every content moderation decision is an obvious paint-by-numbers affair. But that's not how it works at all. Senator McKell should be forced to listen to Radiolab's Post No Evil episode, which details how every time you think there's an easy content moderation rule, you discover a dozen exceptions to it, and you have to keep adjusting the rules. Every damn day.
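
To see why a "complete list of potential moderation practices" is a mirage, here's a minimal, invented sketch (nothing from the bill, and not any platform's actual system) of what a published moderation rule tends to turn into: the rule itself is one line, and the exceptions are an ever-growing list that is never actually complete.

```python
# Hypothetical illustration only: a naive "easy" moderation rule and the
# exception list it immediately grows. None of these terms or rules come
# from the bill or from any real platform's policies.
BANNED_TERMS = {"attack"}               # the "easy" rule
CONTEXT_EXCEPTIONS = {                  # exceptions discovered after launch
    "heart attack",                     # medical discussion
    "panic attack",                     # mental health support
    "attack on titan",                  # pop culture
    # ...and a dozen more by next week
}

def should_flag(post: str) -> bool:
    """Flag a post if it hits a banned term outside a known exception."""
    text = post.lower()
    if any(exception in text for exception in CONTEXT_EXCEPTIONS):
        return False
    return any(term in text for term in BANNED_TERMS)

print(should_flag("Planning an attack tonight"))      # True
print(should_flag("I had a panic attack yesterday"))  # False, but only because
                                                      # someone added that exception
```

Multiply that by thousands of rules and a constant stream of novel edge cases, and "clearly communicate a complete list" stops being a meaningful requirement.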

Then the bill says that a social media company cannot "employ inequitable moderation practices." But what does that even mean? Again, every moderation decision is subjective in some way. When we ran 100 content moderation professionals through a simulator with 8 different content moderation decisions, we couldn't get anything close to agreement. Because so many of these are judgment calls, and when you have thousands or tens of thousands of moderators making thousands to hundreds of thousands to millions of these judgment calls every day, you're always going to be able to find some "inequitable" results. Not because of "bias" but because of reality.
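
To put rough numbers on that, here's a small, hypothetical simulation (the figures are invented for illustration, not drawn from any platform's data): even if every moderator applies the exact same policy in good faith, and two groups of users post statistically identical content, ordinary judgment-call noise guarantees divergent outcomes on both sides, plus some gap between them.

```python
# Invented-numbers sketch: identical policy, identical content, and yet
# "inequitable"-looking results appear purely from judgment-call noise.
import random

random.seed(42)

CALLS_PER_GROUP = 1_000_000   # judgment calls made about each group's posts
AGREEMENT_RATE = 0.90         # assumed chance a single call lands on the consensus outcome

def divergent_calls(n: int, agreement_rate: float) -> int:
    """Count calls that land differently than the consensus view would have."""
    return sum(1 for _ in range(n) if random.random() > agreement_rate)

group_a = divergent_calls(CALLS_PER_GROUP, AGREEMENT_RATE)
group_b = divergent_calls(CALLS_PER_GROUP, AGREEMENT_RATE)

print(f"Group A: {group_a:,} calls diverged from consensus")
print(f"Group B: {group_b:,} calls diverged from consensus")
print(f"Gap between groups: {abs(group_a - group_b):,} -- noise, not bias")
```

Run it and you get on the order of a hundred thousand divergent calls per group, and a gap of a few hundred between the groups, purely from chance. Under a standard like this one, any of those calls can be held up as proof of "inequitable moderation."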

And how would you even define "inequitable" in this situation anyway? Because context matters a lot. All sorts of factors come into play. Someone in a position of power saying something can be very different from someone not in power saying the exact same thing. Something said after an attack on the Capitol might look very different than something said before an attack on the Capitol. Every situation is different. Demanding the same treatment ignores that the situations will always be different.

Indeed, just recently, in discussing Facebook bending over to keep Trumpists happy on the platform, we noted that the company was confusing equitable results with equitable policies. Part of the issue is that, right now, you have more utter nonsense and misinformation on the Republican side of the aisle, and if you use "equitable policies" you will get inequitable results. But this bill seems to think that inequitable results must mean inequitable policies. And that's... just wrong.

Next, the bill requires a "notice" from a social media company for any moderation, which must include an explanation of exactly why that content was moderated. Again, I understand why people often think this is a good idea (and there are times when it would be nice for companies to do this, because it is often frustrating when you are moderated and it's not clear why). However, this only works if you're dealing with non-abusive, honest actors. But a significant amount of moderation is to deal with dishonest, abusive actors. And having to explain to each one of them exactly why they're being moderated is not a recipe for better moderation, it's a system for (1) having to litigate every damn moderation decision, as the person will complain back "but I didn't do that" even when they very clearly did, and (2) training abusive, dishonest trolls in how to game the system.

Then, the bill has this odd section where it would basically attempt to force social media companies to hire Facebook's Oversight Board (or some other brand new entity that does the same basic thing) as an independent review board:

A social media corporation shall engage the services of an independent review board to review the social media corporation's content moderation decisions.

While this might be a good idea for some companies to explore, for the government to mandate it is absolutely ridiculous. I thought the Republican Party was about keeping government out of business, not telling companies how they have to run their businesses. It gets even sillier, because the Utah legislature thinks that it gets to dictate what types of people can be on such independent review boards:

The independent review board shall consist of at least 11 members who represent a diverse cross-section of political, religious, racial, generational, and social perspectives.

The social media corporation shall provide on the social media corporation's platform biographies of all of the members of the independent review board.

If this law actually passed, and wasn't thrown out as obviously unconstitutional, I'd love to see the Utah legislature determining whether the mandated review board for... let's say, OnlyFans, had the proper "religious, generational, and social" diversity...

As an enforcement mechanism, the bill would give the Utah Attorney General the ability to take action against companies deemed to be violating this law (which, again, would be every company, because it sets up a nonsensical standard not based in reality).

This bill is an unconstitutional garbage fire of nonsense, completely disconnected from anything even remotely resembling how content moderation actually works at social media companies. Utah should reject it, and maybe should get someone to teach Senator McKell some basics about the 1st Amendment and how social media actually works.



Filed Under: 1st amendment, bias, content moderation, inequity, michael mckell, utah
Companies: facebook, twitter


Reader Comments



    That One Guy (profile), 1 Mar 2021 @ 8:40pm

    Re: Re: Re:

    Into the fray then...

    from what I can see the bill doesn't compel sites to host or block any specific user content

    Specific, no, because those attacking 230 are rarely honest enough to own why they're attacking it, but they have grokked that specifics are very much not their friend in this argument, hence the usual generality of saying that social media is taking down 'conservative' content without actually defining what that is or pointing to specific examples. That said, if you leave the door open to punishing a company for a moderation decision you are very much levying a threat against them for doing so, 'encouraging' them to modify their moderation practices, either taking more down or less depending on how the law is written (in this case much, much less).

    only that they must publish their terms of use and moderation decisions and apply those rules without discrimination.

    Terms of service are already published (you kinda have to agree with them to use the site, not the platform's fault no-one reads them), so that bit is nothing more than empty theater for the gullible, unless you/they were talking about specific moderation rules, which is also theater but more naive/dishonest, something I'll address in a second. That said, honestly the article addressed this one better than I think I could, so I'm just going to copy/paste their stuff and add in some commentary.

    'A key part of the bill is that it requires social media companies to "clearly communicate" the "moderation practices" including "a complete list of potential moderation practices." That's ridiculous, since many cases are unique, and any company doing this stuff has to constantly be responding to changing context, different circumstances, new types of attacks and abuse, and a lot more. This bill seems to presume that every content moderation decision is an obvious paint-by-numbers affair. But that's not how it works at all. Senator McKell should be forced to listen to Radiolab's Post No Evil episode, which details how every time you think there's an easy content moderation rule, you discover a dozen exceptions to it, and you have to keep adjusting the rules. Every damn day.'

    Contrary to what some politicians think/pretend to think, moderation is not simple, and that's before you scale things up to tens of millions of users. Context matters and there will always be people looking to game or bypass the rules, which is why it's vital to have flexibility in how you moderate. Anything too rigid will block a lot of content that technically trips a rule but is fine in context, while still letting bad actors squeak through by claiming that the rules didn't specifically bar what they did, so it would be 'unfair' to punish them, leaving the platform constantly having to update its rules to keep up.

    Speaking of bad actors...

    'Next, the bill requires a "notice" from a social media company for any moderation, which must include an explanation of exactly why that content was moderated. Again, I understand why people often think this is a good idea (and there are times when it would be nice for companies to do this, because it is often frustrating when you are moderated and it's not clear why). However, this only works if you're dealing with non-abusive, honest actors. But a significant amount of moderation is to deal with dishonest, abusive actors. And having to explain to each one of them exactly why they're being moderated is not a recipe for better moderation, it's a system for (1) having to litigate every damn moderation decision, as the person will complain back "but I didn't do that" even when they very clearly did, and (2) training abusive, dishonest trolls in how to game the system.'

    This one really shouldn't need more explanation than what was in the article, honestly. Requiring a platform to justify every gorram moderation decision is not only going to be a massive pain in the ass and 'encourage' platforms to moderate vastly less, it assumes good faith on the part of the moderated, which is frankly naive in the extreme. No-one's going to admit that yeah, they knew they were breaking the rules but did X anyway, when they know they can lie and escape punishment or face nothing more than a slap on the wrist and an empty warning, and if they're feeling vindictive there would always be the option to claim unfair discrimination, an accusation that would carry very real risks for the platform.

    Adding to the problem, by forcing platforms to explain exactly what triggered the penalty you're practically handing trolls and other bad actors a cheat-sheet on what specific acts/words to avoid; they can just make minor changes and you're back to square one, explaining that yes, that word/act counts as a violation as well, even if they assure you that they were just trying to follow the rules by not using the specific word you called them on the last time.

    You've also got the wildly insane bit about mandatory 'oversight' boards and what exactly they will be staffed with (and oh, the conflicts of interest and deadlocks I could see in those...), and I would hope I don't have to explain how the government telling a company not only that it must moderate in a certain way but that it is required to pay to put together a group to double-check its work is problematic from a first amendment perspective.

