from the disappointing dept
It’s not just Ajit Pai who is an FCC chair who misunderstands Section 230. His predecessor, Tom Wheeler, continues to get it totally wrong as well. A year ago, we highlighted Wheeler’s complete confusion over Section 230, in a piece that blamed Section 230 for all sorts of things… that had nothing at all to do with Section 230. Some people told me they had talked to Wheeler and explained some of the mistakes in his original piece, but it appears the corrections did not stick.
This week he published another bizarre and misguided attack on Section 230 that gets a bunch of basic stuff absolutely wrong. What’s weird is that in the last article we pointed to, Wheeler insisted that social media websites do no moderation, because of 230. But in this one, he’s now noting that 230 allowed them to close down the accounts of Donald Trump and some other insurrectionists — but he’s upset that it came too late.
These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgements do, however, demonstrate how companies have ample ability to act conscientiously to protect the responsible use of their platforms.
Right. Except that… the reason they have “ample ability” is that they know they can’t be sued over those choices, thanks to Section 230 and the 1st Amendment. Wheeler’s real complaint here is that these private companies didn’t act as fast as he wanted in pulling down 1st Amendment protected speech. Then he misrepresents how Section 230 itself works:
Subsection (2) of Section 230 provides that a platform shall not be liable for, “Any action voluntarily taken in good faith to restrict access to or availability of material that any provider or user considers to be… excessively violent, harassing, or otherwise objectionable….” In other words, editorial decisions by social media companies are protected, as long as they are undertaken in good faith.
This is… only partially accurate, and very misleading. First of all, editorial decisions by companies are protected by the 1st Amendment. Second, (c)(2) almost never comes into play: the vast, vast majority of Section 230 cases around moderation say that it’s subsection (c)(1), not (c)(2), that gives companies immunity from lawsuits over moderation. Assuming that it’s (c)(2) alone leads you into dangerously misleading territory. Even worse, (c)(2) itself has two subsections, and when Wheeler says that it applies “as long as they are undertaken in good faith” he ignores that (c)(2)(B) has no such good faith requirement.
Of course, in the very next paragraph, he admits that (c)(1) is what grants the companies immunity, so I’m not even sure why he brings up (c)(2) and the good faith line. That’s almost never an issue in Section 230 cases. But the crux of his complaint is that he seems to think it’s obvious that social media should have banned Trump and Trump cultists earlier — and he invokes the classic “nerd harder” line:
Dealing with Donald Trump is a targeted problem that the companies just addressed decisively. The social media companies assert, however, that they have no way to meaningfully police the information flowing on their platform. It is hard to believe that the brilliant minds that produced the algorithms and artificial intelligence that powers those platforms are incapable of finding better outcomes from that which they have created. It is not technological incapacity that has kept them from exercising the responsibility we expect of all other media, it is the lack of will and desire for large-scale profits. The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.
This is a commonly stated view, but it tends to reveal a near total ignorance of how these decisions are made. These companies have large trust and safety teams, staffed with thoughtful professionals who work through a wide variety of trade-offs and challenges in making these decisions. While Wheeler is over here saying that it’s obvious the problem is they waited too long and didn’t nerd harder to remove these people earlier, you have plenty of others out there screaming that this proves the companies are too powerful, and that they should be barred from banning him.
Anyone who thinks it’s a simple business model issue has never been involved in any of these discussions. It’s not. There are a ton of factors involved: what happens if you make this move and there’s a legal backlash? What happens if you drive all the cultists onto underground sites where we no longer know what they’re planning? There are lots of questions, and demanding that these large companies, with a variety of competing interests, must do it to your standard is the height of privilege. It’s impossible to do moderation “right.” Because there is no “right.” There is just a broad spectrum of wrong.
It’s fine to say that companies can do better. It’s fine to suggest ways to make better decisions. But too many pundits and commentators act as if there’s some “correct” decision and any result that differs from that cannot possibly be right. And, even worse, they blame Section 230 for that — when the reality is that Section 230 is what enables the companies to explore different solutions, as both Twitter and Facebook have done for years.
Wheeler’s “solution” for reforming Section 230 is also ivory tower academic nonsense that seems wholly disconnected from the reality of how content moderation works within these companies.
Social media companies are media, not technology
Mark Zuckerberg testified to Congress, “I consider us to be a technology company because the primary thing we do is have engineers who write code and build product and services for other people.” That software code, however, makes editorial decisions about which information to choose to route to which people. That is a media decision. Social media companies make money by selling access to its users just like ABC, CNN, or The New York Times.
Even though he says this is an idea for reform… it’s just a statement? And a meaningless one at that. It doesn’t matter if they’re media or technology. They’re a mixture of both and something new. Trying to lump them into old buckets doesn’t help and doesn’t take us anywhere useful. And, honestly, if your goal here is to reform Section 230, declaring these companies media companies doesn’t help, because media companies and their editorial decisions are wholly protected by the 1st Amendment.
There are well established behavioral standards for media companies
The debate should be over whether and how those standards change because of user generated content. The absolute absence of liability afforded by Section 230 has kept that debate from occurring.
Um. No. Again, these are not the same as traditional media companies. They have some similarities and some differences. Section 230 doesn’t change any of that. And if Tom Wheeler honestly thinks that there hasn’t been a debate about behavioral standards on content moderation, then he shouldn’t be commenting on this. There has been an active discussion and debate on this stuff for years. The fact that he’s ignorant of it doesn’t mean it doesn’t happen. Indeed, the very fact that he doesn’t know about the debate that has gone on among trust and safety professionals and executives at these companies for many, many years suggests that he should perhaps take some time to learn what’s really going on before declaring from on high what he thinks is and is not happening.
But the key point here is that the standards of traditional media companies don’t work well for social media because of the very differences in social media. A regular media company has standards because they need to review a very, very limited amount of content each day, on the order of dozens of stories. A social media company often has millions or billions of pieces of content every day (or in some cases every hour). The unwillingness to comprehend the difference in scale suggests someone who has not thought these issues through.
Technology must be a part of the solution
When the companies hire thousands of human reviewers it is more PR than protection. Asking humans to inspect the data constantly generated by algorithms is like watching a tsunami through a straw. The amazing power of computers created this situation, the amazing power of computers needs to be part of the solution.
I mean… duh? Is there anyone who doesn’t think technology is part of the solution? Every single company with user generated content, even tiny ones like us, makes use of technology to help moderate. And there are a bunch of companies out there building more and more solutions (some of them very cool!). I’m confused, though, about how this matters to the Section 230 debate. Changing Section 230 will not change the fact that companies use technology to help them moderate. It won’t suddenly create more technology to help companies moderate. This whole point makes it sound like Tom Wheeler never bothered to actually speak to an expert on how content moderation works — which, you know, is kind of astounding when he then positions himself to give advice on how to force companies to moderate.
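For what it’s worth, the way technology already fits into moderation is easy to sketch. Here is a minimal, purely illustrative triage function (the blocklist phrases and thresholds are made up, not any real platform’s rules): an automated first pass removes high-confidence junk and escalates ambiguous content to human reviewers, which is roughly the division of labor trust and safety teams already use.

```python
# Illustrative only: a toy automated first pass for content triage.
# Real systems use ML classifiers, not a two-phrase blocklist.
BLOCKLIST = {"buy followers now", "free crypto giveaway"}

def triage(post: str) -> str:
    """Return 'remove', 'human_review', or 'allow' for a post."""
    text = post.lower()
    if any(phrase in text for phrase in BLOCKLIST):
        return "remove"        # high-confidence automated removal
    if text.count("!") >= 5:   # crude heuristic: escalate, don't decide
        return "human_review"
    return "allow"

print(triage("FREE CRYPTO GIVEAWAY today"))  # prints "remove"
```

The point of the sketch is that automation and human review are complements, not substitutes — exactly the arrangement every platform already runs, with or without changes to Section 230.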
It is time to quit acting in secret
When algorithms make decisions about which incoming content to select and to whom it is sent, the machines are making a protected editorial decision. Unlike the editorial decisions of traditional media whose editorial decisions are publicly announced in print or on screen and uniformly seen by everyone, the platforms’ determinations are secret: neither publicly announced nor uniformly available. The algorithmic editorial decision is only accidentally discoverable as to the source of the information and even that it is being distributed. Requiring the platforms to provide an open API (application programming interface) to their inflow and outflow, with appropriate privacy protections, would not interfere with editorial decision-making. It would, however, allow third parties to build their own algorithms so that, like other media, the results of the editorial process are seen by all.
So, yes, some of this I agree with. I mean, I wrote a whole damn paper on trying to move away from proprietary social media platforms to a world built on protocols. But, the rest of this is… again, suggestive of someone who has little knowledge or awareness of how moderation works.
First, I don’t see how Wheeler’s analogy with media even makes sense here. There are tons of editorial decisions that the public will never, ever know about. How can he argue that they’re “publicly announced”? The only information that news media makes public is what it finally decides to publish or air. But… that’s nowhere near the entirety of editorial decision making. We don’t see what stories never make it. We don’t see how stories are edited. We don’t see what important facts or quotes are snipped out. We don’t see the debates over headlines. We have no idea why one story gets page 1, top-of-the-page treatment, while some other story gets buried on A17. The idea that media editorial is somehow more public than social media moderation choices is… weird?
Indeed, in many ways, social media companies are way more transparent than traditional media companies. They even have transparency reports that have details about content removals and other information. I’ve yet to see a mainstream media operation do that about their editorial practices.
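As a concrete illustration of what those transparency reports contain, here is a toy aggregation over a hypothetical moderation log (the field names and categories are invented for the example, not any company’s actual schema) — the same kind of summary the real reports publish at enormous scale:

```python
from collections import Counter

# Hypothetical moderation-action log; real platforms' schemas differ.
moderation_log = [
    {"action": "remove", "reason": "spam"},
    {"action": "remove", "reason": "harassment"},
    {"action": "label",  "reason": "misinformation"},
    {"action": "remove", "reason": "spam"},
]

def summarize(log):
    """Count actions by (action, reason), as a transparency
    report might tabulate them."""
    return Counter((entry["action"], entry["reason"]) for entry in log)

report = summarize(moderation_log)
print(report[("remove", "spam")])  # prints 2
```

Publishing aggregate counts like these is already far more disclosure than any newspaper offers about its spiked stories or edited quotes.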
Finally, demanding “transparency” is another one of those solutions that occurs to people who have never done content moderation. I recently wrote about the importance of transparency, but also the dangers of mandated transparency. I won’t rehash all that here, but the debate is not nearly as simple as Wheeler makes it out to be. A few quick points: transparency reports have already been abused by some governments, allowing them to celebrate and push for ever greater censorship of criticism of the government. We should be concerned about that. On top of that, transparency around moderation can be extremely costly and, again, create a massive burden for smaller players.
But perhaps one of the biggest issues with the kind of transparency that Wheeler is asking for is that it assumes good faith on the part of users. I’ve pointed out a few times that we’ve had our comment moderation system in place for over a decade now, and in that time the only people who have ever demanded “more transparency” into how it works are those looking to game the system. Transparency is most often demanded by the worst of your users, who want to “litigate” every aspect of why their content was removed or why they were banned. They search for loopholes or accuse you of unfair treatment. In other words, despite Wheeler’s whole focus being on encouraging more moderation of voices he believes are harmful, forced transparency is likely to cut down on that, as it gives those moderated more “outs” or limits the willingness of companies to moderate “edge” cases.
The final paragraph of Wheeler’s piece is so egregious, and so designed to make a 1st Amendment lawyer’s head explode, that I’m going to go over it sentence by sentence.
Expecting social media companies to exercise responsibility over their practices is not a First Amendment issue.
Uh… expecting social media companies to exercise responsibility over their practices absolutely is a 1st Amendment issue. The 1st Amendment has long been held to both include a prohibition on compelled speech, as well as a right of association (or non-association). That is, these companies have a 1st Amendment right to moderate as they see fit, and to not be compelled to host speech, or be forced to associate with those they don’t want to associate with. That’s why many of the complaints are really 1st Amendment issues, not Section 230 issues.
Relatedly, it feels like part of the problem with Wheeler’s piece is that he’s bought into the myth that, with Section 230, there are no incentives to moderate at all. That’s clearly false, given how much moderation we’ve seen. The false thinking is driven by the belief that the only incentive to moderate is the law. That’s ridiculous. The health of your platform depends on moderation. Keeping your users happy, and not having your site turn into a garbage dump of spam, harassment, and hate, is a very strong motivator for moderation. Advertisers are another motivation, since they don’t want their ads appearing next to bigotry and hatred. The focus on the law as the main lever here is just wrong.
It is not government control or choice over the flow of information.
No, but changing Section 230… would do that. It would force companies to change how they moderate. This is a reason why Section 230 is so important. It gives companies (and users!) a freedom to experiment.
It is rather the responsible exercise of free speech.
Which… all of these companies already do. So what’s the point here?
Long ago it was determined that the lie that shouted “FIRE!” in a crowded theater was not free speech. We must now determine what is the equivalent of “FIRE!” in the crowded digital theater.
Long time Techdirt readers will already be screaming about this. This claim is not just wrong, it’s very, very ignorant about the 1st Amendment. The “falsely shouting fire in a crowded theater” line was a throwaway line in an opinion by Justice Holmes, in a case that was actually about jailing someone for handing out anti-war pamphlets. It was never actually the standard in 1st Amendment jurisprudence, and it was effectively overturned in later cases, meaning it is not an accurate statement of the law.
Tom Wheeler is very smart and thoughtful on so many things that it perplexes me that he jumps into this area without bothering to understand the first thing about Section 230, the 1st Amendment, or content moderation. There are experts on all three that he could talk to. But even more ridiculous: even assuming everything he says is accurate, what actual policy proposal does he make in this piece? Tech companies should use tech in their moderation efforts? That seems to be the only actionable point.
There are lots of bad Section 230/content moderation takes out there, and I can’t respond to them all. But this is the former chair of the FCC, and when he speaks, people pay attention. And it’s extremely disappointing that he would jump into this space headfirst with so many factual errors and mistaken assumptions. It’s doubly troubling that this is the second time (at least!) that he’s now done this. I hope that someone at Brookings, or someone close to him suggests he speak to some actual experts before speaking on this subject again.
Filed Under: 1st amendment, content moderation, fcc, free speech, section 230, tom wheeler