from the have-a-little-respect dept
Last month I wrote about how, contrary to the weird narrative, Twitter has actually been among the most aggressive companies fighting for free speech online. While many people criticize it, they are wrong, or just uninformed. Mostly, they think (falsely) that because Twitter doesn’t want some speech that you like on their site, it somehow means they’re against free speech. The reality is a lot more complicated, of course. As we pointed out, former Reddit CEO Yishan Wong’s long thread about content moderation highlighted that people doing content moderation generally aren’t making decisions based on politics, they just want people to stop fighting all the time.
Recently, the Washington Post had an excellent article about Twitter’s Vijaya Gadde, the company’s top lawyer, who also runs their trust and safety efforts, that talks about how she is a strong defender of free speech, who also recognizes that, to support free speech, you have to come up with plans to deal with abusive, malignant users. That doesn’t mean automatically banning them, but exploring the solution space to see what kinds of programs you can put in place to limit the destructive nature of some users.
I recognize that this is a space filled with people who insist their emotional beliefs are the be-all and end-all when it comes to content moderation, but it would be nice if at least some of those people were willing to actually read through articles like this one, which highlight how many different trade-offs and nuances there are in these discussions.
Twitter colleagues describe Gadde’s work as difficult but necessary and unmotivated by political ideology. Defenders say her team, known as the trust and safety organization, has worked painstakingly to rein in coronavirus misinformation, bullying and other harmful speech on the site, moves that necessarily limit some forms of expression. They have also disproportionately affected right-leaning accounts.
But Gadde also has tried to balance the desire to protect users with the values of a company built on the principle of radical free speech, they say. She pioneered strategies for flagging harmful content without removing it, adopting warning labels and “interstitials,” which cover up tweets that break Twitter’s rules and give people control over what content they see — strategies copied by Twitter’s much larger rival, Facebook.
The article also details how she has led the company’s aggressive pushback against foreign laws that are real attacks on free speech:
For years, she has been the animating force pushing Twitter to champion free expression abroad. In India and Turkey, for example, her team has resisted demands to remove content critical of repressive governments. In 2014, Gadde made Twitter the only Silicon Valley company to sue the U.S. government over gag orders on what tech companies could say publicly about federal requests for user data related to national security. (Five other companies settled.)
Contrast that with Elon Musk, who quickly endorsed the EU’s approach to platform regulation, at a time when Twitter, under Gadde’s leadership, has been pushing back against parts of that plan by pointing out how it conflicts with basic free speech concepts.
The article highlights, as we have tried to do for years, that content moderation is a complicated and nuanced topic that doesn’t fit neatly into the arguments around “free speech.” Part of this is that social media isn’t just about speech, but about being able to get your speech in front of a specific audience. People mostly don’t care if you spout bullshit nonsense on your own website, where only those who seek it out can find it. But because of the nature of Twitter and how it connects users, it allows people to inject their speech into the notifications of others — and that creates opportunities for abuse and harassment, which actually harm free speech by driving people out of the wider discussion entirely.
There is, obviously, some level of balance here. Not all criticism, hell, most criticism isn’t abusive or harassing, even if it may feel that way to those on the receiving end of it. But anyone trying to build an inclusive and trustworthy forum needs to recognize that bad actors push thoughtful users away. And at least some plan needs to be in place to deal with that.
But part of that is that Twitter’s DNA has always been to favor more speech over less. The company really only pushes back in fairly extreme cases, when it has been pushed to the edge and no other decision is reasonably tolerable if the site wants to keep users.
Even as the company took action to limit hate speech and harassment, Gadde resisted calls to police mere misinformation and falsehoods — including by the new president.
“As much as we and many of the individuals might have deeply held beliefs about what is true and what is factual and what’s appropriate, we felt that we should not as a company be in the position of verifying truth,” Gadde said on a 2018 Slate podcast, responding to a question about right-wing media host Alex Jones, who had promoted the falsehood on his show, Infowars, that the Sandy Hook school shooting was staged.
The company was slammed for statements like this at the time, but believed strongly that it was drawing the line in a place that made the most sense to be broadly inclusive. Of course, that line moves over time as the context and the world around us change. In the early days of the pandemic, with people dying everywhere, most people came to realize that spreading misinformation that leads to more deaths is morally disturbing.
It’s not out of any political beliefs, or a desire to “censor” viewpoints. It’s just a basic moral stance on how to help the public stay alive.
The company, also under her leadership, pushed for alternative tools for dealing with misinformation, rather than the go-to move of taking down content:
Meanwhile, Gadde and her team were working with engineers to develop a warning label to cover up tweets — even from world leaders such as Trump — if they broke the company’s rules. Users would see the tweet only if they chose to click on it. They saw it as a middle ground between banning accounts and removing content and leaving it up.
In May 2020, as Trump’s reelection campaign got underway, Twitter decided to slap a fact-checking label on a Trump tweet that falsely claimed that mail-in ballots are fraudulent — the first action by a technology company to punish Trump for spreading misinformation. Days later, the company acted again, covering up a Trump tweet about protests over the death of George Floyd that warned “when the looting starts, the shooting starts.” More such actions followed.
And while some people insisted that this was a form of “censorship,” it was actually the opposite. It was literally “more speech” responding to speech that Twitter felt was problematic. Twitter was one of the first companies to use this approach as an alternative to removing speech… and yet it still resulted in very angry people insisting it was proof of censorship.
Anyway, there’s a lot more in the article, but it’s a really good and thorough look not just at the various trade-offs and nuances at play, but also at how Twitter’s current management made some of those decisions — not to try to silence voices, but quite the opposite.
Filed Under: content moderation, free speech, vijaya gadde
Companies: twitter