from the interesting-experiment dept
Over the last few months, we’ve seen various internet sites (mainly Twitter and Facebook) experimenting, adjusting, and (well, to be honest) scrambling to put in place various policies to deal with the concerns many have about how their sites will be used for mis- and disinformation regarding the election. Most of this has been focused on questions about what content to remove, or what (and how) to “flag” certain content. Last week, Twitter announced some bigger plans to adjust some elements of how the site works, at least through the election. It’s an interesting experiment, and it will be worth following to see how effective the changes are.
Twitter is ramping up its limitations on sharing content that it deems to be “misleading.” This is the content to which, as of a few months ago, Twitter has been appending a bit more information, and (sometimes) limiting the ability to reply or retweet. Twitter is now going even further on those tweets, but still stopping short of removing them entirely.
In addition to these prompts, we will now add additional warnings and restrictions on Tweets with a misleading information label from US political figures (including candidates and campaign accounts), US-based accounts with more than 100,000 followers, or that obtain significant engagement. People must tap through a warning to see these Tweets, and then will only be able to Quote Tweet; likes, Retweets and replies will be turned off, and these Tweets won’t be algorithmically recommended by Twitter. We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets.
Twitter recently started popping up a message to some users when they try to retweet a link to a news article without clicking through — in effect urging them to click and read the article before actually retweeting it. I’ve seen this idea discussed in the past, and while it adds a bit of friction to retweeting, I doubt it will be particularly effective. Often when I’ve seen it, it’s been on articles I’ve already read, so the warning is more of a nuisance than anything else, and I’m guessing most people will just click through without hesitating. However, with the new change the company is going even further, encouraging people to add their own commentary:
First, we will encourage people to add their own commentary prior to amplifying content by prompting them to Quote Tweet instead of Retweet. People who go to Retweet will be brought to the Quote Tweet composer where they’ll be encouraged to comment before sending their Tweet. Though this adds some extra friction for those who simply want to Retweet, we hope it will encourage everyone to not only consider why they are amplifying a Tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation. If people don’t add anything on the Quote Tweet composer, it will still appear as a Retweet.
That is going to be very interesting. I have a personal mental model of which things I just retweet and which I add commentary to, and I wonder whether, for me personally, these prompts will change my behavior. I do think that there is some value in encouraging people to add their thoughts, though who the hell knows how it will work in practice. Still, at the very least it strikes me as (again) a free speech supportive change — encouraging more commentary, rather than focusing on taking down content.
Second, we will prevent “liked by” and “followed by” recommendations from people you don’t follow from showing up in your timeline and won’t send notifications for these Tweets. These recommendations can be a helpful way for people to see relevant conversations from outside of their network, but we are removing them because we don’t believe the “Like” button provides sufficient, thoughtful consideration prior to amplifying Tweets to people who don’t follow the author of the Tweet, or the relevant topic that the Tweet is about. This will likely slow down how quickly Tweets from accounts and topics you don’t follow can reach you, which we believe is a worthwhile sacrifice to encourage more thoughtful and explicit amplification.
Again, the focus here is on friction — though this “feature” (showing tweets liked by people loosely connected to you) has always been somewhat controversial, especially given that people use “like” to mean very different things. I was among those who thought that Twitter never should have been showing this content in the first place, but having seen a fair amount of it in my feed, I’ll admit that it has been more relevant and interesting than I had expected. I don’t know how much friction this will actually add, nor how significant it will be, but it’s noteworthy that Twitter decided to at least temporarily kill this particular feature after people had been complaining about it for a while.
Finally, we will only surface Trends in the “For You” tab in the United States that include additional context. That means there will be a description Tweet or article that represents or summarizes why that term is trending. We’ve been adding more context to Trends during the last few months, but this change will ensure that only Trends with added context show up in the “For You” tab in the United States, which is where the vast majority of people discover what’s trending. This will help people more quickly gain an informed understanding of the high volume public conversation in the US and also help reduce the potential for misleading information to spread.
This is also a good move — and I’ve definitely been seeing more of this over the last few weeks on Twitter. Many have argued that Twitter should do away with the trends feature altogether, as it’s usually hot garbage, and frequently gamed. However, a version that has more context could potentially work better. I’m skeptical, but will be watching it.
Twitter also says that it will not allow anyone to claim victory in an election before the race has been authoritatively called. I’m not sure how the company will be able to make this work in practice:
People on Twitter, including candidates for office, may not claim an election win before it is authoritatively called. To determine the results of an election in the US, we require either an announcement from state election officials, or a public projection from at least two authoritative, national news outlets that make independent election calls. Tweets which include premature claims will be labeled and direct people to our official US election page.
Unless they pre-review tweets from candidates, I don’t see how that will work. Indeed, Wired suggested that Twitter should have gone one step further and put Trump’s tweets on a time delay to review them before letting them go through:
Why not put Donald Trump’s tweets and his Facebook posts, as well as those of other political elites, on a time delay? (See here for a smart and similar earlier proposal focused on how a delay might strengthen national security.) Twitter and Facebook have extensive and well-documented content rules that prohibit everything from electoral to health disinformation. The platforms have singled out these categories of content in particular because they have significant likelihood of causing real world harm, from voter suppression to undermining the Centers for Disease Control and Prevention’s public health guidelines. The FBI found that the plot to kidnap Michigan governor Gretchen Whitmer was, in part, organized in a Facebook group.
To date, the enforcement of these policies has been spotty at best. Twitter has labeled some of the president’s tweets as “potentially misleading” to readers about mail-in ballots. The platform hid a Trump tweet stating “when the looting starts, the shooting starts” for “glorifying violence,” and it recently hid another tweet equating Covid-19 to the flu, claiming that the president was “spreading misleading and potentially harmful information” when he wrote that “we are learning to live with Covid, in most populations far less lethal!!!” Facebook has taken similar actions, providing links to reliable voter and health information and removing posts that it deems violate its policies.
But these actions often take hours to put in place while this content racks up thousands of engagements and shares. In those hours, as recent research from Harvard shows, Trump is a one-man source of disinformation that travels quickly and broadly across Twitter and Facebook. And we know that the mainstream media often picks up on and amplifies Trump’s posts before platforms moderate them. Journalists report on platforms’ treatments of Trump’s tweets, making that and them the story, and giving life to false claims.
I shudder to think about the ways in which some would argue this removes Section 230 for such content (even though there are cases ruling that pre-reviewing content doesn’t take away 230). However, it does seem like a potentially interesting move as well.
All in all, Twitter did seem to focus not on holding back speech, but on putting in a little friction, in an attempt to at least slow down the possibility for misinformation and disinformation to spread. I don’t know if it will work, but it’s at least a more thoughtful approach than the way that many just call for more aggressive pulling down of information. Still, I find the Trump campaign’s response to these changes quite telling:
Yet according to Samantha Zager, deputy national press secretary for the Trump campaign, Twitter’s changes are “extremely dangerous for our democracy” by “attempting to influence this election in favor of their preferred ticket by silencing the President and his supporters.”
Except that the changes do no such thing. The only thing they do is seek to slow down the pace at which you can spread mis- and disinformation. So, unless the Trump campaign is flat out admitting that it needs mis- and disinformation to win, that statement is bizarre. But, of course, that does seem to be the stance of Trumpworld these days: it needs misinformation to win. That it’s so quick to admit as much, though, should still be somewhat shocking.
Filed Under: content moderation, disinformation, election, friction