Polling The Public About Social Media Policies Turns Up Nothing Particularly Useful
from the say-what-now? dept
Someone emailed to call my attention to some new survey results out of the University of South Florida’s Center for Cybersecurity, which contained public opinion polls about internet regulation (and gas prices, but that’s a bit outside our wheelhouse). The key part that was highlighted to me was:
More than half of Floridians (52%) say that platforms such as Twitter are “private spaces” that should be regulated only by private companies. Far less (28%) view such platforms as “public squares” where government should regulate content.
And, of course, that’s interesting, because the Florida government, led by Ron DeSantis, eagerly passed a law to argue the exact opposite. Of course, that law has since been tossed out (by two separate courts, no less) as unconstitutional.
So, for all the talk of how this content moderation law was to help Floridians, it sure seems like most Floridians don’t really want DeSantis dictating social media content moderation policies either.
Of course, as you dig deeper into the data… it gets less interesting, not more.
First off, the survey design here is awful, to an embarrassing degree. The question about government regulation of the internet frames regulation as meaning the government would force sites to remove “false, misleading, or hateful” content, which is literally the exact opposite of what Florida’s social media regulation actually does. It compels sites to keep that content up.
Both of these approaches are equally unconstitutional, but it’s weird to frame the regulatory push as only going in one direction, when in the very state where this poll is taking place, the actual regulatory attempts are exactly the opposite kind of unconstitutional.
The Center also asked the Floridian public what it thought about Elon Musk’s purchase of Twitter and his ideas of what Twitter should do, and it includes the finding that 50% of respondents believe Twitter should “only limit offensive content if it’s illegal.” The breakdown here is also kinda weird.
Of course, that’s not so simple. Nearly all offensive content is not illegal. And… the respondents seem to recognize this, because just a few questions later they’re asked if Twitter should remove content deemed “false/misleading” and “harmful/dangerous to individuals or groups” and in both cases, respondents overwhelmingly said such content should be removed.
They were also asked if social media platforms “have a responsibility to restrict content that is false/misleading” and overwhelmingly people agreed.
But, most of that content is not illegal.
So, according to this survey, Twitter should only remove content that is illegal, yet it should also remove lots of perfectly legal content, and indeed has a responsibility to do so.
Perhaps this survey says more about people’s understanding of what speech is legal than about how social media platforms should act.
For what it’s worth, the survey’s strongest point of agreement seems to be that Twitter needs to work on its bot problem… and again, I don’t get that. It seems like a narrative issue, rather than reality. Elon Musk has made “bots” and “spam” a central theme of his whole takeover experience, but the vast majority of Twitter users don’t run into many bots or spam. The biggest accounts do, but most accounts don’t.
And since this question on the actual survey was framed as eliminating “non-human accounts,” that’s also weird, because many, many “non-human” accounts are actually quite useful. Some are informative, like bots that tweet earthquake reports or weather forecasts. Some are just entertaining, tweeting out random trivia or artwork. It’s the spam accounts that are the problem, but not all bots or non-human accounts are spam, and most users don’t really have to deal with that much spam.
So, the only thing this survey really seems to show is that you don’t learn much from polling random people about social media policies. Except maybe (1) they don’t understand how the 1st Amendment works and (2) they have been suckered by a narrative about bots.