“Polls are more prone to manipulation than almost anything else [on Twitter]. It’s interesting, given his [Elon’s] use of polls,” he added. Several other ex-Twitter employees gave similar assessments.
This seems particularly notable for two reasons: (1) Musk’s sudden reliance on polls for making big content moderation decisions, and (2) his formerly professed (though of questionable seriousness) claims about concerns regarding bots on the platform.
On point one, we already discussed the ridiculousness, and lack of seriousness, in using easily gamed polls as a tool for content moderation. While supporters like to argue it’s “democratic,” it has none of the actual hallmarks of integrity around the “voting.” And this report regarding the manipulation just makes that even clearer:
“When someone says, ‘Oh, we must be protecting polls, right?’ No, we’re not,” the former Twitter employee told Rolling Stone.
In the years since the feature debuted, a small industry of spammers has cropped up to offer services manipulating the results of a Twitter poll with inauthentic accounts. The spammers allow users to buy votes in chunks, some offering 15,000 votes on a given poll for a little over $130 or smaller responses for just 19 cents a vote with “guaranteed fast delivery” that’s “100% Confidential.”
For what it’s worth, the Rolling Stone article perhaps gives a little too much credence to the idea that Musk ever seriously considered “bots” a problem on Twitter. It was always clear that it was a pretense to try to get out of the deal. So the fact that he pretended to care about bots on the platform for a few months shouldn’t be taken to mean he really believes it’s a problem. Especially right now when he desperately wants to show growth to woo back advertisers who have abandoned ship.
The Rolling Stone piece does a nice job also highlighting how Musk’s recent claims of big increases in the mDAU may also be facing the same issue as the polls: a lack of staff manually removing spammers:
But it’s not clear how much of that claimed growth is authentic. Asked whether those numbers could be inflated by spam accounts, the former Twitter staffer told Rolling Stone: “No fucking doubt.”
“Think about it: On any given week, [the security] team removed millions of accounts manually,” the source said.
Of course, on Wednesday, Musk publicly claimed that the site was removing a bunch of spammers:
Of course, somewhat hilariously, the purge ended up killing a bunch of high profile legitimate accounts. Early on, there were reports of some high profile “left leaning” critics of Musk who were removed, leading to claims that the Muskian Twitter was dealing in “anti-left bias,” but as with the years of false claims under the old regime of “anti-conservative bias,” the reality appeared to be much more mundane: the impossibility of doing content moderation well at scale. Indeed, some of the other accounts that were suspended included Elon Musk’s most vociferous number one fan.
Turns out content moderation, including dealing with spam and bots, is, you know, not easy.
Going to put this up front, because I expect a bunch of people to not read and assume something very incorrect: I think there are valid arguments (even pretty strong ones) for why it makes sense for social media platforms to allow Donald Trump on them (there are also valid arguments against it). But, conducting a poll is the stupidest possible way to make that decision. It’s Musk’s platform, and he’s free to run it however he wants, even making the stupidest possible decisions. But it should raise questions among its users whether or not they wish to embrace such a platform, and just how much damage Musk will do in pursuit of stunts.
Making serious decisions, which can have massive impact on people’s lives, through stunts is not just reckless, but it foreshadows much more dangerous decision-making to come.
Now, to the details: as you’re probably aware, over the weekend Elon Musk ran a poll on Twitter asking people whether or not Donald Trump’s Twitter account should be reinstated:
This is despite his earlier claim that no decisions would be made on changes to trust & safety policies or the reinstatement of accounts until a “content moderation council” could be convened:
While the poll started out with Trump heavily, heavily favored (which perhaps says something about Musk’s staunchest supporters), over the course of 24 hours, it moved more and more towards even, but ended (as you can see above) barely in the “yes” column. Musk then immediately announced that the public had spoken and he was reinstating the account, repeating the very, very stupid Latin phrase “vox populi, vox dei” (some recently departed Twitter employees informed me that he’s been saying this all the fucking time in meetings, and people are mocking him for it behind his back, but he seems to think it makes him sound cool).
A few minutes after that tweet went up, Trump’s account came back. As of me writing this, Trump has not tweeted again from the account. When asked, he has insisted that he’s staying on his own flailing platform Truth Social. According to SEC filings, part of Trump’s deal with Truth Social is that he signed a contract obligating him to use the site rather than other social media. There’s a literal clause in the agreement that his social media activity must appear exclusively on Truth Social for at least six hours. It’s in the section on “license agreement” and notes:
From December 22, 2021, until the expiration of 18 months thereafter, (the “TMTG Social Media Exclusivity Term”), President Trump has agreed to first channel any and all social media communications and posts coming from his personal profile to the Truth Social platform before posting that same social media communication and/or post to any other social media platform that is not Truth Social (collectively, “Non-TMTG Social Media”) until the expiration of “DJT/TMTG Social Media 6-Hour Exclusive” which means the period commencing when DJT posts any social media communication onto the Truth Social Platform and ending six (6) hours thereafter; provided that he may post social media communications from his personal profile that specifically relates to political messaging, political fundraising or get-out-the vote efforts at any time on any Non-TMTG social media platforms. Unless notice is given, the TMTG Social Media Exclusivity Term extends in perpetuity for additional 180-day terms.
Of course, Trump’s signature on a licensing agreement is about as trustworthy as Elon’s promise of no reinstatements until his content moderation council met.
For what it’s worth, in addition to reinstating Trump, Musk reinstated various other awful people, mocked the head of the ADL, Jonathan Greenblatt (who I have policy differences with, but Musk’s mockery immediately resulted in a bunch of Twitter users gleefully sharing anti-Semitic comments, claiming Musk was signaling to them directly), and made it clear he is in favor of chaos for the sake of chaos with no concern over what harm it might do.
Well, there was one exception to that. Musk has said a few times now that he won’t reinstate Alex Jones, and when pressed on it, claimed that it was because “My firstborn child died in my arms. I felt his last heartbeat. I have no mercy for anyone who would use the deaths of children for gain, politics or fame.”
And, of course, this partly demonstrates the problem. Musk recognizes the potential harms in one area of trauma that he has personally experienced, but seems to not care one bit about harms others have experienced.
In some ways, it’s the worst of what people assume about content moderation on most websites: that it’s driven entirely by the whims of an out of touch billionaire CEO. In most cases, that’s not true. Here, Elon is making it clear that’s how it will work on his Twitter.
Perhaps equally problematic was that, this weekend, after Jordan Peterson played the “white man’s gambit” of arguing for less anonymity, and Jack Dorsey piped in to suggest that would be a bad idea, Musk popped in to note that “Verification through the payment system plus phones, but allowing pseudonyms is the least bad solution I can think of.”
Again, this is telling. Musk is focused on “the least bad solution” that he can think of, rather than, perhaps, talking to any of the many, many people who have actually studied this issue and found that forced verification is extremely dangerous for free speech, especially for those with legitimate reasons to fear for their safety. People speaking out against authoritarian rulers. People blowing the whistle on malfeasance. Victims of domestic violence or sexual assault calling it out.
But, again, Musk hasn’t experienced any of that personally, so why should it matter?
Bringing this back around to the point: it’s impossible to do content moderation well at scale. Everyone makes tons of mistakes. But there are real lessons out there on things that work well and things that are stupid and dangerous. And Musk is making it clear that he wants to ignore all of those lessons, and redo all the mistakes, perhaps making them worse in the process. It’s possible that he’ll run the learning curve and eventually land back where things kinda were before, with less clarity and understanding along the way; we sorta predicted that back in April.
I’m not necessarily upset that Trump’s account is back (whether he tweets or not). I do think, however, the process by which Musk got there demonstrates a near total lack of concern for how any of this can and should work, and especially no concern for the harm he can do to others in the process.
Back when Twitter initially decided to issue a permanent ban on Trump, I wrote a long post detailing how such a decision could not be an easy one, and there were plenty of arguments against it. But, in the end, the various platforms had to weigh a variety of factors, including how responsible they wished to feel concerning the attempted overturning of an election. Similarly, when the Oversight Board was reviewing Facebook’s decision to ban Trump, we filed a comment that did not take a stand on either side of the central question, but did advocate for a much better process in how Facebook makes such a decision. We concluded that comment by noting:
There may not be any one right answer, or even any truly right answer. In fact, in the end the best decision may have little to do with the actual choice that results but rather the process used to get there.
And that takes us back to Musk’s decision making here. If you’re going to do content moderation and trust & safety, having some sort of underlying process and principles is important. That’s not to say they can’t change over time, or that they won’t face challenges as every possible edge case shows up, such that you realize that nearly every case feels like an “edge case” that doesn’t neatly play into the rules. But you need to have some sort of basic concepts behind what you’re doing.
Throwing it entirely open to a vote is, to put it mildly, crazy. I mean, for all of Musk’s silly pretentious “vox populi, vox dei” stuff, plenty of people have pointed out that the phrase originates from Alcuin of York in a letter to Charlemagne in 800, in which he warns that believing such a thing is dangerous:
“And those people should not be listened to who keep saying the voice of the people is the voice of God, since the riotousness of the crowd is always close to insanity.”
Of course, like so many things, there are situations where a “democratic” vote makes sense, and many where it does not. A purely democratic vote can be used to oppress a minority, for example. Also, a simple poll on Twitter… is not a representative sample. There are all sorts of problems with it. First of all, Musk set up a simple yes/no option, when it could have been a lot more nuanced than that. But by framing it the way he did, those are the only choices. Then there are the questions of who actually saw it and who voted. That’s not public at all.
Finally, for months (literally until a month ago), Musk insisted that Twitter was full of bots, not people. And, even here, he admitted partway through the vote that “bots” were voting. Though, of course, he insisted that it was only the people voting “no” who must be bots and “troll armies.” Again, that certainly does not suggest that anything about this poll is “the voice of the people.” Not only is he admitting that much of it is not, in his belief, he is publicly stating his own bias regarding what the correct answer should be.
Through all of this, Musk has made clear that the content moderation practices for Twitter are now whatever he thinks of, on a whim, that will be most entertaining for himself. He has no real process. He has no real principles. He does not care one bit about past lessons. He does not care about what damage or danger his whims may cause. None of that matters to him.
And voting is not how content moderation decisions should be made, at least not without significant effort and education going into the process. Merely asking people “yes or no” without detailing the tradeoffs, or the nuances, or the specific reasons why suggests a lack of concern not just for how all of this plays out, but for having an informed public weighing in at all.
He is, of course, free to do all of that (within certain limits). But it does not mean that people will enjoy being on his site, or that advertisers will feel comfortable putting their brands on the site. It has convinced me to spend less time there, as it does not feel safe at all, and I no longer have any confidence that there are people in a decision-making role at the company who can be trusted to want to do the right thing, even when the right thing may be impossible to do.
Musk does not care about doing the right thing. He cares about attention. It’s a choice he is free to make. But it’s not one that I need to support.
‘Biden’s Gestapo’? Trump Raid Hurts Voter Trust in FBI
And says this in the body of the post:
A new national telephone and online survey by Rasmussen Reports finds that 44% of Likely U.S. voters say the FBI raid on Trump’s Florida home made them trust the FBI less, compared to 29% who say it made them trust the bureau more. Twenty-three percent (23%) say the Trump raid did not make much difference in their trust of the FBI.
But it really did nothing of the sort. The poll data actually show nothing more than the amplification of echoes in chambers built specifically for the purpose of amplifying echoes.
The data say completely unsurprising things, like the fact that people prone to be pissed off about the FBI’s raid of Trump’s home are now angry at the FBI. The largest percentage of poll respondents who have a very unfavorable impression of the FBI following the raid are white, male Republicans above the age of 40 — more than double any other demographic.
And that trend holds, again unsurprisingly, when Rasmussen asked specifically about the Mar-a-Lago raid:
Not unexpectedly, the same sort of responses were given to Rasmussen’s much more loaded question, about whether there is a group of “politicized thugs at the top of the FBI who are using the FBI… as Joe Biden’s personal Gestapo.”
One would think a national pollster might avoid directly quoting long-time political operative/Trump pardon recipient Roger Stone while conducting a poll, but here we are. Rasmussen does not note how many times poll respondents uttered the phrase “Let’s go, Brandon!” during these interactions.
This poll doesn’t show anything anyone couldn’t have assumed following the search of Trump’s house. Democrats trust the FBI just a bit more than they already did. Republicans got even angrier at an agency they really haven’t cared for since then-FBI Director James Comey rebuffed Trump’s demands for total fealty. And Comey was the one who won over Trump fans — at least momentarily — by publicly reopening the FBI’s investigation into Hillary Clinton’s private email server just days before the 2016 election.
But those whose love and hate of the FBI are closely tied to their political allegiances are dupes falling victim to short cons. The long con is the agency itself, which may not be the amoral entity it was under J. Edgar Hoover, but still has a long way to go before anyone should consider it inherently trustworthy.
It still is amazing to me how many people in the more traditional media insist that social media is bad and dangerous and infecting people’s brains with misinformation… but who don’t seem to recognize that every single such claim made about Facebook applies equally to their own media houses. Take, for example, CNN. Last week it excitedly blasted out the results of a poll that showed three fourths of adults believe Facebook is making society worse.
Now, there is an argument that Facebook has made society worse, though I don’t think it’s a particularly strong one. For many, many people, Facebook has been a great way to connect and communicate with friends and family — especially during a pandemic when many of us have been unable to see many friends and family in person.
Either way, it’s undeniable that the traditional media — which, it needs to be noted, compete with social media for ad dollars — has spent the last five years blasting out the story over and over again that pretty much everything bad in the world can be traced back to Facebook, despite little to no actual evidence to support this. So, then, if CNN, after reporting for five years about how terrible and evil Facebook is, turns around and polls people, of course most of them are going to parrot back what CNN and other media have been saying all this time. Hell, I’m kind of surprised that it’s only 76% of people who claim Facebook has made society worse.
I mean, just in the past couple months, every CNN story I can find about Facebook seems to be completely exaggerated, with somewhat misleading claims blaming pretty much everything wrong in the world on Facebook. It’s almost like CNN (and other media organizations) are in the business of hyping up stories to manipulate emotions — the very thing that everyone accuses Facebook of doing. Except with CNN, there are actual human employees making those decisions about what you see. Which is not how Facebook works. Here are just a few recent CNN stories I found:
I mean, if all my info about Facebook came from CNN, I’d agree that it was making society worse. But I could just as easily argue that CNN is making society worse by presenting a very misleading and one-sided analysis of anything having to do with Facebook. Hell, CNN is owned by AT&T, which (1) has been trying and failing to compete with Facebook in the internet ads business, and (2) literally paid to set up an outright propaganda network known as OAN. I think there’s tremendous evidence to suggest that AT&T is making society way worse than anything that Facebook has ever done.
This is not a defense of Facebook, because I still believe the company has lots and lots of problems. But the idea that a poll from CNN tells us anything even remotely useful or enlightening is just pure misinformation.
There’s been plenty of talk lately about the “Techlash” which has become a popular term among the media and politicians. However, what if the general public feels quite differently? Vox, which is not exactly known for carrying water for the tech industry, has released a new poll that shows that the public is overwhelmingly optimistic about technology, and thinks that technology has been a force for good in the world. This applies across the board for Democrats, Republicans, and independents.
Seventy-one percent of likely voters agreed with the tech-optimist statement: “Technology is generally a force for good. Large tech companies have provided innovations like vaccines, electric vehicles, bringing down the cost of batteries that store green energy, vegetarian meat options, and other ways that have improved our quality of life.” Only 19 percent agreed with the tech-pessimist statement: “Technology is generally a force for bad. Large tech companies are bad for workers, inequality, and democracy. The technological innovations they produce are not worth the cost.”
When put into chart form, the results are really, really striking:
Obviously, “technology” covers a lot more than the big internet companies — and the messaging that Vox tested highlights mostly non-internet innovations. But, still. The fact that the “control” group — those who didn’t even receive the specific messaging — felt even more strongly about the good technology does in the world than those who were first primed with statements about other kinds of technology is really something.
There are plenty of examples, certainly, of tech gone awry, but it really seems that the general public recognizes all of the good that innovation and technology have done for the world, and feels optimistic about it. Of course, none of that will stop “the narrative” of the techlash, because it’s just too useful for many pushing it.
It appears that the various election polls that predicted Joe Biden would become the 46th President of the United States eventually proved accurate — the current President’s temper tantrum notwithstanding — but that doesn’t mean the polls did a good job. In fact, most people are recognizing that the pollsters were wrong in many, many ways. They predicted a much bigger win for Biden, including multiple states that easily went to Trump. They completely flubbed many down ballot House and Senate races as well. Pollsters are now trying to figure out what went wrong and what these misses mean, coming on the heels of a set of bad predictions in 2016 as well. It’s likely there isn’t any simple answer, but a variety of factors involved.
However, what interests me is the simple fact that it turned out that the major polls were actually widely shared misinformation that spread all over social media, presenting incorrect information about the election — some of which almost certainly had the likelihood of impacting voting behavior.
Now, to be clear, I’m not saying the polls were disinformation deliberately spread with the knowledge that it was false. I’m saying they were misinformation. Information that turned out to be false, but was spread, often widely, by those who believed it or wanted to believe it. And, it was exactly the kind of misinformation that had a decent likelihood of impacting voting behavior.
But that leaves open a big question: with so many people (including many in the media and a few legislators) demanding that social media websites “crack down” on “misinformation,” especially with regards to an election, the fact that polling turned out to be misinformation presents something of a challenge. I think most people would say that it would be crazy to say that social media shouldn’t allow polling information to be spread (or even to go viral). Yet, with so many people calling for a crackdown on “misinformation,” how do you distinguish the two?
Some will argue that they only mean the kinds of misinformation that are being spread with ill-intent, though that quickly leaps over to disinformation or requires social media companies to be the arbiters of “intent,” which is not an easy task. Others will argue that this is more “well meaning” information, or that it’s merely a prediction. But lots of other misinformation could fall into that category as well. Or some might argue that accurately reporting on what the polls say isn’t misinformation — since it’s accurate reporting, even if the results don’t match the predictions. But, again, the same could be said for other predictive bits of misinformation as well.
In short: any of the ways you might seek to distinguish these polls, you can almost certainly apply back to other forms of misinformation.
I raise this issue primarily to ask that people think much more carefully about what they’re asking for when they demand that social media sites moderate “misinformation.” Especially with an incoming Biden administration that has already suggested that one of its policy goals is to target misinformation online. It’s one thing to say that, but it’s another thing altogether to define misinformation in a manner that doesn’t lead to plenty of perfectly legitimate information — such as these misleading polls — being targeted as well. At the very least, we should start to distinguish the important differences between misinformation and disinformation.
Perhaps, rather than demanding that the first response to misinformation be that it be removed, we should think about more ways to add more context around it instead.
We’ve seen all sorts of crazy defamation claims over the past few years, but this may be the dumbest. You may have heard that our thin-skinned President is very unhappy about various polls showing that the American public isn’t much interested in buying what he’s selling. He even hired a pollster, McLaughlin & Associates, with a notoriously terrible record to come up with new polls after seeing more polls that don’t reflect the reality he’d like. Which, of course, is his prerogative. It’s easier to hide from the truth if you can make up lies to surround yourself with.
The only problem: the other stuff is still out there. And so the Trump Campaign took things up a notch, sending CNN a letter demanding it retract the poll and apologize. CNN reported on this, but didn’t share the full letter. In what can only be described as a self-own, a legal adviser to the Trump Campaign, Jenna Ellis, decided to publish the letter she had sent to CNN. She claimed she was posting the full thing on Twitter because CNN’s reporting was an “attempt to skew the narrative.”
Except… the letter itself makes the story look even worse. The CNN article quoted only parts of the letter (the silly claims that “it’s a stunt and a phony poll to cause voter suppression, stifle momentum and enthusiasm for the President”) but left out the insane legal threat that the poll was defamatory. Yes, the poll. Defamatory. A poll? Defamatory. That is not how any of this works. But, from the letter:
The poll is intentionally false, defamatory and misleading, and designed to harm the Donald J. Trump for President, Inc. campaign.
You are officially on notice of this dispute and therefore are required to undertake steps to affirmatively preserve, and not delete, any and all physical and electronic documents, materials, information, and data that pertain in any way to the June 8, 2020 poll, including without limitation all emails, text messages, instant messages (IMs/DMs), letters and memoranda, articles, and social media postings (including all drafts as well as final version of all written communications), as well as other types of written, physical and digital materials, including handwritten notes, typerwritten notes, summaries, charts, receipts, audio recordings, video recordings, photographs, telephone call logs, calendar entries of al [sic] types, financial data and information, etc. that pertain in any way or might otherwise be relevant or related to the foregoing matters. All sources of documents, materials, information, and data should be preserved, including without limitation, physical files, electronic files, computer servers, email servers, backup tapes, cloud storage, personal computers, hard drives, smart phones, tablets, and other types of storage devices including external drives, thumb drives, zip drives, disks and DVDs. Failure to affirmatively preserve such documents and materials could result in severe sanctions imposted [sic] by a court which could include, among other remedies, monetary sanctions, evidentiary sanctions, issue sanctions and/or the striking of answer and entry of default judgment.
Yup, CNN. Make sure you retain those zip drives. What is this, 1996?
Anyway, a poll is not defamatory. By definition. And, of course, by sending such a bogus defamation threat letter, full of that lawyerly garbage, all it’s really done is call that much more attention to just how badly Trump appears to be doing right now, according to public perception.
Meanwhile, once the campaign published its letter, CNN decided to publish its response, written by the amazingly named General Counsel of CNN, David Vigilante:
To my knowledge, this is the first time in its 40 year history that CNN has been threatened with legal action because an American politician or campaign did not like CNN’s polling results.
To the extent we have received legal threats from political leaders in the past, they have typically come from countries like Venezuela or other regimes where there is little or no respect for a free and independent media.
CNN is well aware of the reputation of John McLaughlin and McLaughlin & Associates. In 2014 his firm famously reported that Eric Cantor was leading his primary challenger Dave Brat by 34 points only to lose by 11 points – a 45 point swing. The firm currently has a C/D rating from FiveThirtyEight.
In any event, McLaughlin was able to evaluate and criticize CNN’s most recent poll because CNN is transparent and publishes its methodology along with its polling results. Because of this, McLaughlin was free to publish his own critique of CNN’s analysis and share his criticisms across the U.S. media landscape. That’s how free speech works. It’s the American way.
Your letter is factually and legally baseless. It is yet another bad faith attempt by the campaign to threaten litigation to muzzle speech it does not want voters to read or hear. Your allegations and demands are rejected in their entirety.
Yes, but how do you really feel?
Somehow, I get the feeling that a federal anti-SLAPP law is unlikely under the current President.
Sometimes public sentiment is useful. And sometimes it’s only useful in demonstrating how little the general public understands some issues. It would appear that a new survey done by the Knight Foundation about how the internet giants should handle “news” content is one of the latter ones. While there’s lots of discussion about what the poll results “say,” the only thing they really say is that the public has no clue about how the internet and news works — and that should be the focus. We need much greater tech and media literacy. Unfortunately, the poll seems more likely to do the opposite.
There are two “headline” findings out of the report — and the fact that the two are almost entirely contradictory should have maybe been a warning sign:
Internet platforms shouldn’t make any effort to “customize” the newsfeed that you see and show the same thing to everyone
Internet platforms should be “subject to the same regulations as newspapers and TV stations.”
Let’s dig in a bit to the full survey. The first point is actually split into two separate parts, and the results aren’t surprising, but (again) really seem to demonstrate media and tech illiteracy more than anything else. First, they were asked if they think it was a good or bad idea for platforms to target news based on interests, and the breakdown here really isn’t that definitive in either direction. It’s pretty split:
This seems perfectly reasonable because the question is kind of… dumb? I mean, it really depends on what kind of service I’m using. If I’m using Twitter, which is a feed-based system where the entire point is to have an ongoing stream of everything the people/organizations I follow tweet, then, I’d be against targeting news, because that’s not what I use Twitter for. If it were something else, like Google News, where the entire point is to recommend news, then I’m a lot more open to it. So, the whole premise of this question seems silly. Really, the issue here is that people should know what kind of site they’re looking at, and whether it’s recommending news or giving you a firehose of news. That’s all. But this question doesn’t get at that, and I’m guessing that’s why the responses are so mixed.
But then there’s a question about whether platforms should “exclude” certain kinds of content. And we get the following responses:
And there’s a related question about how concerned people are about platforms excluding news:
Again, while these results can make headlines, they all seem kind of useless. Most people feel that platforms should exclude content that is “misinformation.” Well, duh. But that doesn’t tell us anything all that interesting, really. And then the latter results seem to conflict with that view, because they suggest that the vast majority of people are concerned that platforms excluding news would give people a biased view and restrict expression and such. But… they want to exclude misinformation.
Basically, it all goes back to one of the key problems that we’ve had with the big debate on content moderation. Tons of people are for moderation of “bad” content, but they worry (correctly) that moderation done badly will do bad things. And they don’t trust the moderators from big tech companies. What does that really tell us? Not very much. Because all of these are conflating a bunch of different sites and different issues, such that anyone can basically cherry pick what they want out of it to try to support their own position.
Want to use this study to show that internet platforms should moderate content to get rid of “fake news” (without ever defining fake news)? Show the results that people want platforms to intervene there. Want to use this study to show that people don’t trust internet platforms to moderate at all? Show the other results. What useful points can be gleaned from this? So far, mainly just that this study is useless. So, you see stories about this study claiming that Americans think platforms should stop filtering news, even though only some of the study says that, and other parts say the exact opposite.
And then there’s another hidden tidbit that was the lead in some of the press coverage: that “Americans favor more regulation of internet sites.” Regulation of what and how? Well, again, this study fails completely in that it never actually says. The only bit on regulation is the following:
Seventy-nine percent of Americans strongly or somewhat agree that major internet companies should be subject to the same rules and regulations as newspapers and broadcast news stations are. Twenty percent strongly or somewhat disagree.
That’s it. Literally, that’s it. There’s a little bit more of a discussion about the breakdown based on age, but there is no discussion of what the fuck this even means — because it means literally nothing. What “regulations” do newspapers and broadcast news face? Well, not much? But, it really kind of depends. Broadcast news may face some FCC regulations because they use the public airwaves. But newspapers don’t. And internet sites don’t. Because they don’t use the public airwaves. Other than that, they already face the same basic “rules and regulations.” So it’s not at all clear how — as a bunch of people have claimed — this study supports the idea for “increased” regulation of internet sites.
Honestly, this feels like a kind of push poll, and it’s kind of shameful that the Knight Foundation and Gallup — both of which should know better — would do such a thing. After asking all these random, amorphous, meaningless questions about internet platforms, they jump in with a question about regulating the platforms without ever defining what regulations they’re talking about, in an area where the vast majority of the public has no idea what those limited regulations are. What good is that, other than to get people to say “sure, they should all be on an equal footing”?
About the only interesting tidbit I can find in the entire damn study is that those who are moderately technically savvy — defined as “very familiar with computer algorithms” — are marginally more inclined than those who are less tech savvy to say that end users should be responsible for finding accurate and unbiased news, rather than the internet platforms. But that’s really about it.
So, what are we left with? A weird survey that feels more like a push poll, and that lets anyone take the info and make any argument they want. The only conclusions that really seem to come out of it are: (1) this is a really awful and confusing survey, and (2) most people have no idea what regulations there are around news.
Let’s jump right into this, because this post is going to be a bit on the wonky side. It’s presidential silly season, as we have said before, and this iteration of it is particularly bad, like a dumpster fire that suddenly has a thousand gallons of gasoline dropped onto it from a crop-duster flown by a blind zombie. Which, of course, makes it quite fascinating to watch for those of us of an independent persuasion. Chiefly interesting to me is watching how the polls shift and change with each landmark on this sad, sad journey. It makes poll-aggregating groups, such as the excellent Project FiveThirtyEight, quite useful in getting a ten-thousand-foot view of how the public is reacting to the news of the day.
But sites like that obviously rely on individual polls in order to generate their aggregate outlooks, which makes understanding, at least at a high level, just how these political polls get their results interesting as well. And, if you watch these things like I do, you have probably been curious about one particular poll, the U.S.C. Dornsife/Los Angeles Times Daybreak poll, commonly shortened to the USC/LAT poll, which has consistently put out results on the presidential race that differ significantly from other major polls. That difference has generally amounted to wider support for Donald Trump, with specific differences in support among certain demographics. To the credit of those who run the poll, they have been exceptionally transparent about how they generate their numbers, which led the New York Times to dig in and try to figure out the reason for the skewed results. It seems an answer was found, and it’s gloriously absurd.
There is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election. He is sure he is going to vote for Donald Trump. Despite falling behind by double digits in some national surveys, Mr. Trump has generally led in the USC/LAT poll. He held the lead for a full month until Wednesday, when Hillary Clinton took the nominal lead. Our Trump-supporting friend in Illinois is a surprisingly big part of the reason. In some polls, he’s weighted as much as 30 times more than the average respondent, and as much as 300 times more than the least-weighted respondent.
Alone, he has been enough to put Mr. Trump in double digits of support among black voters. He can improve Mr. Trump’s margin by 1 point in the national survey, even though he is one of around 3,000 panelists.
So, how does one person manage to skew a major national political poll in favor of one candidate to the tune of entire percentage points? Well, it turns out that several factors converged to pretty much mess everything up: who is included in the poll and how often, how the poll respondents are weighted, and how this one particular voter fits into the demographic weighting. Let’s start with the weighting.
The USC/LAT poll does things a bit differently than the other national polls. All polls weight respondents by demographics to correct for differing voting tendencies. The math can get gory, and the NYT post does a good job of going through it, but you can think of it like this, as a very imprecise example: a poll respondent from the 18-35 demographic will be weighted less than a respondent from the 36-55 demographic, because the latter demo is more likely to actually show up and vote than the former. There is indeed some subjectivity in this, but the large demographic weighting drives the error margin down for the most part. But the USC/LAT poll deviates from the large-demo weighting and instead weights at very small demographic levels.
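To make the mechanics concrete, here’s a minimal sketch of how this kind of post-stratification weighting works: a respondent’s weight is the ratio of their demographic cell’s share of the population to that cell’s share of the sample. The cells and counts below are invented for illustration (only the 3.3 percent figure for 18-21 year old men comes from the poll’s own estimate), and a real poll balances many overlapping cells at once.

```python
# Hypothetical post-stratification sketch. A lone 18-21 year old man
# against 600 older men: the small cell is badly underrepresented, so
# the weighting formula hands him an enormous weight.

population_share = {"men 18-21": 0.033, "men 36-55": 0.170}  # invented, except 3.3%

sample = ["men 18-21"] + ["men 36-55"] * 600

n = len(sample)
weights = {
    # weight = (cell's share of population) / (cell's share of sample)
    cell: population_share[cell] / (sample.count(cell) / n)
    for cell in population_share
}

for cell, w in weights.items():
    print(f"{cell}: weight {w:.1f}")
```

The lone young man ends up weighted roughly twenty times the norm, while each of the 600 older men is weighted well below one, which is exactly the dynamic described in the quote below.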
The USC/LAT poll weights for many tiny categories: like 18-21 year old men, which the USC/LAT poll estimates make up around 3.3 percent of the adult citizen population. Weighting simply for 18-21 year olds would be pretty bold for a political survey; 18-21 year old men is really unusual…When you start considering the competing demands across multiple categories, it can quickly become necessary to give an astonishing amount of extra weight to particularly underrepresented voters — like 18-21 year old black men.
Which is how our single friend in Illinois became the poll’s most weighted voter, being a 19 year old black man. The heavy weighting on tiny demographic categories caught him several times and, since he is voting for Trump, despite his demographic generally not voting for Trump, his heavily-weighted response skews things wildly. But that isn’t all.
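The arithmetic behind that skew is easy to check. Using the figures from the NYT excerpt above (a panel of about 3,000, one panelist weighted as much as 30 times the average), a toy simulation shows that flipping just that one panelist’s response moves the weighted topline by about a point. The 45 percent baseline is a made-up placeholder; only the panel size and the 30x weight come from the article.

```python
import numpy as np

n = 3000
weights = np.ones(n)          # everyone else sits at the average weight
weights[0] = 30.0             # the heavily weighted 19-year-old panelist

rng = np.random.default_rng(42)
votes = rng.random(n) < 0.45  # True = supports Trump (hypothetical baseline)

def trump_share(votes):
    # weighted percentage supporting Trump
    return 100.0 * weights[votes].sum() / weights.sum()

votes[0] = True
with_him = trump_share(votes)
votes[0] = False
without_him = trump_share(votes)

print(f"one panelist shifts the topline by {with_him - without_him:.2f} points")
```

The shift is 100 × 30 / 3029, or just under one full point, from a single respondent out of three thousand.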
The USC/LAT poll does something else that’s really unusual: it weights the sample according to how people said they voted in the 2012 election. The big problem is that people don’t report their past vote very accurately. They tend to over-report three things: voting, voting for the winner and voting for some other candidate. They underreport voting for the loser. By emphasizing past vote, they might significantly underweight those who claim to have voted for Mr. Obama and give much more weight to people who say they didn’t vote.
Which, again, catches our friend from Illinois. At nineteen, he obviously didn’t vote in the last election. So his response is weighted even more. Using the poll’s own data, the New York Times re-ran the poll using the same broad categories most other major polls use. When done, Hillary Clinton led in every single one of the iterations except for the one immediately following the GOP convention. The difference between the poll’s results as reported and what they would be with the normal weighted categories and the omission of the past-vote weighting ranged from 1-4 points. In a political poll, that’s enormous.
The final factor here is that the USC/LAT poll is a panel poll, which means that the same respondents are used each time the poll is run. So, our young black Trump-voting man from Illinois got to skew these results nearly each and every time. The one time he failed to respond to the poll, Hillary Clinton suddenly led within it. As the NYT notes:
The USC/LAT poll had terrible luck: the single most overweighted person in the survey was unrepresentative of his demographic group. The people running the poll basically got stuck at the extreme of the added variance.
And, of course, the poll aggregators might include this poll, skewing the aggregate numbers as well. This isn’t to say that all polls are skewed in the same manner. They aren’t. The reason this is a story is because this poll is the outlier. But it is kind of fun to see how badly the sausage can be made if the methodology isn’t in tune.
The folks over at Pew Research usually do pretty good work, but they decided to weigh in on the Apple / FBI backdoor debate by asking a really dumb poll question — the results of which are now being used to argue that the public supports the FBI over Apple by a pretty wide margin.
But, of course, as with everything in polling, the questions you ask and how you phrase them are pretty much everything. And here’s the thing. The question asked was:
As you may know, RANDOMIZE: [the FBI has said that accessing the iPhone is an important part of their ongoing investigation into the San Bernardino attacks] while [Apple has said that unlocking the iPhone could compromise the security of other users’ information] do you think Apple [READ; RANDOMIZE]?
(1) Should unlock the iPhone (2) Should not unlock the iPhone (3) Don’t Know.
But that’s not the issue in this case!
As noted in the past, when it’s possible for Apple to get access to data, it has always done so in response to lawful court orders. That’s similar to almost every other company as well. This case is different because it’s not asking Apple to “unlock the iPhone.” The issue is that Apple cannot unlock the iPhone and thus, the FBI has instead gotten a court order to demand that Apple create an entirely new operating system that undermines the safety and security of iPhones, so that the FBI can hack into the iPhone. That’s a really different thing.
And Pew does a massive disservice by (1) asking the wrong question and then (2) making people think that the public supports the FBI’s view, when Pew itself misrepresented the issues in the case in the first place. And of course, the mainstream media, like the Washington Post (which is normally better than this), puts out a bullshit story claiming that “Apple is fighting a war most Americans don’t believe in.” But that’s not what the poll actually says. You’d think that reporters might actually take the time to understand the story and the poll first, but apparently that’s too difficult, as compared to the easy, if misleading, headline.
As Ed Snowden himself pointed out, this is nothing more than misinformation:
Pew poll finds when the government misinforms the public, the public is misinformed. Scientists baffled. https://t.co/8LcRg7ismw
But even that’s not entirely accurate. In this case, it really seems like the fault is with Pew for asking a misleading question about an issue that is not up for debate here. And the press is similarly at fault for running with it and appearing not to understand either. Yes, the government is partially responsible for being misleading, but in this case, I’d put it third in line behind Pew and reporters. If people were accurately informed and actually understood the real issues, the poll results would likely be quite different. But Pew didn’t bother to understand the issue, and asked a question that totally misrepresents it, to the point that the results are completely meaningless to the ongoing debate.