Dubious Studies And Easy Headlines: No, A New Report Does Not Clearly Show Facebook Leads To Hate Crimes

from the it-ain't-that-simple dept

You may have seen a big story from the NY Times making the rounds this week, entitled Facebook Fueled Anti-Refugee Attacks in Germany, New Research Suggests. For fairly obvious reasons, lots of people are interested in this story. There’s plenty of concern lately about how social media is impacting our lives, and clear evidence of it leading to attacks on refugees would certainly be useful. The NY Times, in ways that the NY Times does best, is somewhat breathless in propping up the claims in the report.

One thing stuck out. Towns where Facebook use was higher than average, like Altena, reliably experienced more attacks on refugees. That held true in virtually any sort of community — big city or small town; affluent or struggling; liberal haven or far-right stronghold — suggesting that the link applies universally.

Their reams of data converged on a breathtaking statistic: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 percent.

Nationwide, the researchers estimated in an interview, this effect drove one-tenth of all anti-refugee violence.

The uptick in violence did not correlate with general web use or other related factors; this was not about the internet as an open platform for mobilization or communication. It was particular to Facebook.

Those are some fairly bold claims, and certainly worth exploring. However, it’s not exactly clear that the paper can actually support such claims. You can download a copy of the 75-page paper yourself, entitled Fanning the Flames of Hate: Social Media and Hate Crime, by two PhD students, Karsten Müller and Carlo Schwarz, both from the University of Warwick. For what it’s worth, people have pointed out that this paper has not yet been peer reviewed, and an earlier version of it got some less breathless press coverage a few months ago. But, the NY Times is the NY Times.

The paper definitely presents some interesting data, and it should be applauded that researchers are exploring these issues — though separating out the actual causal variables seems like a difficult task. The researchers do appear to have fairly thorough data on anti-refugee attacks throughout Germany. The Facebook data, however, seems a lot less solid. A few people have been breaking down the problems with the study online, including Jonas Kaiser, Dean Eckles and Hal Hodson, who all convincingly argue that the NY Times is overplaying what the study actually shows.

Before I dig in a bit, I should note that part of the problem here is that the necessary Facebook data to do this kind of study is hard to come by. Earlier this year, Facebook announced that it would be giving some academics access to data in order to do just this kind of research (though more focused on election impact, the data needed should be similar). And it would be damn helpful if Facebook were willing to give out the kind of data needed to actually do the kind of study presented in this paper.

But, so far, it has not. And that meant these two students had to try to create proxies. And… there are some questions about how they did that. To be fair, the researchers do try a variety of methods and use a variety of tools to see if they can tease out problems with their data or alternative variables that may be driving the results — and thus the research is not just cherry-picking data points. However, it does rely on some fundamental assumptions, and those assumptions may not be valid. Specifically, in an attempt to measure “anti-refugee hate speech,” they focus on one particular page: that of the relatively new, extreme anti-refugee political party Alternative for Germany, referred to as AfD.

We create a measure for the salience of anti-refugee hate speech on social media based on the Facebook page of the “Alternative für Deutschland” (Alternative for Germany, AfD hereafter), a relatively new right-wing party that became the third-strongest faction in the German parliament following the 2017 federal election. The AfD has positioned itself as an anti-refugee and anti-immigration party; with more than 300,000 likes, it also has more followers than any other German party on Facebook (see Appendix A for a history of the AfD). This widespread reach makes the AfD’s Facebook page uniquely suited to measure anti-refugee sentiment on social media.


We measure the exposure of a municipality to Germany-wide anti-refugee sentiment using the share of the population that is active on the AfD Facebook page. Using this proxy for social media usage, we found that anti-refugee hate crimes disproportionally increase in areas with higher Facebook usage during periods of high anti-refugee salience. This effect is especially pronounced for violent incidents against refugees, such as arson and assault. Taken at face value, this suggests a role for social media in the transmission of online hate speech into violent crimes.

And then there’s… this. Rather than measure overall Facebook usage in Germany (again, here’s where Facebook could be of assistance), the researchers measured… Facebook likes of the “Nutella Germany” page (for what it’s worth, Kaiser claims there is no Nutella Germany page). Why Nutella? Your guess is as good as mine. The researchers sort of brush past why that’s the correct metric:

We further probe the social media channel in additional tests. We show that our results also hold for measures of general Facebook usage that are independent of support for the AfD. In particular, we create proxies for municipality-level Facebook usage by drawing on the number of users on the “Nutella Germany” page. With over 32 million likes, Nutella has one of the most popular Facebook pages in Germany and therefore provides a measure of general Facebook media use at the municipality level. We also use the Nutella data to create a dummy for municipalities with many Nutella users within a county. This measure is orthogonal to a plethora of observable characteristics, most importantly general internet usage, voting patterns, education, “pull-factors” such as immigration and religious composition, and proxies of xenophobic attitudes. We show that municipalities with many Nutella users per capita also experience more anti-refugee incidents in times of high refugee salience on the AfD page — unless access to social media is disrupted by internet or Facebook outages.

There’s a lot of jargon in there, but one key point to pull out: the research shows that cities with more Nutella likers per capita experience more anti-refugee incidents. Right. So set aside that they used AfD’s Facebook page as a proxy, and recognize that their study might just as well suggest that eating Nutella leads to anti-refugee violence and… it might call into question some of the usefulness of the study.
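To see why a proxy like this is shaky, consider a toy simulation (my own sketch, not anything from the paper): if a latent variable (say, overall Facebook penetration in a municipality) drives both Nutella likes and AfD-page activity, either proxy will correlate with any outcome that also tracks the latent variable, even with no causal link between proxy and outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical municipalities

# Latent confounder: overall Facebook penetration per municipality
facebook_use = rng.normal(0, 1, n)

# Two proxies that both track the confounder, plus idiosyncratic noise
nutella_likes = 0.8 * facebook_use + rng.normal(0, 0.6, n)
afd_activity = 0.8 * facebook_use + rng.normal(0, 0.6, n)

# Outcome driven ONLY by the latent variable, never by either proxy
incidents = 0.5 * facebook_use + rng.normal(0, 1, n)

# Both proxies nonetheless correlate with the outcome
r_nutella = np.corrcoef(nutella_likes, incidents)[0, 1]
r_afd = np.corrcoef(afd_activity, incidents)[0, 1]
print(f"corr(Nutella likes, incidents) = {r_nutella:.2f}")
print(f"corr(AfD activity, incidents)  = {r_afd:.2f}")
```

Both correlations come out clearly positive here by construction, which is exactly the trap: a positive proxy-outcome correlation alone cannot distinguish “Facebook causes attacks” from “places with more Facebook users have more of everything Facebook-related.”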

There are some other concerns as well. MIT professor and statistician Dean Eckles notes a huge red flag: all of the “robustness” checks in the paper support the thesis. He notes that a good research paper should include robustness tests that push the analysis past the breaking point to see where the theory breaks down. But that does not happen with this paper.
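For a sense of what one such breaking-point check can look like, here is a generic placebo (permutation) test, purely illustrative and not taken from the paper: shuffle the exposure variable across units so the real exposure-outcome link is destroyed, and check that the observed association stands out against the shuffled distribution. If the observed estimate is indistinguishable from the placebo estimates, the "effect" is likely noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical municipalities

# Synthetic data in which exposure genuinely relates to the outcome
exposure = rng.normal(0, 1, n)
outcome = 0.4 * exposure + rng.normal(0, 1, n)

observed = np.corrcoef(exposure, outcome)[0, 1]

# Placebo: shuffling exposure breaks any real link, so these
# correlations estimate what pure chance produces.
placebo = np.array([
    np.corrcoef(rng.permutation(exposure), outcome)[0, 1]
    for _ in range(2000)
])

# Fraction of placebo runs at least as extreme as the observed value
p_value = np.mean(np.abs(placebo) >= abs(observed))
print(f"observed r = {observed:.2f}, placebo p = {p_value:.3f}")
```

The point of Eckles’ critique is the mirror image: a paper whose every check confirms the thesis has, in effect, never run a test capable of failing.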

A separate issue is the eye-opening stat that the NY Times provides: the line that “wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 percent.” However, a number of people are suggesting that this number is incorrect and misleadingly presented. First, while an earlier version of the paper apparently had 50%, the latest version revised that down to 35%. That’s a pretty big difference — and one that the NY Times should correct. Second, the fact that the paper would shift so dramatically on that number between revisions also raises some red flags. Finally, as science writer Ferris Jabr points out in researching this number, there are a lot of assumptions built into this result that are not necessarily well supported.
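Part of why such a headline percentage can swing between drafts: in a log-linear specification, a coefficient β on standardized Facebook use implies roughly a 100·(e^β − 1) percent change in the outcome per one-standard-deviation increase in exposure, so a modest revision of the coefficient moves the headline figure a lot. The coefficients below are purely illustrative, chosen only to show how roughly 0.41 vs. 0.30 maps to roughly 50% vs. 35%; they are not the paper’s actual estimates.

```python
import math

def pct_change_per_sd(beta: float) -> float:
    """Percent change in the outcome implied by a one-standard-deviation
    increase in the exposure, under a log-linear model."""
    return 100 * (math.exp(beta) - 1)

# Illustrative coefficients only -- not the paper's actual estimates
for beta in (0.41, 0.30):
    print(f"beta = {beta:.2f} -> {pct_change_per_sd(beta):+.0f}% change")
```

Because the percentage is a nonlinear transform of the fitted coefficient, any change in controls, sample, or specification between drafts ripples directly into the number a journalist quotes.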

In short, there are a lot of concerns with the conclusions of the paper. Given the huge questions about the assumptions used to reach this result, as well as about the overall quality of the paper, it seems absolutely crazy for the NY Times to run a big article suggesting that these results are solid without raising any of those concerns.

That’s not to say that a more thorough study wouldn’t find the same thing. It might. And, again, this is the kind of thing where it would be super helpful if Facebook were willing to release the necessary data, so no one would have to fiddle around with German Nutella fans. But, at the very least, we shouldn’t be going around making policy based on a draft paper that hasn’t been carefully vetted. And, yet, there are already some reports claiming that the German government is discussing this study, almost certainly based on the NY Times reporting rather than a critical analysis of the study itself.

This study makes huge accusations, and the NY Times has already stretched them much further than they probably should have been stretched — and without presenting much of the context or the concerns about the underlying assumptions. Before governments go rushing off to try to “do something” about this, perhaps we should take a step back and wait for more thorough and better-tested research.



Comments on “Dubious Studies And Easy Headlines: No, A New Report Does Not Clearly Show Facebook Leads To Hate Crimes”

Anonymous Coward says:

Oh look!

First it was Books, BURN THE BOOKS
Then it was music, BURN THE MUSIC
Then it was video, BURN THE VIDEOs
Then it was board games, BURN The BOARD GAMES
Then it was video games, BURN THE VIDEO GAMES
Now it’s news, BURN THE NEWS!

We are going to wake up one day and wonder how we destroyed our own freedom, never realizing that it was our own hate that did it. We hated the extremists, so we weakened liberty to get at them; we hated those in the other party and equated them to the extremists, so we weakened liberty to get at them; then we hated anyone not regurgitating the “approved platform,” so we weakened liberty to get at them. Now, when there is no one left to go after, they are coming for me because I have a voice at all, which might be used to create something they hate!

John Smith says:

Section 230 is what leads to hate crimes, invasion of privacy, ruining reputations and careers, and shifting the costs to the government when people lose jobs and housing. Without Section 230 this wouldn’t happen.

The ability to search for people by name is at the root of this. Facebook is a small part of this problem, but Section 230 is the true cancer. White-collar professionals are now targeted for “reputation extortion” by Russians and others, who threaten to post negative reviews if money is not paid. None of this serves the public interest.

Why do you think my name is John Smith? 🙂

Anonymous Coward says:

Re: Re: "But, the NY Times is the NY Times." -- Your source is in US!

So why this stupid one-liner attempting to trivially and irrelevantly divert? Can’t people refer to similar US matters? Or are we supposed to adhere STRICTLY to your focus so that you turn out right? — And does not Germany / Europe have similar “safe harbor”?

Sheesh. You TOO have only one-liners in response to reasonable points.

Mike Masnick (profile) says:

Re: Re: Re: "But, the NY Times is the NY Times." -- Your source is in US!

So why this stupid one-liner attempting to trivially and irrelevantly divert?

Not trying to divert. He blamed all of this on CDA 230, yet the story is entirely about things happening in Germany. So it seemed relevant to point out that Germany does not have CDA 230.

Can’t people refer to similar US matters?

Of course they can, if it were relevant to the story. But it seems odd to blame things happening in Germany on a US law that does not extend beyond US borders.

Or are we supposed to adhere STRICTLY to your focus so that you turn out right?

No. But it seems reasonable to expect that if you make an argument, it actually applies to the story at hand. I get that that doesn’t always make for good trolling, so perhaps it’s not acceptable to you, but that’s because you’re not very good as a troll.

And does not Germany / Europe have similar "safe harbor"?

They have the e-commerce directive’s safe harbors, which are functionally different from CDA 230 and applied differently, especially in Germany, which (again, I remind you) is what this story is about.

In sum, blaming stuff in Germany on CDA 230 is silly. Your comment is also silly, but you know that already, don’t you?

Christenson says:

Re: Re:

Or just maybe, the correlation reflects a single underlying confounder, like, (looks shocked)

Being poor and desperate! (and therefore also using a lot of facebook and also eating more nutella than they should)

Meanwhile, <sarc> cue the mainstream moral panic of the day ! Facebook! (or was that Twitter?) </sarc>

Anonymous Anonymous Coward (profile) says:

Arguments for open peer reviews:

and quantity and quality of peer reviews.

When I read the headline in the Crystal Ball, I severely wanted to write a sarcastic comment about getting someone to write an algorithm that would block all ‘dubious studies’. Of course, such an algorithm could not exist, or would not actually work, just as algorithms that detect copyright infringement exist, but don’t actually work.

So the question comes up: how do we detect and flag dubious studies? One method is pointed out in the article: this study was not peer reviewed. But that points to yet another problem: peer review can be gamed by study/paper publishers. They want to publish (and charge anyone using that paper) more, not publish more quality. So peer review is an indicator, but more investigation needs to be done. Who were the reviewers, how were they recruited, and by whom?

Then we should take a close look at how many peer reviews took place. Then we should look at who those reviewers were, and what prejudices there might be. Are they a competitor just being obstinate? Do they have a different theory they want promoted? Could both be right, with more study needed to resolve the differences and come up with a more appropriate conclusion?

In the end, before ‘studies’ or ‘papers’ are taken as qualified, they should be subjected to a quantifiable and qualifiable set of tests that determine how much regard the study deserves. Without such a ‘Study Reliability Score’ (SRS, for those that insist upon acronyms), papers and studies should not be taken for granted.

One problem comes up: currently, paper publishers don’t pay for anything except printing, so we aren’t going to get an SRS from them, but there would be a cost to obtaining one. How do we go about that?

After all that, there is the issue of getting those in power (Congress, government in general, etc.) to use only SRS approved studies.

Anonymous Coward says:

Re: Arguments for open peer reviews:

Then we should take a close look at how many peer reviews took place.

This isn’t really a good metric: in most scientific fields, the number of peer reviews depends almost entirely on the number of journals a paper was submitted to before being accepted. This number is lowest for studies which are 1) likely to be contenders for a Nobel/Lasker/etc. prize at some point (i.e., very significant advancements), 2) well-performed studies with limited scope or impact outside of their field, or 3) complete garbage. This number is highest for scientifically significant, well-researched studies with cross-field impact.

So if we use higher numbers of peer reviews to do this, we’ll miss the most significant studies, and the ones most relevant to specific departmental and regulatory questions. If we use lower numbers of peer reviews, we’ll miss the ones most important to broader questions of overarching government policy.
