Does Being More Vocal In Video Game Violence Debate Mean You Have The Better Argument?
from the quantity-vs.-quality? dept
A few folks sent over the news of some really bizarre research done by Brad Bushman and Craig Anderson on the question of whether violent video games harm teens. First of all, the research is already somewhat suspect, in that Anderson has a long history of claiming that violent video games must harm children based on questionable data. This new report is based on such questionable and loopy methodology, you almost wonder why they even bothered.
What they did was take the amici briefs from the Supreme Court case concerning California’s anti-violent video game law, and run some numbers on who wrote the briefs and how many “published” studies they had. And that was how they determined which one was more credible. I’m not kidding. Quantity over quality:
The researchers analyzed the credentials of the 115 people who signed the Gruel brief, who believe video violence is harmful, and the 82 signers of the Millett brief, who believe video violence is not harmful. (The briefs are named after the lead attorneys for each side.)
The data for the study came from the PsycINFO database, which provides more than 3 million references to the psychological literature from the 1800s to the present, including peer-reviewed journal articles, book chapters or essays, and books.
For each of the signers of the two briefs, the researchers calculated how many articles and books they published on issues relating to violence and aggression in general and on media violence specifically.
The results showed that 60 percent of the Gruel brief signers (who believe video game violence is harmful) have published at least one scientific study on aggression or violence in general, compared to only 17 percent of the Millett brief signers.
Moreover, when the researchers looked specifically at the subject of media violence, 37 percent of Gruel brief signers have published at least one study in that area, compared to just 13 percent of the Millett brief signers.
And they claim that this is “a very objective approach.” It’s also a profoundly meaningless approach. In case you didn’t follow it, there were a ton of amici briefs filed by various parties in this case. This study picked just two of them. The first one (pdf) was filed by California State Senator Leland Yee (who, I believe, may have written the legislation in question), the California Chapter of the American Academy of Pediatrics and the California Psychological Association. That brief supports California’s position in the case. The second one (pdf) is a brief from “social scientists, medical scientists and media effects scholars,” which supports the other side, arguing that the law isn’t constitutional. You can read the two briefs I linked to above and judge their relative merits for yourself.
But that’s not what Bushman and Anderson did. They simply took the signers of each brief and measured how many of them have published studies on this specific question. Of course, that’s a meaningless and arbitrary number, especially when presented in percentages. Based on this methodology, it would mean that if only one person signed the amicus brief, but had published research, then that one would clearly be the most credible, since 100% of the signers would have published. Obviously, that makes no sense.
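To make the denominator problem concrete, here’s a quick sketch (with hypothetical numbers, not figures from the study) of how a “percentage of signers who have published” metric rewards tiny briefs:

```python
# Illustration (hypothetical numbers) of why "percent of signers who have
# published" is a poor credibility metric: a one-person brief whose lone
# signer has published scores a perfect 100%, "beating" a larger brief
# that contains far more published researchers in absolute terms.
def pct_published(signers_with_publications, total_signers):
    """Percentage of a brief's signers with at least one publication."""
    return 100.0 * signers_with_publications / total_signers

# Hypothetical brief A: 1 signer, who has published.
# Hypothetical brief B: 100 signers, 60 of whom have published.
print(pct_published(1, 1))     # 100.0 -- "most credible" by this metric
print(pct_published(60, 100))  # 60.0  -- despite 60x more published signers
```

The metric says nothing about the total expertise behind a brief, only about the ratio, which is exactly the flaw described above.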
Now, Bushman and Anderson — clearly anticipating that the quantity-over-quality issue would make such a ridiculous study easy to mock — also added a second element to try to show “quality” as well:
In a further analysis, Bushman and Anderson examined where the signers of both briefs have published their research. The best academic journals have the highest standards and the most rigorous peer review, so only the best research should be published there, Bushman said.
The researchers used a well-established formula, called the impact factor, to determine the top-tier journals, and then calculated how many signers had published in these journals.
Results showed that signers of the Gruel brief had published over 48 times more studies in top-tier journals than did those who signed the Millett brief.
But, again, this attempt at showing “quality” is really a “quantity” study in disguise. It’s not looking at the actual credibility of any of the studies, but trying to create an aggregate (but meaningless) number. And, again, the entire basis of this result is a meaningless dataset. I’m really wondering who would possibly read this and think that the results are credible.
Oh yeah, and one final point. Guess which two academics signed that first brief? You guessed it: Craig Anderson and Brad Bushman. Talk about researcher objectivity, huh? They created a bogus methodology to try to “prove” that the brief they signed is more credible than someone else’s brief. Honestly, when they present methodology like this, it mostly serves to raise questions about their methodology in every other study as well. They’ve made it clear that they’re not researching the truth. They’re starting with an established position and trying to figure out ways to present evidence to support it. That’s not science.