DailyDirt: Correlation Is Not Causation
from the urls-we-dig-up dept
Big data is a term that’s been getting some buzz as the next thing that’s going to change everyone’s lives (for better or worse, depending on how you look at it). Having a lot of data doesn’t necessarily mean you also have a lot of useful knowledge. Garbage in, garbage out, so they say. And making correlations is easy compared to finding a direct causal relationship. However, that hasn’t stopped (so-called) journalists from writing misleading headlines. If you hate correlations being mistaken for causation, submit examples you’ve seen in the comments below. Here are just a few to start off.
- Likes for curly fries on Facebook might correlate with high IQ scores, but don’t click that like button just yet. Maybe there are more social experiments being performed on Facebook users than can be accurately counted. [url]
- Former high school athletes seem to get higher paying jobs (at least for the self-reporting men in this study). A lot of skills correlate with various forms of success. Perhaps enjoying the things you do (learned skill or not) is a reward unto itself. [url]
- Measuring the size of brain features can correlate with all kinds of activities, and people have been trying to measure brain sizes for a long time… because there are instruments that can measure the size of various brain parts. The interpretation of these measurements can lead to a lot of faulty conclusions. However, you won’t often see the headline: “Watching moderate amounts of porn won’t hurt your brain.” [url]
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: big data, brain, causation, correlation, gigo, iq, journalism, pet peeves, success
Comments on “DailyDirt: Correlation Is Not Causation”
“You may be interested to know that global warming, earthquakes, hurricanes, and other natural disasters are a direct effect of the shrinking numbers of Pirates since the 1800s.”
Post doesn’t count if you weren’t wearing your colander while writing it. Can you please supply evidence?
Re: Re: Re:
Sorry – I was in full Pirate Regalia
I got one!
People claiming that the MMR vaccine causes autism just because the vaccine is given around the time that late-onset autism first appears.
Go to news.google.com and click the health section. I guarantee you’ll find correlation articles. I just did, and apparently dark chocolate (not milk chocolate) helps people with peripheral artery disease walk. Never mind that they tested only 20 people, or that the change was only 11%.
If the study controls for reasonable factors and the 20 subjects were validly random, then it could be legit – at least as an initial study.
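To get a feel for why 20 subjects is shaky ground, here’s a back-of-the-envelope sketch. The numbers are assumptions, not figures from the chocolate study: treat the reported 11% improvement as a difference in group means, and assume the outcome varies between subjects with a relative standard deviation of about 30% (plausible for something like walking distance, but hypothetical here).

```python
import math

# Hypothetical numbers: how much could sampling noise alone move a
# 20-person result? Assume ~30% subject-to-subject variability
# (an assumption, not a figure from the study).
n = 20
relative_sd = 0.30                   # assumed relative standard deviation
se = relative_sd / math.sqrt(n)      # standard error of the group mean, ~6.7%
effect = 0.11                        # the reported 11% change
z = effect / se                      # ~1.6 standard errors above zero
print(f"standard error ~{se:.1%}, effect is {z:.1f} SE from zero")
```

An effect only ~1.6 standard errors from zero is borderline: plausible, but exactly the kind of result a follow-up study with more subjects often fails to reproduce.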
The more subjects a study needs to prove a point, the less you should trust the results. Psychiatric drug field studies routinely twiddle the numbers to get the results they want: they combine samples from beneficial (lucky?) studies with cohorts that don’t exhibit any positive response, so that on average, patients across the two cohorts show a positive response!
Citation: http://www.youtube.com/watch?v=A3YB59EKMKw I think. I’m at work now so can’t confirm, but I’m pretty sure that’s the one.
Re: Re: Re:
“The more subjects a study needs to prove a point, the less you should trust the results.”
“Needs to prove a point”, implies this is not science, but rather a marketing ploy.
In a well constructed experiment or “study”, as the sample size increases so does the precision.
Re: Re: Re: Re:
Well, no. In a well constructed experiment, precision will remain constant regardless of the sample size, but the resolution of the findings may be different.
e.g., assuming correct randomisation and good controls across all sample sizes, a study of 20 subjects with no negative outcomes means you can confidently state a rate of “less than 1 in 20”. Take it up to 20,000 subjects, you might find 100 negative outcomes relative to control, meaning you can refine your rate of
Re: Re: Re:2 Re:
ate my comment!
meaning you can refine your rate of
Re: Re: Re:2 Re:
meaning you can refine your rate of less than 0.05 to less than 0.01 (or lower? My statistics-fu is weak)
Of course, neither study proves that the rate across the entire population isn’t really 0.5, but that’s what randomisation is supposed to (try to) address. Alternatively, even if the rate across the study population is accurate, it can be difficult to determine if a particular person might fit into that population or not.
Re: Re: Re:2 Re:
“precision will remain constant regardless of the sample size, but the resolution of the findings may be different”
– This is incorrect. You assume the sample size exceeds the number of possible unique results. When that is not the case, increased resolution only provides more detail about an incomplete data set.
“assuming correct randomisation”
– This is an attempt at simplifying the problem, as clearly there is no such thing as “correct randomisation”
So What If Correlation Is Not Causation?
How does that cause you to conclude anything?
1) An increase of global surveillance since 9/11 by the NSA correlates with a decrease in terrorist attacks killing more than 2,500.
Conclusion: surveillance works, so we should do more.
2) An increase of global surveillance since 9/11 by the NSA correlates with an increase in global terrorist activity.
Conclusion: surveillance would work if we could do more.
Of course, #2’s predicate might actually involve legitimate causation…