DailyDirt: Correlation Is Not Causation

from the urls-we-dig-up dept

Big data is a term that’s been getting some buzz as the next thing that’s going to change everyone’s lives (for better or worse, depending on how you look at it). Having a lot of data doesn’t necessarily mean you also have a lot of useful knowledge. Garbage in, garbage out, so they say. And spotting correlations is easy compared to establishing a direct causal relationship. However, that hasn’t stopped (so-called) journalists from writing misleading headlines. If you hate correlations being mistaken for causation, submit examples you’ve seen in the comments below. Here are just a few to start off.

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.



Comments on “DailyDirt: Correlation Is Not Causation”

14 Comments
Anonymous Coward says:

Re: Re:

If the study controls for reasonable factors and the 20 subjects were validly random, then it could be legit – at least as an initial study.

The more subjects a study needs to prove a point, the less you should trust the results. Psychiatric drug field studies routinely twiddle the numbers to get the results they want by combining samples from beneficial (lucky?) studies with cohorts that don’t exhibit any positive response, so that on average, patients across the two cohorts show a positive response!

Citation: http://www.youtube.com/watch?v=A3YB59EKMKw I think. I’m at work now so can’t confirm, but I’m pretty sure that’s the one.

Anonymous Coward says:

Re: Re: Re:

“The more subjects a study needs to prove a point, the less you should trust the results.”

“Needs to prove a point” implies this is not science, but rather a marketing ploy.

In a well-constructed experiment or “study”, as the sample size increases, so does the precision.
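
As a rough sketch of that point (purely illustrative numbers, not from any real study): the usual sense in which "precision" grows with sample size is that the spread of a sample mean around the true value shrinks roughly as 1/sqrt(n).

import random, statistics

random.seed(0)

def sample_mean_spread(n, trials=2000):
    # Standard deviation of the sample mean across many repeated samples of size n,
    # drawn from a standard normal population (a stand-in for any measured quantity).
    means = [statistics.mean(random.gauss(0, 1) for _ in range(n)) for _ in range(trials)]
    return statistics.stdev(means)

for n in (20, 200, 2000):
    print(n, round(sample_mean_spread(n), 3))
# Spread falls roughly as 1/sqrt(n): about 0.22, 0.07, 0.02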

Anonymous Coward says:

Re: Re: Re: Re:

“Needs to prove a point” implies this is not science, but rather a marketing ploy.

Well, yes.

In a well-constructed experiment or “study”, as the sample size increases, so does the precision.

Well, no. In a well-constructed experiment, precision will remain constant regardless of the sample size, but the resolution of the findings may be different.

E.g., assuming correct randomisation and good controls across all sample sizes, a study of 20 subjects with no negative outcomes means you can confidently state a rate of “less than 1 in 20”. Take it up to 20,000 subjects and you might find 100 negative outcomes relative to control, meaning you can refine your rate of

Anonymous Coward says:

Re: Re: Re:2 Re:

take 2…

meaning you can refine your rate of less than 0.05 to less than 0.01 (or lower? My statistics-fu is weak)

Of course, neither study proves that the rate across the entire population isn’t really 0.5, but that’s what randomisation is supposed to (try to) address. Alternatively, even if the rate across the study population is accurate, it can be difficult to determine if a particular person might fit into that population or not.
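
For a concrete version of that 20-vs-20,000 hypothetical, a one-sided 95% upper bound on the event rate can be worked out directly from the binomial distribution (a Clopper-Pearson-style bound). The numbers here are the ones assumed in the comment above, not real data; the point is only how much the bound tightens as the sample grows.

from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_bound(k, n, alpha=0.05):
    # One-sided upper confidence bound on the event rate: the largest p for which
    # observing k or fewer events is still plausible at level alpha, found by
    # binary search on the exact binomial CDF.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) >= alpha:
            lo = mid
        else:
            hi = mid
    return lo

print(round(upper_bound(0, 20), 3))       # ~0.139  (0 events in 20 subjects)
print(round(upper_bound(100, 20000), 4))  # ~0.0059 (100 events in 20,000 subjects)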

Anonymous Coward says:

Re: Re: Re:2 Re:

“precision will remain constant regardless of the sample size, but the resolution of the findings may be different”
– This is incorrect. It assumes the sample size exceeds the number of possible unique results; when that is not the case, increased resolution only provides more detail about an incomplete data set.

“assuming correct randomisation”
– This is an attempt at simplifying the problem, as clearly there is no such thing as “correct randomization”.

Anonymous Coward says:

1) An increase in global surveillance since 9/11 by the NSA correlates with a decrease in terrorist attacks killing more than 2,500 people.
Conclusion: surveillance works, so we should do more.

2) An increase in global surveillance since 9/11 by the NSA correlates with an increase in global terrorist activity.
Conclusion: surveillance would work if we could do more.

Of course, #2’s predicate might actually involve legitimate causation…
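
Both read like the post's point in miniature: two quantities that merely trend in the same direction over a period will correlate strongly even when there's no causal link at all. A toy sketch with made-up series (not real surveillance or terrorism data):

import random, statistics

random.seed(1)

# Two made-up series that both drift upward over 15 "years" but are otherwise
# independent -- hypothetical stand-ins for, say, surveillance spending and
# some measure of terrorist activity.
surveillance = [t + random.gauss(0, 2) for t in range(15)]
activity     = [t + random.gauss(0, 2) for t in range(15)]

print(round(statistics.correlation(surveillance, activity), 2))
# Typically prints a strong positive correlation, despite no causal connection.
# (statistics.correlation requires Python 3.10+.)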
