
Wednesday, October 24, 2012

Significant Other


What’s significant in your life?

Regular person: my partner, my job, my friends, etc.

Scientist: hopefully my data

In everyday use, significance means that something is important and meaningful.

For scientists, significance has a very specific meaning, known as statistical significance (http://en.wikipedia.org/wiki/Statistical_significance).

Basically, if I have a result, the question is: how likely is it that I would see a result like this purely by chance, versus because of some real underlying factor? If chance alone is unlikely to explain the result, then the result is considered significant.



[Image: Sympathy Card for a Scientist]

For this post, our example is going to be the lovely topic of food poisoning.

If I go to a restaurant and get food poisoning once, is this significant? Maybe, but it could just be one bad apple, a spoiled piece of produce, or something else entirely.

If five people eat there and get sick, this is probably significant.
The food poisoning is probably not happening randomly. Maybe the kitchen is buying rotten food because it’s cheaper, etc.

So the question is: how confident does the health inspector have to be about the food poisoning in order for this to be a significant result?

(Un)Fortunately, there is an entire field called significance testing dedicated to this very question (http://en.wikipedia.org/wiki/Statistical_hypothesis_testing).

The idea is that you set some arbitrary significance level before you run your tests. In this case, the health inspector might say: “I want to be 95% sure that these 5 people did get food poisoning from this restaurant.”

Then the health inspector runs his tests and one of two things happens (a rough sketch of this procedure in code follows below):
1) He's above 95% confident, so the result is significant and the restaurant is shut down.
2) He's below 95% confident, so the restaurant stays open.
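
To make the procedure concrete, here is a minimal Python sketch. Every number in it is made up for illustration: assume 100 people ate at the restaurant, 5 got sick, and a random meal anywhere gives you food poisoning about 1% of the time. The test asks how likely 5 or more sick diners would be if the restaurant were perfectly normal, and then compares that to the inspector's 95% cut-off.

    from math import comb

    # All of these numbers are made up for illustration.
    diners = 100          # people who ate at the restaurant
    sick = 5              # diners who got food poisoning
    baseline_rate = 0.01  # assumed chance of getting sick from any random meal
    confidence = 0.95     # the inspector's chosen significance level

    # Probability of seeing `sick` or more cases purely by chance,
    # if the restaurant is no worse than the baseline (a one-sided binomial tail).
    p_chance = sum(
        comb(diners, k) * baseline_rate**k * (1 - baseline_rate)**(diners - k)
        for k in range(sick, diners + 1)
    )

    print(f"Chance of {sick}+ sick diners if the restaurant is fine: {p_chance:.4f}")

    if p_chance < 1 - confidence:  # less than 5% -> the inspector is "95% confident"
        print("Significant: shut the restaurant down.")
    else:
        print("Not significant: the restaurant stays open.")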

Now, if you are reasonably intelligent, as I assume most of my readers are, then you will have noticed a couple of problems with this procedure.

Issue 1: First of all, where did this number 95% come from? Who decided on that?

For this blog post, I made it up, but common levels in the scientific world are 90%, 95%, and 99%.

If the health inspector tells me the food poisoning isn't significant because he’s only 94% sure the restaurant gave me food poisoning, I’m probably still not going to eat there.

So where do we draw the line?

Unfortunately, there is no easy solution to this problem. Journals and the science community have come up with their own standards, but these numbers are still widely debated.

Personally, I would prefer it if I were given the actual number and the significance level so that I could judge for myself.

However, many articles only report whether a result is significant or not, leaving the actual number hidden away in the Methods section.
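
For what it's worth, here is a tiny sketch of what I mean, continuing the made-up food poisoning numbers: the same reported confidence of 94% clears a 90% bar but not a 95% one, and a reader can only see that if the number itself is given.

    # Continuing the made-up example: suppose the inspector's test says he can be
    # 94% confident the restaurant caused the illness.
    confidence_in_result = 0.94

    # With the actual number in hand, anyone can apply their own cut-off.
    for level in (0.90, 0.95, 0.99):
        verdict = "significant" if confidence_in_result >= level else "not significant"
        print(f"At the {level:.0%} level: {verdict}")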

Issue 2: We can never be 100% sure about a result.

With our food poisoning restaurant, even if the health inspector is 95% sure the restaurant did cause the food poisoning, there is still a 5% chance that the restaurant isn't actually poisoning anyone!
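
One way to see that leftover uncertainty is with a quick simulation, again using the made-up numbers from the earlier sketch: generate thousands of perfectly innocent restaurants and count how many still get flagged as significant at the 95% level. It is never zero (though with whole-number counts of sick diners it usually comes out somewhat below the full 5%).

    from math import comb
    import random

    random.seed(0)

    # Same made-up numbers as the earlier sketch.
    diners = 100
    baseline_rate = 0.01
    alpha = 0.05       # the "95% confident" cut-off
    trials = 10_000

    def chance_of_at_least(sick):
        # Chance of `sick` or more cases if the restaurant is perfectly fine.
        return sum(
            comb(diners, k) * baseline_rate**k * (1 - baseline_rate)**(diners - k)
            for k in range(sick, diners + 1)
        )

    # Simulate thousands of innocent restaurants and apply the same decision rule.
    flagged = 0
    for _ in range(trials):
        sick = sum(random.random() < baseline_rate for _ in range(diners))
        if chance_of_at_least(sick) <= alpha:
            flagged += 1

    print(f"Innocent restaurants flagged as significant: {flagged / trials:.1%}")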

But if your result is not significant, it's extremely difficult to get it published.
Journals have a strong bias towards publishing significant results (as they should in most cases), but maybe non-significant results deserve a place to go?

One such home for these articles is the Journal of Articles in Support of the Null Hypothesis, http://www.jasnh.com/. This journal only publishes results that are not considered significant.


So is significance testing inherently bad?

No. The problems occur when people just rely on what the tests tell them and forget that significance levels come with a number of assumptions and caveats.

To avoid this fate, keep your skepticism handy and draw your own conclusions from the data.


