Keep the following in mind as you listen to the Experts, Talking Heads, Politicians, and the Media.
A statistical hypothesis test of a drug’s efficacy with an insignificant result, i.e., one that does not reject the null hypothesis that the drug has no beneficial effect, does not imply that the drug has no beneficial effect. In fact, any observed beneficial effect makes it more likely than not that the drug does have a beneficial effect (principle of maximum likelihood).
A statistical hypothesis test of a drug’s efficacy with a highly significant result, i.e., one that strongly rejects the null hypothesis that the drug has no beneficial effect, does not imply that the drug has a worthwhile beneficial effect. In fact, it may have no beneficial effect, a harmful effect, or a negligible beneficial effect.
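A small simulation makes the last point concrete. The trial below is entirely hypothetical: the drug’s true benefit is negligible (it raises the recovery rate from 50.0% to 50.5%), but with a million patients per arm a two-sample z-test still rejects the null hypothesis overwhelmingly.

```python
import math
import random

random.seed(0)

# Hypothetical two-arm trial with a negligible true benefit but a huge sample.
n = 1_000_000
control = sum(random.random() < 0.500 for _ in range(n))  # recoveries on placebo
treated = sum(random.random() < 0.505 for _ in range(n))  # recoveries on drug

p1, p2 = control / n, treated / n
pooled = (control + treated) / (2 * n)
se = math.sqrt(2 * pooled * (1 - pooled) / n)  # pooled standard error
z = (p2 - p1) / se
p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal tail

print(f"observed effect: {p2 - p1:+.4f}")      # clinically negligible
print(f"z = {z:.2f}, p = {p_value:.2e}")       # yet highly significant
```

The p-value is astronomically small, yet no patient would notice a half-percentage-point change in recovery rate.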
If a drug’s hypothesis test shows a statistically insignificant but positive (beneficial) effect, then it is fair to view it as more likely than not that the drug has a beneficial effect (principle of maximum likelihood).
Which is more believable: a large-sample statistical test with statistical size x% that rejects the null hypothesis, or a small-sample statistical test with statistical size x% that rejects the null hypothesis? Answer: they are equally believable. The large-sample test has more power, but power is irrelevant under the null hypothesis, where the probability of rejection (the size) is identical for both tests.
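A quick simulation shows the point about size. Under the null hypothesis (here, standard-normal data with true mean zero), a z-test at the 5% level rejects about 5% of the time whether the sample is small or large; sample sizes and trial counts below are arbitrary choices for the sketch.

```python
import math
import random

random.seed(1)

def reject_rate(n, trials=2_000):
    """Fraction of simulated trials in which a two-sided 5% z-test of
    H0: mean = 0 rejects, when H0 is in fact true (known variance 1)."""
    z_crit = 1.96
    rejections = 0
    for _ in range(trials):
        xs = [random.gauss(0, 1) for _ in range(n)]
        z = (sum(xs) / n) / (1 / math.sqrt(n))  # sample mean / its std error
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

# Under the null, both tests reject at the same ~5% rate.
print(f"small sample (n=10):  {reject_rate(10):.3f}")
print(f"large sample (n=500): {reject_rate(500):.3f}")
```

Both rejection rates hover around 0.05: when the null is true, the extra power of the large-sample test changes nothing.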
Statistical significance is not the same as clinical significance, i.e., the magnitude of the beneficial effect.
Standard hypothesis tests do not take into account a priori information. I have not run across a standard statistical test of the direction an object will “fall” (up or down) when released from the hand. Nevertheless, I am confident that it will fall down. Waiting for a formal statistical test can cost lives.