You've probably heard that something is amiss with life science research and that many scientists are beginning to question how robust the scientific literature really is, e.g. 'Why Most Published Research Findings Are False'.
One big reason that findings appear real but are false is the 'gold standard' for assessing research, the p value - usually if p<0.05 the tested hypothesis is treated as true with 95% confidence. Hurrah!
Except that's not the case. Read on, as this is all based on sound maths, not some weird theory.
This recent blog explains how even with p<0.05, a hypothesis might still be more likely to be wrong than right, especially if that finding is surprising. Given that research journals like to publish unexpected, surprising findings, we should be worried:
Daniel Lakens: Prior probabilities and replicating 'surprising and unexpected' effects
Using Bayesian probability (see below re betting on horses), Lakens shows that if the new, tested hypothesis (your shiny new idea) and the null hypothesis (nothing doing) are equally likely before the study is done, then with a result of p=0.04 (success), the chance of the new hypothesis really being right is still only 73% (not 96%).
Let's say the result is of the surprising sort that journals like to trumpet. If it's surprising it must be unlikely, perhaps very unlikely, but let's say for simplicity the new hypothesis is 25% likely (before the evidence from the experiment), while the null hypothesis is 75% likely. Obviously you can never compute an exact prior probability of being right, but we know it must be under 50% or the finding isn't surprising.
In this scenario, if the new hypothesis is only 25% likely before the study, and the result is p=0.04 as before, then the probability the new hypothesis is really right is still only 49% (at best) - slightly more likely to be wrong than right.
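If you want to check the arithmetic, here's a minimal Python sketch. It assumes (my reading, not spelled out above) that the figures come from the Sellke-Bayarri-Berger upper bound on the Bayes factor, BF10 <= 1/(-e * p * ln p); the function name is just for illustration, but plugging in the priors above reproduces roughly 74% and 49%.

```python
import math

def posterior_upper_bound(p_value, prior_h1):
    """Best-case probability that the new hypothesis is true after seeing a p-value.

    Assumes the Sellke-Bayarri-Berger bound on the Bayes factor in favour of H1:
    BF10 <= 1 / (-e * p * ln(p)), valid for p < 1/e. This is the most generous
    reading of the evidence, so the real posterior is usually lower.
    """
    if not (0 < p_value < 1 / math.e):
        raise ValueError("bound only holds for p-values below 1/e (~0.37)")
    bf10 = 1.0 / (-math.e * p_value * math.log(p_value))  # max evidence for H1
    prior_odds = prior_h1 / (1 - prior_h1)                # e.g. 0.5 -> odds of 1:1
    posterior_odds = bf10 * prior_odds                    # Bayes' rule, odds form
    return posterior_odds / (1 + posterior_odds)          # back to a probability

print(posterior_upper_bound(0.04, 0.50))  # ~0.74 - the "equally likely" case
print(posterior_upper_bound(0.04, 0.25))  # ~0.49 - the "surprising finding" case
```

The exact numbers depend on how the evidence is modelled, but the shape of the result is the same: a low prior drags the posterior down, whatever the p-value says.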
Read the blog to find out more, or read on below for a dummy's guide (this dummy) to Bayesian probability.
Bayesian Probability
The good news is that although Bayesian probability sounds scary, it's not that hard to grasp, especially if you like to bet on horses. This is the basic idea:
Take the example of betting on a two-horse race, Dogmeat vs Fleetfoot. How would you bet if you knew that Fleetfoot had won 7 out of its 12 previous races? Easy - put your money on Fleetfoot. But what if you also knew that Dogmeat had won 3 out of 4 races run in the rain, and it's raining now? Based on past performance in the rain, Dogmeat has a 75% chance of winning, so if it's raining you should bet on Dogmeat, not Fleetfoot.
That is why Bayesian probability matters - it looks at conditional probability, in this case who wins IF it's raining. In research, the question is how likely the finding is to be true given the evidence, which depends on how likely it was to be true before the experiment.
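Here's a toy sketch of that conditional probability, with invented race records that match the numbers above (the split of the wins in dry races is my guess):

```python
# Toy race records (invented for illustration): each entry is (winner, was_it_raining)
races = (
    [("Dogmeat", True)] * 3 + [("Fleetfoot", True)] * 1      # 4 rainy races
    + [("Dogmeat", False)] * 2 + [("Fleetfoot", False)] * 6  # 8 dry races
)

def p_wins_given_rain(horse):
    """P(horse wins | raining) = rainy-day wins / rainy-day races."""
    rainy_winners = [winner for (winner, rain) in races if rain]
    return rainy_winners.count(horse) / len(rainy_winners)

print(p_wins_given_rain("Dogmeat"))    # 0.75 - bet on Dogmeat when it rains
print(p_wins_given_rain("Fleetfoot"))  # 0.25 - despite winning 7 of 12 overall
```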
Of course, the precise probability of a hypothesis being right before the study is unknown, but if the finding is surprising we know that probability must be less than 50%. And if that's the case, p=0.04 does NOT mean a 4% chance of being wrong/96% chance of being right.
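To see just how much the prior matters, the same (assumed) best-case bound from the sketch above can be swept across a range of priors for a fixed p=0.04:

```python
import math

# Best-case Bayes factor for p = 0.04 under the assumed Sellke-Bayarri-Berger bound
bf10 = 1.0 / (-math.e * 0.04 * math.log(0.04))  # ~2.86

for prior in (0.05, 0.10, 0.25, 0.50, 0.75):
    odds = bf10 * prior / (1 - prior)           # posterior odds = BF x prior odds
    print(f"prior {prior:.2f} -> best-case posterior {odds / (1 + odds):.2f}")

# prior 0.05 -> best-case posterior 0.13
# prior 0.10 -> best-case posterior 0.24
# prior 0.25 -> best-case posterior 0.49
# prior 0.50 -> best-case posterior 0.74
# prior 0.75 -> best-case posterior 0.90
```

The more surprising the hypothesis (the lower the prior), the less a 'significant' p-value actually tells you.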