I thought this looked pretty interesting, and hats off to psychologist Brian Nosek of the Center for Open Science for setting up this study. It's not as if psychology is the only field with a problem: there have been failed replication attempts in Cancer Biology and Drug Discovery, and there are precious few replications amongst the hundreds of biomedical ME/CFS findings.
On to the results of the replication attempts of 100 different studies published in 3 different psychology journals in 2008:
- 39% of findings were replicated
- Up to 63% if you move the goal posts a bit, though that's still hardly compelling:
Of the 61 non-replicated studies, scientists classed 24 as producing findings at least “moderately similar” to those of the original experiments, even though they did not meet pre-established criteria, such as statistical significance, that would count as a successful replication.
Broadly, the results support John Ioannidis's claim that "Most Published Research Findings Are False".
Anyway, these results are provisional: a paper is currently under review at the prestigious journal Science.
The 'near misses':
Rigid, all-or-nothing categories are not useful in such situations, says Greg Hajcak, a clinical psychologist at Stony Brook University in New York.
Hajcak authored one study that could not be reproduced, but for which replicators said they found “extremely similar” results that did not reach statistical significance.
I think there might be something in this*. One possible interpretation is that the effect is real but so small it is hard to detect, so some studies find the pattern without reaching significance. The originally reported result may therefore be 'real', yet too small to get excited about - which is itself useful information.
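A quick simulation makes the point concrete (this is my sketch, not anything from the article; the effect size d = 0.2, sample size n = 30, and the one-sample z-test with known sd are all illustrative assumptions):

```python
# Simulating why a small but real effect often fails to reach p < .05
# on replication. Assumes a one-sample z-test with known sd = 1;
# the effect size and sample size below are illustrative, not from the study.
import random
import math

random.seed(42)

TRUE_EFFECT = 0.2   # small but real standardized effect (assumption)
N = 30              # sample size per replication attempt (assumption)
SIMS = 5000         # number of simulated replication attempts
CRIT = 1.96         # two-sided 5% critical value for a z-test

significant = 0
for _ in range(SIMS):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    z = (sum(sample) / N) * math.sqrt(N)  # z = mean / (sd / sqrt(n)), sd = 1
    if abs(z) > CRIT:
        significant += 1

power = significant / SIMS
print(f"Share of simulated replications reaching p < .05: {power:.0%}")
```

Under these assumptions only a minority of attempts come out significant, even though the effect is genuinely there - most runs show a mean in the right direction that falls short of the threshold, much like the "extremely similar" near misses described above.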
Above all, work like this (Reproducibility Project: Psychology) might help clean up the literature so that researchers can focus on findings that are both real and big enough to bother with.
*On the other hand, most researchers would assume the published result meant that if they tried to replicate the study they would get a significant result again - and that didn't happen.
Original article: First results from psychology’s largest reproducibility test : Nature News & Comment