Discussion in 'Other Health News and Research' started by Dolphin, May 3, 2015.
Even if a finding is replicated, it doesn't mean the interpretation is correct.
For example, in the PACE trial, if a person answered "yes" when asked whether they feared a worsening of symptoms following exertion, they were labelled as having exercise phobia, because that is the authors' interpretation.
There are many reasons why someone might answer this question with a yes, and discarding all other possible explanations in favour of a single one is a leap of faith, not science.
I thought this looked pretty interesting, and hats off to psychologist Brian Nosek of the Center for Open Science for setting up this study. It's not as if psychology is the only field with a problem: there have been failed replication attempts in Cancer Biology and Drug Discovery, and there are precious few replications amongst the hundreds of biomedical ME/CFS findings.
On to the results of the replication attempts of 100 different studies published in 3 different psychology journals in 2008:
39% of findings were replicated
Up to 63% if you move the goal posts a bit, though that's still hardly compelling:
Broadly, the results support John Ioannidis's claim that "Most Published Research Findings Are False".
Anyway, these results are provisional: a paper is currently under review at the prestigious journal Science.
The 'near misses':
I think there might be something in this*. One possible interpretation is that the effect is real, but so small it is hard to detect, so some studies find the pattern without reaching significance. So it may be that the result first reported is 'real'; but not worth getting excited about because it's not a big deal - which again is useful information.
Above all, work like this (Reproducibility Project: Psychology) might help clean up the literature so that researchers can focus on findings that are both real and big enough to bother with.
*on the other hand, most researchers would assume the published result meant that if they tried to replicate the study they would get a significant result again - and that didn't happen.
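The 'small but real effect' interpretation above can be illustrated with a quick simulation (a hypothetical sketch, not from the article): when the true effect is small, a faithful replication with a typical sample size will often fail to reach p < 0.05, even though the original finding was 'real'. The sample sizes and effect values below are illustrative assumptions.

```python
import random
import statistics

random.seed(42)

def replication_success_rate(true_effect, n, trials=2000):
    """Fraction of simulated two-group studies (n per group) whose
    t-statistic exceeds ~1.96, given a true standardized mean difference."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(true_effect, 1.0) for _ in range(n)]
        se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
        t = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(t) > 1.96:
            hits += 1
    return hits / trials

# A small true effect is detected only a minority of the time,
# while a large one is detected almost always (n = 50 per group).
power_small = replication_success_rate(true_effect=0.2, n=50)
power_large = replication_success_rate(true_effect=0.8, n=50)
print(power_small, power_large)
```

So a genuine but tiny effect would produce exactly the 'near miss' pattern described: replications that show the trend without reaching significance.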
Original article: First results from psychology’s largest reproducibility test : Nature News & Comment
This has now been published in Science, with a readable report about it in Nature:
Over half of psychology studies fail reproducibility test : Nature News & Comment
The first hard evidence to support John Ioannidis's claim that "Most Published Research Findings Are False".
Crucially, this isn't a one-off study, but a systematic replication of 100 studies drawn from 3 different journals
Whereas 97% of the original studies found a significant effect, only 36% of replication studies found significant results.
The team also found that the average size of the effects found in the replicated studies was only half that reported in the original studies. "The mean effect size (r) of the replication effects (Mr = 0.197) was half the magnitude of the mean effect size of the original effects (Mr = 0.403)". An effect size of 0.2 or less is usually regarded as trivial.
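To put those numbers in perspective, the standard conversion from a correlation effect size r to shared variance is r², so the halving of r roughly quarters the variance explained. A quick arithmetic check, using the means quoted above:

```python
# Standard r-to-variance-explained conversion: an effect size r
# corresponds to r**2 of shared variance between the two variables.
r_original = 0.403      # mean effect size in the original studies
r_replication = 0.197   # mean effect size in the replications

var_original = r_original ** 2       # ~0.16 -> ~16% of variance explained
var_replication = r_replication ** 2 # ~0.04 -> ~4% of variance explained
print(round(var_original, 3), round(var_replication, 3))
```

In other words, the replicated effects account for only about a quarter of the variance that the originals claimed, which is why an r of 0.2 or less is usually regarded as trivial.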
Commenting on the quality of this new replication study, Andrew Gelman, a statistician at Columbia University, said: "This is empirical evidence, not a theoretical argument. The value of this project is that hopefully people will be less confident about their claims."
Although this study, led by psychologist Brian Nosek, looks at psychological research, he believes that other scientific fields are likely to have much in common with psychology (other fields are known to have problems with replication too; e.g. only 6 of 53 promising cancer drug findings replicated). Ioannidis would no doubt agree.
LOVE the title of this article: