
True or false? Roadtesting psychology research

Blog entry posted by Simon, Nov 11, 2012.

In every scientific field some research turns out to be wrong, but how big a problem is flawed research in psychology? The Reproducibility Project aims to find out by systematically trying to replicate studies published in three prominent psychology journals.

Brian Nosek, a psychology professor, leads the project, which brings together researchers in a large-scale, open collaboration. To date, 72 researchers from around the world have signed up, and while the project may not be able to tackle every study published in the three journals in 2008, it will certainly have a very large sample of replications.

All this might make some authors rather nervous. Even so, the Reproducibility Project will actively seek input from the original authors to ensure that replication attempts are as close to the original experiments as possible. And if a result doesn't replicate, it doesn't necessarily mean the original was wrong: either study could be wrong, or the discrepancy could be chance variation.

Nonetheless, the overall replication rate - how many of all the attempted replications are successful - should give an indication of the overall reliability of peer-reviewed psychological research. If it's under half - and that's exactly what research guru John Ioannidis argues is the norm - then psychology will know it has a very big problem.

Will other areas of science be bold enough to ask the same questions of their research?

About the Author

Simon McGrath has been ill too long and read far too many crummy research papers.
  1. Simon
    @Little Bluestem
    Actually, a new paper from John Ioannidis estimates (from modelling) that a 'perpetuated fallacy', where an incorrect finding is 'confirmed' by an equally flawed replication, is more likely than proper 'self-correction', where the replication correctly fails to repeat the original false positive. However, I'm not sure what assumptions were used in this modelling.
    http://pps.sagepub.com/content/7/6/645.full

    "Perpetuated fallacy occurs when both the discovery and the replication are wrong (e.g., because the same errors or biases (or different ones) distort the results)."
  2. Simon
    @Little Bluestem
    "Ensuring that the replication attempts are as close to the original experiment as possible may simply replicate errors in the original experiment and thus replicate erroneous conclusions."

    That's true, and something the study's organisers acknowledge.

    However, many of the ways of massaging an experiment to get a positive result don't depend on the fundamental design of the study but on the way it's tweaked and analysed. So, for instance, a researcher might peek at the data and, if the results aren't quite significant, add a few more subjects until they are. Or they might analyse the data in several different ways until, by chance, one analysis gives a significant result. In a replication there is no such wriggle room: all the choices about the number of subjects and how to analyse the data have already been fixed by the original study. If the original positive result was just a chance outcome found by rummaging in the data, it is extremely unlikely to repeat in an exact replication.

    Here's a really good example of how much difference such rummaging can make to results:
    http://forums.phoenixrising.me/index.php?threads/science-is-broken-fixing-fiddling-and-fraud-in-research.20190/

    "A Beatles' song can literally make you younger!
    The same three psychologists published an eye-catching study last year showing how listening to the song "When I'm Sixty-four" can actually make you younger. Obviously this is absurd, but that was the authors' point. They showed that with enough flexibility in how the study is conducted, and how data is analysed, it's almost inevitable that even some absurd results will be statistically significant."
  3. Little Bluestem
    Ensuring that the replication attempts are as close to the original experiment as possible may simply replicate errors in the original experiment and thus replicate erroneous conclusions.