Some research turns out to be wrong in every scientific field, but how big a problem is flawed research in psychology? The Reproducibility Project aims to find out, by systematically trying to replicate studies published in three prominent psychology journals.
Brian Nosek, a psychology professor, leads a project that brings together researchers in a large-scale, open collaboration. To date, 72 researchers from around the world have signed up, and while the project may not be able to tackle every study published in the three journals in 2008, it will certainly end up with a very large sample of replications.
All this might make some authors rather nervous. Even so, the Reproducibility Project will actively seek input from the original authors to ensure that replication attempts stay as close to the original experiments as possible. And if a result doesn't replicate, that doesn't necessarily mean the original was wrong: either study could be flawed, or the discrepancy could simply be chance variation.
Nonetheless, the overall replication rate - how many of all the attempted replications are successful - should give an indication of the overall reliability of peer-reviewed psychological research. If it's under half - and that's exactly what research guru John Ioannidis argues is the norm - then psychology will know it has a very big problem.
Will other areas of science be bold enough to ask the same questions of their research?