Simon
Senior Member
Psychologists to the rescue?
A piece in the Guardian newspaper yesterday recaps the widespread failings in scientific research, from outright fraud to flawed data analysis. The authors also set out how the current system encourages bad science, so that novel findings have become more important than finding the truth. They want to see more replication, to keep scientists honest, but they also want a new system that rewards good science over publishing lots of papers.
More on this in a moment, but it's intriguing how psychology research takes centre-stage in these debates, particularly as psychology research has made so many claims about the causes of CFS. Certainly, many of the most famous examples of fraud and dubious practice come from psychology research. But it's also true that many of those pointing the finger, and arguing for higher standards, are themselves psychologists. The Guardian article is a case in point:
Scientific fraud is rife: it's time to stand up for good science
By psychologist Pete Etchells and PhD student Suzi Gage.
They point out that, above all, career progression and "success" depend on publication in high-impact journals. So publications become an end in themselves, rather than a means of making scientific progress. That gives researchers a strong incentive to chase positive results rather than the truth. As the authors put it:

"Science is broken. Psychology was rocked recently by stories of academics making up data, sometimes overshadowing whole careers. And it isn't the only discipline with problems - the current record for fraudulent papers is held by anaesthesiologist Yoshitaka Fujii, with 172 faked articles.
"These scandals highlight deeper cultural problems in academia. Pressure to turn out lots of high-quality publications not only promotes extreme behaviours, it normalises the little things, like the selective publication of positive novel findings – which leads to "non-significant" but possibly true findings sitting unpublished on shelves, and a lack of much needed replication studies.
...
"Problems occur at all levels in the system, and we need to stop stubbornly arguing that "it's not that bad" or that talking about it somehow damages science. The damage has already been done – now we need to start fixing it.
...
"[It] is happening because the entire way that we go about funding, researching and publishing science is flawed."
Pulling no punches, the authors say that too often:
- Journals favour positive results over null [negative] findings, even though null findings from a well-conducted study are just as informative
- Statistical analyses are hard, and sometimes researchers get it wrong
- The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.
"shoddy science and dodgy statistics are accepted for publication by reviewers with inadequate levels of expertise."
Fixing science
The first solution from Etchells and Gage is more replication studies to sift the wheat from the abundant chaff in the literature. Replication keeps science honest.
Their biggest single idea, though, is transparency: "The scientific process must be as open to scrutiny as possible", including pre-registering every study's methodology so that researchers can't just rummage around in the data until they find a 'significant' result. They also want the secretive review process opened up. BioMed Central already do this, so anyone can see whether the review process was a rigorous check or just a cursory one. Others have gone further and argue for openly releasing data, so that other researchers can put the findings to the test.
Etchells and Gage are running a session on academic malpractice at a Nature conference next week, looking for more practical ideas on how to fix science research.
Psychologists at the forefront
Etchells is not the only psychologist trying to raise standards. A recent case of fraud by Dirk Smeesters, a Dutch psychologist, came to light only after another psychologist raised suspicions that his data seemed too good to be true. An investigation confirmed those suspicions, and Smeesters resigned.
21 words to save science
And last month three psychologists published a call for research transparency. They ask all researchers to include a simple, 21-word disclosure: "We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study." In lay terms: "we haven't tortured the data to get this result".
As an aside, one of the three authors is Uri Simonsohn, the psychologist who exposed Smeesters' fraud.
A Beatles song can literally make you younger!
The same three psychologists published an eye-catching study last year showing how listening to the song "When I'm Sixty-Four" can actually make you younger. Obviously this is absurd, but that was the authors' point. They showed that with enough flexibility in how a study is conducted, and how its data are analysed, it's almost inevitable that even absurd results will come out statistically significant.
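To see why flexibility inflates false positives, here is a minimal sketch (not the authors' actual analysis; the study design and names here are illustrative assumptions). Under the null hypothesis each p-value is uniformly distributed, so a researcher who measures five outcomes and reports "significance" if any one of them passes p < .05 will get a false positive far more often than the nominal 5%:

```python
import random

random.seed(42)

def run_flexible_study(n_outcomes=5):
    # Under the null hypothesis (no real effect), every p-value is
    # uniformly distributed on [0, 1].
    p_values = [random.random() for _ in range(n_outcomes)]
    # A "flexible" researcher reports success if ANY outcome crosses p < .05
    return min(p_values) < 0.05

trials = 100_000
false_positive_rate = sum(run_flexible_study() for _ in range(trials)) / trials
print(f"False-positive rate with 5 outcome measures: {false_positive_rate:.1%}")
# Theory predicts 1 - 0.95**5, roughly 22.6%, far above the nominal 5%
```

Real analytic flexibility (optional stopping, dropping conditions, trying covariates) compounds in the same way, which is exactly why pre-registration of outcomes and analyses closes the loophole.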
----------
Will this really make a difference?
I suspect that many psychologists are feeling distinctly uncomfortable about these waves of exposure and exhortations to better, more rigorous research. But this process may become unstoppable.
Last week an article by the scourge of biomedical research, John Ioannidis, appeared in the Journal of Psychosomatic Research, home to many papers finding associations between psychological factors and CFS. Ioannidis' article stated that the recently debunked association between Type D personality and cardiac death could be the tip of the iceberg. Time will tell.
There are many more examples of psychologists trying to raise the bar, and I'll try to post a few more examples when I have the energy.