
Younger generation of psychologists cleaning up science? (Guardian article)

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
The changing face of psychology | Science | theguardian.com

Really interesting piece about problems in psychology and life science generally, and how things are beginning to change.

It starts off by talking about how in 1959 researcher Ted Sterling found that an implausible 97% of papers in 4 major psychology journals reported positive effects. He argued that this was a clear sign of publication bias: the dud results never got published - an argument that was accepted. Yet when he repeated the study in 1995, little had changed. Things, says the article, are beginning to change now:

Now, finally, the tide is turning. A growing number of psychologists – particularly the younger generation – are fed up with results that don’t replicate, journals that value story-telling over truth, and an academic culture in which researchers treat data as their personal property. Psychologists are realising that major scientific advances will require us to stamp out malpractice, face our own weaknesses, and overcome the ego-driven ideals that maintain the status quo

One example of this is replication (which should be a keystone of scientific research):
How it’s changing: The new generation of psychologists understands that independent replication is crucial for real advancement and to earn wider credibility in science. A beautiful example of this drive is the Many Labs project led by Brian Nosek from the University of Virginia. Nosek and a team of 50 colleagues located in 36 labs worldwide sought to replicate 13 key findings in psychology, across a sample of 6,344 participants. Ten of the effects replicated successfully. [note this was a small set of findings, chosen mainly because they were easy to attempt to replicate - basically all of them were tested online in a single study]

Journals are also beginning to respect the importance of replication. The prominent outlet Perspectives on Psychological Science recently launched an initiative that specifically publishes direct replications of previous studies. Meanwhile, journals such as BMC Psychology and PLOS ONE officially disown the requirement for researchers to report novel, positive findings. [ie you can now report replications and negative findings: "our hypothesis was not supported by the evidence"]
read the full piece
 

Esther12

Senior Member
Messages
13,774
I would be so pissed off if I was having to treat patients while relying on the piss poor quality of a lot of psych research, and it has surprised me that there hasn't been more anger from those lower down the pecking order about the way research is spun by 'experts'. Good to see that there's a growing push for change within the system, and that is the impression I've been getting too.

journals that value story-telling over truth

The tolerance for this just amazes me.
 

Snowdrop

Rebel without a biscuit
Messages
2,933
OK I didn't read the whole piece but I did read the excerpt.
Nice to see some potentially good news :)
Thanks for the ray of science sunshine.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
There is nothing new in this article, but it's very nice to see it all in one place. I do think that the main critics of psychology and psychiatry today are indeed psychologists and psychiatrists. It's decades overdue, but very welcome. It's also the case that fixing poor scientific methodology, and poor science publication, is critical if science is to advance. This is not just in psychology though, nor even psychiatry. Many of these arguments apply to medical science, and especially to wanna-be science like economics.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I would be so pissed off if I was having to treat patients while relying on the piss poor quality of a lot of psych research, and it has surprised me that there hasn't been more anger from those lower down the pecking order about the way research is spun by 'experts'.
The anger seems to be (mis)directed toward the patients who don't respond to treatment! The experts can't possibly be wrong so it must be the patients! :confused: (In CFS anyway. I don't think it's so bad in psychiatry in general.)
 

leela

Senior Member
Messages
3,290
great find @Simon. so nice to see psychologists noticing the bad science.

now if they would just publicly face off with the wesselys and whites of the world...
 

Sean

Senior Member
Messages
7,378
Now, finally, the tide is turning. A growing number of psychologists – particularly the younger generation – are fed up with results that don’t replicate, journals that value story-telling over truth, and an academic culture in which researchers treat data as their personal property. Psychologists are realising that major scientific advances will require us to stamp out malpractice, face our own weaknesses, and overcome the ego-driven ideals that maintain the status quo.

Have they been following CFS research?

now if they would just publicly face off with the wesselys and whites of the world...

Well, that will be an interesting face off, seeing as one of those two gentlemen just got elected as the head of the UK psychs' union and has shown not the slightest propensity thus far to back down from his rather persistent, even obsessive, and dodgy claims.
 

Sean

Senior Member
Messages
7,378
Whenever experiments rely on inferences from statistics, researchers can exploit “degrees of freedom” in the analyses to produce desirable outcomes. This might involve trying different ways of removing statistical outliers or the effect of different statistical models, and then only reporting the approach that “worked” best in producing attractive results.

Looks like they have been following CFS research!
 

biophile

Places I'd rather be.
Messages
8,977
journals that value story-telling over truth

Horton of the Lancet whining on radio and in print about the response to PACE and being inundated with "dozens of letters" (diddums) but meanwhile refusing to publish any corrections to errors which he allowed to be published in the first place.
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
5. Limiting researcher “degrees of freedom”

The problem: In psychology, discoveries tend to be statistical. This means that to test a particular hypothesis, say, about motor actions, we might measure the difference in reaction times or response accuracy between two experimental conditions. Because the measurements contain noise (or “unexplained variability”), we rely on statistical tests to provide us with a level of certainty in the outcome. This is different to other sciences where discoveries are more black and white, like finding a new rock layer or observing a supernova.

Whenever experiments rely on inferences from statistics, researchers can exploit “degrees of freedom” in the analyses to produce desirable outcomes. This might involve trying different ways of removing statistical outliers or the effect of different statistical models, and then only reporting the approach that “worked” best in producing attractive results. Just as buying all the tickets in a raffle guarantees a win, exploiting researcher degrees of freedom can guarantee a false discovery.


The reason we fall into this trap is because of incentives and human nature. As Sterling showed in 1959, psychology journals select which studies to publish not based on the methods but on the results: getting published in the most prominent, career-making journals requires researchers to obtain novel, positive, statistically significant effects. And because statistical significance is an arbitrary threshold (p<.05), researchers have every incentive to tweak their analyses until the results cross the line. These behaviours are common in psychology – a recent survey led by Leslie John from Harvard University estimated that at least 60% of psychologists selectively report analyses that “work”. In many cases such behaviour may even be unconscious.
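To make the "degrees of freedom" point concrete, here is a small simulation sketch (my own illustration, not from the article): two groups are drawn from the same distribution, so the null hypothesis is true, yet a researcher who tries four "reasonable" outlier-removal rules and reports whichever one "worked" gets a false-positive rate well above the nominal 5%. The specific rules and sample sizes are arbitrary choices for the demo.

```python
import math
import random

def two_sided_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation to a Welch test)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = abs(ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(z / math.sqrt(2))

def trim(xs, z_cut):
    """One plausible outlier rule: drop points more than z_cut sample SDs from the mean."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return [x for x in xs if abs(x - m) <= z_cut * sd]

rng = random.Random(42)
n, sims = 30, 2000
cuts = [None, 3.0, 2.5, 2.0]  # four defensible-looking analysis variants

single_hits = flexible_hits = 0
for _ in range(sims):
    a = [rng.gauss(0, 1) for _ in range(n)]  # both groups come from the SAME
    b = [rng.gauss(0, 1) for _ in range(n)]  # distribution: any "effect" is noise
    ps = [two_sided_p(a if c is None else trim(a, c),
                      b if c is None else trim(b, c)) for c in cuts]
    single_hits += ps[0] < 0.05      # analysis fixed in advance: ~5% false positives
    flexible_hits += min(ps) < 0.05  # report whichever rule "worked" best

print(f"fixed analysis false-positive rate:  {single_hits / sims:.3f}")
print(f"best-of-4 analyses false-positive rate: {flexible_hits / sims:.3f}")
```

The fixed analysis hovers near the nominal 5%, while picking the best of four correlated analyses roughly doubles it - exactly the raffle-ticket effect the article describes, without any data being faked.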