
Closed Thinking: Without scientific competition & open debate, much psychology research goes nowhere

Dolphin

Senior Member
Messages
17,567
This has no direct relevance to ME/CFS; it's just that some of us feel it's useful to understand psychology and psychological research. It can also be interesting to read the criticisms being made.

Closed Thinking
Without scientific competition and open debate, much psychology research goes nowhere
By Bruce Bower
Web edition: May 16, 2013
Print edition: June 1, 2013; Vol.183 #11 (p. 26)
http://www.sciencenews.org/view/feature/id/350464/description/Closed_Thinking
I saw James C. Coyne plug this on Twitter. A quick search of Twitter suggests it's of interest to quite a few psychologists.

This is heavier than the sort of piece one might find in a daily newspaper (Science News is like a journal, it seems).
To be honest, I didn't understand a couple of things, and would need to see a few examples to be confident I had fully understood some other points. However, I thought it was interesting to see the sorts of criticisms that are leveled at psychology.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
None of this is to say that psychology has no genuine theories, but many of them exist in splendid isolation. Most psychologists work in narrow communities, such as developmental psychology and social psychology, where established theories are rarely challenged.

I have said the same thing about BPS theories, and in particular BPS ME theories. They are such a small community of researchers that proper review of their work is less likely to occur. The methodologies practiced in such areas tend not to be properly reviewed either. It's something I hope to look into more in time.
 

Esther12

Senior Member
Messages
13,774
And indeed, as in other seminars Fiedler has run, only a few of the psychologists at the Dutch seminar came up with anything when they were asked to name an experiment that included a competing account for any set of results.

This is something that often amazes me with psych papers. There will sometimes be a very cursory reference to the fact that, actually, the results presented could be explained by all manner of different things, but that rarely seems to stop someone from spending almost the entire paper pushing their own pet theory, regardless of how weak it is.

The discussion section of most papers should really be limited to "We did this poorly designed experiment that could never have really shown much, here are our results:"
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
This is from the main paper used as a reference for the article above, The Long Way From α-Error Control to Validity Proper:

False negatives - rejecting true theories - could be a bigger problem
One point that struck me was the emphasis on needing to avoid false negatives as well as false positives. False positives, where researchers wrongly reject the null hypothesis and assume they have found something, have received most of the attention. False negatives, where researchers wrongly reject a correct hypothesis, have had less.

And this article argues that false negatives are the bigger problem: false positives can be shown to be wrong by replication (ha, if it ever happened!), while false negatives mean correct new theories tend to be abandoned - stifling the very theories that would make real progress. Worse, the statistical improvements that minimise false positives also make false negatives more likely.
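To make the trade-off concrete, here's a toy simulation (my own made-up numbers, not from the paper): at a fixed sample size, tightening alpha to cut false positives drives up beta, the rate of missed real effects.

# Toy simulation (assumed numbers): with sample size fixed, a stricter
# alpha means more real effects get missed (higher beta).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect, trials = 30, 0.5, 2000

# p-values from many experiments where a real effect genuinely exists
pvals = np.array([
    stats.ttest_ind(rng.normal(0.0, 1.0, n),
                    rng.normal(true_effect, 1.0, n)).pvalue
    for _ in range(trials)
])

for alpha in (0.05, 0.01, 0.001):
    beta = np.mean(pvals >= alpha)  # real effect present, but not detected
    print(f"alpha={alpha}: beta (missed real effects) ~ {beta:.2f}")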

However...
Let's not get carried away. Most psych studies have power (1 - beta) to detect a true effect of about 55-80%. So even in the exceptionally unlikely scenario of every tested hypothesis being correct, published results should only be positive 55-80% of the time, with true effects falsely rejected 20-45% of the time. Instead, over 95% of all published results are positive - more than is theoretically possible if all hypotheses are true! That's a pretty good sign the system is not working.
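The arithmetic, spelled out (assumed numbers, using the 55-80% power range above):

# Back-of-envelope sketch (assumed numbers): even if EVERY tested
# hypothesis were true, the share of positive results couldn't exceed power.
alpha, power = 0.05, 0.80   # power taken at the generous end of 55-80%

def positive_rate(share_true):
    # Expected fraction of significant results if everything were published:
    # true effects detected at `power`, false positives occurring at `alpha`.
    return share_true * power + (1 - share_true) * alpha

print(positive_rate(1.0))   # 0.80 - the ceiling, with every hypothesis true
print(positive_rate(0.5))   # 0.425 - a more plausible mix
# Yet over 95% of published results are positive: selective publication at work.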

Focusing on alternative hypotheses
In my stats course, the lecturer (a cognitive psychology prof) had a very dim view of the null hypothesis. The idea that a theory/model being 'better than nothing' counts as support struck him as deeply unimpressive. He preferred comparing competing models, to see which one worked best. It's no good just showing your model is (weakly) consistent with the data - does it do better than credible alternatives? That would mean, for example, testing a CBT model against a model that says improvements are due to behaviour changes, and against a model where behaviour changes follow from reduced fatigue.
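As a toy sketch of what model comparison can look like (invented data and variable names, purely illustrative):

# Toy sketch (invented data): compare two competing models by AIC,
# rather than asking whether one model merely beats the null.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
behaviour_change = rng.normal(size=n)
fatigue_reduction = 0.7 * behaviour_change + rng.normal(scale=0.5, size=n)
outcome = 0.6 * fatigue_reduction + rng.normal(scale=0.5, size=n)

# Model A: outcome driven directly by behaviour change
model_a = sm.OLS(outcome, sm.add_constant(behaviour_change)).fit()
# Model B: outcome driven by reduced fatigue
model_b = sm.OLS(outcome, sm.add_constant(fatigue_reduction)).fit()

# Lower AIC = better trade-off of fit against complexity
print(f"Model A (behaviour) AIC: {model_a.aic:.1f}")
print(f"Model B (fatigue)   AIC: {model_b.aic:.1f}")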

Robust, testable models are the way to evolve science
The author was arguing that trying to improve statistical rigour to reduce false positives wasn't enough on its own. To really make progress, psychologists needed to work on generating theories that make clear, testable predictions (both what should happen and what should not) - and test those against other credible theories, not just the null hypothesis.
 

Dolphin

Senior Member
Messages
17,567
False negatives - rejecting true theories - could be a bigger problem
One point that struck me was the emphasis on needing to avoid false negatives as well as false positives. False positives, where researchers wrongly reject the null hypothesis and assume they have found something, have received most of the attention. False negatives, where researchers wrongly reject a correct hypothesis, have had less.

And this article argues that false negatives are the bigger problem: false positives can be shown to be wrong by replication (ha, if it ever happened!), while false negatives mean correct new theories tend to be abandoned - stifling the very theories that would make real progress. Worse, the statistical improvements that minimise false positives also make false negatives more likely.

I imagine this could be a problem in the ME/CFS field generally, not just with psychological theories and the like, at least where subsets are concerned (which may not be strictly what is being referred to): a certain abnormality could exist in some of the patients even if, overall, there isn't a clear signal.
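A toy simulation of that subset point (my own assumed numbers):

# Toy simulation (assumed numbers): a genuine abnormality in 10% of
# patients can be invisible when patients are compared to controls en masse.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
controls = rng.normal(0.0, 1.0, 100)
subset = rng.normal(1.5, 1.0, 10)    # 10 patients with a real abnormality
rest = rng.normal(0.0, 1.0, 90)      # 90 patients without it
patients = np.concatenate([subset, rest])

p_all = stats.ttest_ind(controls, patients).pvalue
p_subset = stats.ttest_ind(controls, subset).pvalue
print(f"all patients vs controls:    p = {p_all:.3f}")    # typically not significant
print(f"affected subset vs controls: p = {p_subset:.4f}")  # clear signal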