
Closed Thinking: Without scientific competition & open debate, much psychology research goes nowhere

Discussion in 'Other Health News and Research' started by Dolphin, Aug 18, 2013.

  1. Dolphin

    Dolphin Senior Member

    This has no direct relevance to ME/CFS; it's just that some of us feel it's useful to understand psychology and psychological research. Also, it can be interesting to read the criticisms that are being made.

    I saw James C. Coyne plug this on Twitter. A quick search of Twitter suggests it's of interest to quite a few psychologists.

    This is heavier than the sort of piece one might find in a daily newspaper (Science News is like a journal, it seems).
    To be honest, I didn't understand a couple of things, and would need to see a few examples to feel confident I'd fully understood some other points. However, I thought it was interesting to see the sorts of criticisms that are leveled at psychology.
  2. alex3619

    alex3619 Senior Member

    Logan, Queensland, Australia
    I have said the same thing about BPS theories, and in particular BPS ME theories. They are such a small community of researchers that proper review of their work is less likely to occur. The methodologies practiced in such areas tend not to be properly reviewed either. It's something I hope to look into more in time.
    Sean likes this.
  3. Esther12

    Esther12 Senior Member

    This is something that often amazes me with psych papers. There will sometimes be a very cursory reference to the fact that, actually, the results presented could be explained by all manner of different things, but that rarely seems to stop someone from spending almost the entire paper pushing their own pet theory, regardless of how weak it is.

    The discussion section of most papers should really be limited to "We did this poorly designed experiment that could never have really shown much, here are our results:"
    Valentijn, peggy-sue and alex3619 like this.
  4. Simon


    Monmouth, UK
    This is from the main paper used as a reference for the article above: The Long Way From α-Error Control to Validity Proper.

    False negatives - rejecting true theories - could be a bigger problem
    One point that struck me was the emphasis on needing to avoid false negatives as well as false positives. False positives, where researchers wrongly reject the null hypothesis and assume they have found something, have received most of the attention. False negatives, where researchers wrongly dismiss a correct hypothesis, have had much less.

    And this article argues that false negatives are the bigger problem: false positives can be shown to be wrong by replication (ha, if it ever happened!), whereas false negatives mean correct new theories tend to be abandoned. That's why it matters - it stifles the very new theories that would make real progress. And the statistical improvements that minimise false positives also make false negatives more likely.
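    A quick sketch of that trade-off, if it helps - this is a standard one-sided one-sample z-test power calculation using only the Python standard library, and the effect size and sample size are made-up illustrative numbers, not figures from the paper:

    ```python
    from statistics import NormalDist

    nd = NormalDist()

    def power(effect, n, alpha):
        """Power of a one-sided one-sample z-test for an effect of
        `effect` standard deviations with sample size n at level alpha."""
        z_crit = nd.inv_cdf(1 - alpha)  # a stricter alpha raises the bar
        return 1 - nd.cdf(z_crit - effect * n ** 0.5)

    # Same hypothetical study (effect 0.3 SD, n = 50), tightening alpha:
    for alpha in (0.05, 0.01, 0.005):
        beta = 1 - power(0.3, 50, alpha)  # false-negative rate
        print(f"alpha={alpha}: beta = {beta:.2f}")
    ```

    Each step down in alpha (fewer false positives) pushes beta up (more false negatives), exactly the tension described above.
    
    
    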

    Let's not get carried away, though. Most psych studies have a power (1 − β) to detect a true positive of about 55-80%. So in the exceptionally unlikely scenario of every tested hypothesis being correct, published results should only come out positive 55-80% of the time, and should wrongly come out negative 20-45% of the time. Instead, over 95% of all published results are positive - more than is theoretically possible even if all hypotheses are true! That's a pretty good sign the system is not working.
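    The arithmetic is easy to check with a toy simulation - this is just an illustration of the point above, with the power values and study count picked for the example, not taken from any survey:

    ```python
    import random

    random.seed(0)

    def simulate_positive_rate(power, n_studies=100_000):
        """Fraction of studies reporting a positive result, in the
        best-case world where every tested hypothesis is actually true."""
        hits = sum(random.random() < power for _ in range(n_studies))
        return hits / n_studies

    for power in (0.55, 0.80):
        rate = simulate_positive_rate(power)
        print(f"power={power:.2f}: roughly {rate:.0%} positive results at best")
    ```

    Even in that best case the positive rate can't exceed the studies' power, so a literature that is over 95% positive cannot be an honest record of the tests run.
    
    
    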

    Focusing on alternative hypothesis
    In my stats course, the lecturer (a cognitive psychology prof) had a very dim view of the null hypothesis. The idea that a theory/model was merely 'better than nothing' struck him as deeply unimpressive. He preferred comparing competing models to see which one worked best. It's no good showing your model is (weakly) consistent with the data - does it do better than credible alternatives? That would involve, e.g., comparing a CBT model against a model that says improvements are due to behaviour changes, and against a model that assumes behaviour changes in response to reduced fatigue.
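    One standard way to compare competing models rather than just beating the null is an information criterion like AIC - a minimal standard-library sketch on synthetic data (the models, data, and AIC choice are mine for illustration, not what the lecturer used):

    ```python
    import math
    import random

    random.seed(1)

    # Synthetic data: y really does depend linearly on x, plus noise.
    xs = [i / 10 for i in range(50)]
    ys = [2.0 + 0.5 * x + random.gauss(0, 1) for x in xs]

    def aic(rss, n, k):
        """Akaike Information Criterion for a least-squares fit with
        k parameters; lower is better."""
        return n * math.log(rss / n) + 2 * k

    n = len(xs)

    # Model A: 'null'-style intercept-only model (predict the mean).
    mean_y = sum(ys) / n
    rss_a = sum((y - mean_y) ** 2 for y in ys)

    # Model B: simple linear regression, fitted by least squares.
    mean_x = sum(xs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    rss_b = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

    print("AIC, mean-only model:", round(aic(rss_a, n, 1), 1))
    print("AIC, linear model:   ", round(aic(rss_b, n, 2), 1))
    ```

    The AIC penalises the extra parameter, so the linear model only wins if it buys a genuinely better fit - which is the 'does it beat credible alternatives?' question, rather than 'is it better than nothing?'.
    
    
    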

    Robust, testable models are the way to evolve science
    The author was arguing that trying to improve statistical rigour to reduce false positives wasn't enough on its own. To really make progress, psychologists need to work on generating theories that make clear, testable predictions (both what should happen and what should not) - and to test those against other credible theories, not just the null hypothesis.
    biophile, alex3619, Esther12 and 2 others like this.
  5. Dolphin

    Dolphin Senior Member

    I imagine this could be a problem in the ME/CFS field generally, not just with psychological theories and the like, at least for subsets (which may not be strictly what is being referred to): a certain abnormality could exist in some of the patients even if, overall, there isn't a clear signal.
    Simon likes this.
