• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

James Coyne: Shhh! Keeping quiet about the sad state of...

Messages
13,774
This blog post from James Coyne focuses on couples therapy for cancer patients, but some of the themes sounded rather familiar:

My recent blog post examining the Triple P Parenting Program literature found that expensive implementations of that program were being justified by data that did not actually support its effectiveness. In this particular case, the illusion was preserved by undeclared financial conflicts of interest of those generating these little studies, but also dominating the peer review process. Null trials were kept from being published or spun to look like positive trials, and any criticism was suppressed by negative peer reviews recommending rejection.
Most often, in psychotherapy research at least, there are no such obvious financial interests in play. Peer review typically draws upon persons who are identified as experts in an area of research. That sounds reasonable, except that in areas of research dominated by similarly flawed studies, we cannot reasonably expect peer reviewers to be overly critical of studies that share the same flaws as their own.

And then there is the problem of peer reviewers who should be fairer, but whose objectivity is overridden by worry that the credibility of the field would be damaged by any tough tell-it-like-it-is critique. Such well-meaning reviewers may recommend rejection of a manuscript solely on the basis of the authors not playing nice by offering constructive suggestions, rather than commenting on the flaws in the literature that no one else is willing to acknowledge. Conspiracies of silence can develop so that no one comments on the obvious, and anyone inclined to do so is kept out of the published literature.

Systematic reviews and meta-analyses provide opportunities for recognizing larger patterns in a literature and acknowledging the difficulty or impossibility of drawing firm conclusions as to whether interventions actually work from available studies. Yet, too often reviewers simply put lipstick on a pig of a literature, and comment how beautiful it is. Once such summaries are published, the likelihood decreases that anyone will go back to the primary studies and find the flaws, rather than relying on the secondary source that is now available.

Some of the problems he was describing were even worse than those normally found in CFS work. Things like this ring a bell though:

There was almost no evidence that any of these trials had specified a primary outcome ahead of time. Rather, investigators typically administered a number of measures and were free to pick the one that made the trial look best. That is termed selective outcome reporting. Because it had been happening so much in the medical literature, high impact medical journals now require investigators to register their designs and their primary outcomes in a publicly accessible place before they even run the first patient. No pre-registration means no publication in the journal. No such reforms have taken hold in the psychotherapy literature.

There's a section under the sub-heading 'The sandbagging' which mentions 'Hedges’ g' as a (perhaps flawed) way in which meta-analyses can try to account for the problem of small studies showing big positive effects while large studies are negative or show only small positive effects. He mainly talks about the politics of this, but the statistical technique could also be interesting to some here. Using small studies to justify big claims is certainly a problem with CFS work, and seems a common problem in psychiatry - the results from PACE and FINE seem much more realistic, and it is only the spin which has allowed them to claim any consistency with past results.
http://blogs.plos.org/mindthebrain/...s-interventions-for-cancer-patients-research/

Whither our attempted criticism of couples research?
Although quite harsh, only one of the five reviewers recommended outright rejection of our commentary, but a number suggested that we be limited to a 400 word letter to the editor with one reference. Based on their near unanimity, the editor rejected our appeal and so we will have the thankless exercise of condensing all our concerns into 400 words. Such strict limits on post publication commentary arose in an era when paper journals were worried about using their scarce pages on letters to the editor. However, this particular journal no longer publishes a paper edition, and so the editor really should reconsider the tokenism of a 400 word letter.
I just cannot get my expectations low enough for academia. I'm so cynical about it all now... but am still regularly disappointed. At 20, I thought trusting expert academics was the intelligent thing to do, and was somewhat sneering about what I saw as anti-intellectualism amongst many of my peers. Actually, they just had a much more realistic view of the human and political nature of academia. Seeing James Coyne struggling to get his criticisms published is partially cheering (at least it's not just CFS), partially terrifying (it's not just CFS).
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
I can't imagine that James Coyne gets invited to many psychologists' parties.
There's a section under the sub-heading 'The sandbagging' which mentions 'Hedges’ g' as a (perhaps flawed) way in which meta-analyses can try to account for the problem of small studies showing big positive effects while large studies are negative or show only small positive effects. He mainly talks about the politics of this, but the statistical technique could also be interesting to some here. Using small studies to justify big claims is certainly a problem with CFS work, and seems a common problem in psychiatry - the results from PACE and FINE seem much more realistic, and it is only the spin which has allowed them to claim any consistency with past results.

I just cannot get my expectations low enough for academia. I'm so cynical about it all now... but am still regularly disappointed.... Seeing James Coyne struggling to get his criticisms published is partially cheering (at least it's not just CFS), partially terrifying (it's not just CFS).
:) One reason I think conspiracy theories of CFS are overstated: mediocre research, and fields where strongly held views crowd out sound criticism, abound.

Hedges' g (a slight tweak of Cohen's d) is a perfectly good statistical technique. The problem, as Coyne points out, is that if a meta-analysis is dominated by small studies, Hedges' g won't guarantee a reliable answer, because there are so many problems with small studies (notably likely publication bias, where small negative studies never see the light of day).
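For anyone curious about the statistics side, the tweak is just a small-sample correction factor applied to Cohen's d. A minimal sketch in Python (the function name and the group means/SDs/sizes below are made-up, purely illustrative):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: Cohen's d multiplied by a small-sample bias correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp  # Cohen's d
    # Approximate correction factor J; closer to 1 as total n grows
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Identical group statistics, differing only in sample size:
# the correction shrinks the small study's estimate more.
small = hedges_g(12.0, 5.0, 10, 9.0, 5.0, 10)    # n = 10 per arm
large = hedges_g(12.0, 5.0, 100, 9.0, 5.0, 100)  # n = 100 per arm
print(small, large)
```

Note that the correction only removes a known mathematical bias in d from small samples; it does nothing about publication bias, which is Coyne's point.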

[Coyne] Peer review typically draws upon persons who are identified as experts in an area of research. That sounds reasonable, except that in areas of research dominated by similarly flawed studies, we cannot reasonably expect peer reviewers to be overly critical of studies that share the same flaws as their own.
An interesting point.
 
Messages
13,774
fields where strongly held views crowd out sound criticism

A conspiracy without knowing conspirators.

I've found a lot of Noam Chomsky's writing on manufacturing consent to be relevant to this sort of thing. It's annoying to remember myself at 18 being rather dismissive of concerns about this sort of thing, and assuming that academia was a place where people were motivated solely by a desire to pursue truth (other than the minority of evil ones who could be easily spotted).

An interesting point.

I was a bit unsure about that. Surely intellectually honest academics could, without difficulty, say something like "unfortunately this work shares the same limitations as this earlier study, and is unable to contribute much to our understanding" - especially as it seems that loads of rubbish studies do mention their failings. I think that turning a blind eye to these things indicates a desire to mislead, rather than just a desire to avoid drawing attention to the problems with one's own work.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I have made similar comments about narrow fields of research like CBT and psychogenic medicine - where is a truly independent expert going to come from for peer review? It is good to be seeing comments like this, though: I hope this continues until medicine undergoes a complete paradigm shift. It might happen. :cautious:
 

Dolphin

Senior Member
Messages
17,567
[Coyne] Peer review typically draws upon persons who are identified as experts in an area of research. That sounds reasonable, except that in areas of research dominated by similarly flawed studies, we cannot reasonably expect peer reviewers to be overly critical of studies that share the same flaws as their own.

I've heard some people say we don't need psychologists doing any sort of research. However, I don't think that's realistic, as one gets psychological studies with probably all medical conditions (that are not very rare). Surveys and the like can be so easy and cheap to do that I don't see them disappearing. In that scenario, I think it's important to have some sympathetic psychologists, e.g. Leonard Jason and some people he's worked with, who can introduce new, more sympathetic theories and be involved in peer-reviewing, etc.

The same effect is one of the many things that motivates me to try to raise money for biomedical research: hopefully that will keep or bring in sympathetic researchers, as there is likely always to be some research being done by CBT/GET/rehab-associated researchers, who may be biased towards those approaches and against research that challenges them. Somebody like Peter White, for example, may do the odd biological study, but my impression is he thinks we have the answers already (i.e. graded exercise and related ideas on the theme), and is largely just looking for evidence that can help promote such theories.
 

user9876

Senior Member
Messages
4,556
Here is an example of a reviewer pushing their own views

http://www.biomedcentral.com/imedia/1169912504863162_comment.pdf

Reviewer:
Carmine Pariante

Reviewer's report:
This is an interesting review and it will be helpful in clarifying the simplistic assumption that ME/CFS and chronic inflammation are the same process. It is thorough from a biological and mechanistic point of view; however, I would require (as a Major Compulsory revision) that ME/CFS is given the appropriate clinical and psychological interpretation. Saying that ME/CFS has mitochondrial dysfunction at its core is an overstatement, as these are all proposed mechanisms that are perhaps predisposing or contributing to the illness. For many, including this reviewer, CFS/ME is predominantly a condition triggered by excessive rest in predisposed individuals following acute triggers, and its interpretation requires a psychosocial and psychiatric framework. This needs to come across more clearly in the review, otherwise the readers may perceive this review as if the pathogenesis of CFS/ME has been fully discovered (and it is due to a mitochondrial dysfunction), which at this stage cannot be accepted as a proved statement. I would request that the abstract and page 14 in particular are edited to avoid the current emphasis on mitochondrial dysfunction
....
 

Sean

Senior Member
Messages
7,378
For many, including this reviewer, CFS/ME is predominantly a condition triggered by excessive rest in predisposed individuals following acute triggers, and its interpretation requires a psychosocial and psychiatric framework.


That is pretty blatant bias.