• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Are meta-analyses done by promoters of psychological treatments as tainted as those done by Pharma?

Dolphin

Senior Member
Messages
17,567
Are meta-analyses done by promoters of psychological treatments as tainted as those done by Pharma?

By

James C. Coyne

(May 20)

http://blogs.plos.org/mindthebrain/...treatments-tainted-meta-analyses-done-pharma/

James C. Coyne is an influential psychologist who isn't afraid to criticise other psychological researchers if they don't maintain high standards.

This probably isn't his best piece, in my opinion. Most of it is about the Triple P parenting program, and one has to read quite a bit to get to his main point(s).

Anyway, here's a piece near the start that I thought was interesting enough:

Meta-analyses everywhere, but not a critical thought…

What is accepted as necessary for drug trials is routinely ignored for trials of psychotherapy treatments and meta-analyses integrating their results. Investigator allegiance has been identified as one of the strongest predictors of outcomes, regardless of the treatment that is being evaluated. But this does not get translated into enforcement of disclosures of conflict of interests by authors or by readers having heightened skepticism.

Yet, we routinely accept claims of superiority of psychological treatments made by those who profit from such advertisements. Meta-analyses allow them to make even bigger claims than single trials. And journals accept such claims and pass them on without conflict of interest statements. We seldom see any protesting letters to the editor. Must we conclude that no one is bothered enough to write?

Meta-analyses of psychological treatments with undisclosed conflicts of interest are endemic. We already know investigator allegiance is a better predictor of the outcome of the trial than whatever is being tested. But this embarrassment is explained away in terms of investigator’s enthusiasm for their treatment. More likely, results are spurious or inflated or spun. There is high risk of bias associated with investigators having a dog in the fight. And a great potential for throwing the match with flexible rules of data selection, analysis, interpretation (DS, A, and I). And then there is hypothesizing after results are known (HARKING).

It is scandalous enough that the investigators can promote their psychological products by doing their own clinical trials, but they can go further, they can do meta-analysis. After all, everybody knows that a meta-analysis is a higher form of evidence, a stronger claim, than an individual clinical trial.

Because meta-analyses are considered the highest form of evidence for interventions, they provided an important opportunity for promoters to brand their treatments as evidence-supported. Such a branding is potentially worth millions in terms of winning contracts from governments for dissemination and implementation, consulting, training, and sale of materials associated with the treatment. Readers need to be informed of potential conflict of interest of the authors of meta-analyses in order to make independent evaluations of claims.

And the requirement of disclosure ought to apply to reviewers who act as gatekeepers for what gets into the journals. Haven’t thought of that? I think you soon will be. Read on.

There is a Cochrane review of graded exercise therapy (GET) currently being undertaken by GET researchers. I think it is useful to know the sorts of criticisms that can be made of meta-analyses.
 

Dolphin

Senior Member
Messages
17,567
As I mentioned, it takes quite a lot of reading to find out Dr. Coyne's concerns about the Triple P Parenting evidence base. Eventually he says:

You can read the open access article http://www.biomedcentral.com/1741-7015/11/11, but here is the crux of my critique

Many of the trials evaluating Triple P were quite small, with eight trials having less than 20 participants (9 to 18) in the smallest group. This is grossly inadequate to achieve the benefits of randomization and such trials are extremely vulnerable to reclassification or loss to follow-up or missing data from one or two participants. Moreover, we are given no indication how the investigators settled on an intervention or control group this small. Certainly it could not have been decided on the basis of an a priori power analysis, raising concerns of data snooping [14] having occurred. The consistently positive findings reported in the abstracts of such small studies raise further suspicions that investigators have manipulated results by hypothesizing after the results are known (harking) [15], cherry-picking and other inappropriate strategies for handling and reporting data [16]. Such small trials are statistically quite unlikely to detect even a moderate-sized effect, and that so many nonetheless get significant findings attests to a publication bias or obligatory replication [17] being enforced at some points in the publication process.

Just before it he quoted his own abstract:
Such trials are particularly susceptible to risks of bias and investigator manipulation of apparent results. We offer a justification for the criterion of no fewer than 35 participants in either the intervention or control group. Applying this criterion, 19 of the 23 trials identified by Wilson et al. were eliminated.

Given the small numbers often used in CFS trials, I find this interesting. Quite a few might get excluded under this criterion.
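Coyne's point about underpowered trials can be sketched numerically. Here is a minimal back-of-the-envelope calculation using a normal approximation to a two-sample test; the sample sizes and the "moderate" effect size (Cohen's d = 0.5) are illustrative assumptions, not figures taken from any of the trials discussed:

```python
# Rough statistical power of a two-sided two-sample comparison,
# using a normal approximation. Illustrates why trials with fewer
# than 20 participants per arm are unlikely to detect a moderate
# effect (d = 0.5), and why a larger minimum group size matters.
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def approx_power(n_per_group, d, z_crit=1.96):
    """Approximate power to detect standardized effect d at two-sided alpha = 0.05."""
    se = sqrt(2 / n_per_group)   # SE of the difference in standardized means
    return normal_cdf(d / se - z_crit)

for n in (9, 18, 35, 64):
    print(f"n = {n:2d} per group -> power = {approx_power(n, 0.5):.2f}")
# Power comes out at roughly 0.18, 0.32, 0.55 and 0.81 respectively:
# even 35 per arm (Coyne's cutoff) gives only about a coin-flip chance
# of detecting a moderate effect.
```

The striking implication, as the quoted passage argues, is that when many such small trials all report significant positive results, something other than chance (publication bias, flexible analysis) is likely at work.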
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Most CFS studies would be considered pilot studies. Typically we cannot get enough funding to do larger studies. Yet this may also be why, in part, there have been inconsistent findings in some of the immunology, quite aside from problems with the reliability of cytokine testing, cohort issues, etc.