This blog post from James Coyne focuses on couples therapy for cancer patients, but some of the themes sounded rather familiar:
My recent blog post examining the Triple P Parenting Program literature found that expensive implementations of that program were being justified by data that did not actually support its effectiveness. In this particular case, the illusion was preserved by undeclared financial conflicts of interest of those generating these little studies, but also dominating the peer review process. Null trials were kept from being published or spun to look like positive trials, and any criticism was suppressed by negative peer reviews recommending rejection.
Most often, in psychotherapy research at least, there are no such obvious financial interests in play. Peer review typically draws upon persons who are identified as experts in an area of research. That sounds reasonable, except that in areas of research dominated by similarly flawed studies, we cannot reasonably expect peer reviewers to be overly critical of studies that share the same flaws as their own.
And then there is the problem of peer reviewers who should be fairer, but whose objectivity is overridden by worry that the credibility of the field would be damaged by any tough tell-it-like-it-is critique. Such well-meaning reviewers may recommend rejection of a manuscript solely on the basis of the authors not playing nice by offering constructive suggestions, rather than commenting on the flaws in the literature that no one else is willing to acknowledge. Conspiracies of silence can develop so that no one comments on the obvious, and anyone inclined to do so is kept out of the published literature.
Systematic reviews and meta-analyses provide opportunities for recognizing larger patterns in a literature and acknowledging the difficulty or impossibility of drawing firm conclusions as to whether interventions actually work from available studies. Yet, too often reviewers simply put lipstick on a pig of a literature, and comment on how beautiful it is. Once such summaries are published, the likelihood decreases that anyone will go back to the primary studies and find the flaws, rather than relying on the secondary source that is now available.
Some of the problems he was describing were even worse than those normally found in CFS work. Things like this ring a bell though:
There was almost no evidence that any of these trials had specified a primary outcome ahead of time. Rather, investigators typically administered a number of measures and were free to pick the one that made the trial look best. That is termed selective outcome reporting. Because it had been happening so much in the medical literature, high impact medical journals now require investigators to register their designs and their primary outcomes in a publicly accessible place before they even run the first patient. No pre-registration means no publication in the journal. No such reforms have taken hold in the psychotherapy literature.
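To see why letting investigators pick the best-looking outcome matters so much, here's a quick toy simulation (my own illustration, not from the blog): it runs null trials, i.e. no real effect in either arm, measures several outcomes per trial, and counts how often the best-looking outcome crosses a nominal 5% significance cutoff compared with a single outcome specified in advance.

```python
import math
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

def simulate(n_sims=2000, n_outcomes=8, n=25):
    """Simulate null trials (both arms drawn from the same distribution)
    and compare an honest, pre-specified outcome against the
    cherry-picked best of several outcomes."""
    # ~5% two-sided cutoff for the difference of two means of
    # standard-normal samples of size n (known-variance approximation)
    cutoff = 1.96 * math.sqrt(2.0 / n)
    fixed = cherry = 0
    for _ in range(n_sims):
        diffs = []
        for _ in range(n_outcomes):
            a = [random.gauss(0, 1) for _ in range(n)]
            b = [random.gauss(0, 1) for _ in range(n)]
            diffs.append(abs(mean(a) - mean(b)))
        if diffs[0] > cutoff:       # outcome chosen before the trial
            fixed += 1
        if max(diffs) > cutoff:     # best-looking outcome, chosen after
            cherry += 1
    return fixed / n_sims, cherry / n_sims
```

With eight outcomes, the cherry-picked "significance" rate lands around 1 - 0.95^8, roughly a third of null trials, versus about 5% for the pre-specified outcome: the multiple-comparisons problem that pre-registration is meant to shut down.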
There's a section under the sub-heading 'The sandbagging' which mentions 'Hedges' g' as a (perhaps flawed) way in which meta-analyses can try to account for the problem of small studies showing big positive effects while large studies are negative or show only small positive effects. He mainly talks about the politics of this, but the statistical technique could also be interesting to some here. Using small studies to justify big claims is certainly a problem with CFS work, and seems a common problem in psychiatry - the results from PACE and FINE seem much more realistic, and it is only the spin which has allowed them to claim any consistency with past results.
http://blogs.plos.org/mindthebrain/...s-interventions-for-cancer-patients-research/
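For anyone curious about the statistic itself: Hedges' g is just Cohen's d (the difference in group means divided by the pooled standard deviation) multiplied by a small-sample correction factor that shrinks the inflated effect sizes tiny trials tend to produce. A minimal sketch in Python (the function name and example numbers are mine, for illustration only):

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp                # Cohen's d
    # Correction factor J < 1: the smaller the samples, the more d shrinks
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Two groups of 20, means 10 vs 8, both SDs 4: d = 0.5, g slightly smaller
print(hedges_g(10, 8, 4, 4, 20, 20))  # ~0.49
```

Note the correction only nudges the estimate downward; it does nothing about the publication bias the blog is actually complaining about, where the small negative trials never appear in the literature at all.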
I just cannot get my expectations low enough for academia. I'm so cynical about it all now... but am still regularly disappointed. At 20, I thought trusting expert academics was the intelligent thing to do, and was somewhat sneering about what I saw as anti-intellectualism amongst many of my peers. Actually, they just had a much more realistic view of the human and political nature of academia. Seeing James Coyne struggling to get his criticisms published is partially cheering (at least it's not just CFS), partially terrifying (it's not just CFS).
Whither our attempted criticism of couples research?
Although the reviews were quite harsh, only one of the five reviewers recommended outright rejection of our commentary, but a number suggested that we be limited to a 400 word letter to the editor with one reference. Based on their near unanimity, the editor rejected our appeal, and so we will have the thankless exercise of condensing all our concerns into 400 words. Such strict limits on post-publication commentary arose in an era when paper journals were worried about using up their scarce pages on letters to the editor. However, this particular journal no longer publishes a paper edition, and so the editor really should reconsider the tokenism of a 400 word letter.