Discussion in 'Other Health News and Research' started by Dolphin, Feb 2, 2016.
Has anyone seen the full article? I'd love to read it!
Some people get papers for free from sci-hub.io
It does surprise me how little attention these important issues seem to get. A key part of what has gone so wrong with us imo.
Talking more about talking cures: cognitive behavioural therapy and informed consent
C R Blease
This article is from C. Blease, who recently gave a talk about CFS and medical ethics, so it might interest some.
It's about CBT in general. Her main argument is that while there's evidence CBT works (or at least she thinks there's sufficient evidence - I'm not so sure), there is no evidence that it works via the mechanisms it is supposed to target (i.e. helping the patient identify unhelpful cognitions and challenge their validity). Rather, the biggest factors determining therapeutic success are things like therapist beliefs (how much they buy into the theory), the therapist-patient alliance, and patient expectation.
So if the therapist says "CBT works like this..." and then spouts the unhelpful-cognitions stuff, they are engaging in pseudoscience. It's no different from saying that magnet therapy will realign your energies.
Available in full text here (it's short):
These are factors that can introduce bias on subjectively rated measures.
Therapist beliefs: a therapist who already believes in the therapy may rate it as more effective than it actually is. Their enthusiasm may also affect patient expectation.
Therapist-patient alliance: patients may rate a therapy as more effective than it is out of politeness, and may be more willing to conform to the therapist's views if there is a strong therapeutic alliance.
Patient expectations: if expectations are high, patients are more likely to describe the therapy as more effective than it actually is.
It seems likely that the efficacy is overstated and in part an artifact of methodological weaknesses.
Yes, I agree - Blease takes it as a given that these treatments are effective, and doesn't go into the worrying situation where the only elements of your treatment that work are subject to artefact.
If I were a CBT practitioner, I'd be very worried now.
Anyone interested in this topic: Blease has also recently co-edited a special issue devoted to psychotherapy and the placebo effect. Pretty much every article says that it's silly to try to extract the placebo effect from efficacy studies - it's "part of the treatment":
Nobody worries about a nocebo effect?
Co-edited with Irving Kirsch, a powerful placebo guy.
I sometimes get the impression that some people interested in the philosophy/ethics of consent like assuming that placebos have a powerful effect on human health because it gives them an interesting problem to pontificate on.
(Also, I'm not entirely dismissive of placebos having a genuinely positive effect in some situations, even if there seems to be a lot of spin and shoddy research in this area. It seems plausible to me that placebos could have a genuinely positive short-term effect on how people engage with/assess/experience symptoms like pain).
And how does one gauge how long the placebo effect lasts?
I don't know. Seems very difficult to distinguish it from biased reporting. I'm opposed to any attempts to use placebo as a treatment (beyond just treating people kindly).
https://twitter.com/statuses/798711913630810112
Not on the issue of informed consent, but my paper discussed how the collection of data on possible harms from CBT (and GET, and other non-pharmacological interventions) has been poor, which may encourage people to think these therapies can't be harmful.
Please note that if researchers have means of eliminating evidence of harms from data, or even declines in function they refuse to call harms, there will be random variations to support claims of minor improvements. This is a somewhat subtle point that gets overlooked.
I have not said much about this previously because we were dealing with objective data from PACE which showed "recovered" patients functioning like patients with stage II or III heart failure. (Did any reviewer or journalist mention this without prompting?) Compared to this huge oversight, positive results due to random variation plus confusing "intention to treat" numbers with available data numbers were minor problems.
Another minor factor turns up in the well-known problem of test/retest familiarization producing spurious evidence of minor improvement.
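That test/retest point can be sketched with a tiny simulation. To be clear, all the numbers here are invented for illustration (the scores, noise level, entry cutoff, and practice effect have nothing to do with any real trial's data): if a trial enrols people partly on a low baseline score, measurement noise plus a small familiarisation effect will produce apparent "improvement" at retest with no treatment at all.

```python
import random
import statistics

random.seed(42)

N = 10_000
TRUE_SCORE = 50        # hypothetical stable underlying performance (made up)
NOISE_SD = 10          # day-to-day measurement variability (made up)
PRACTICE_EFFECT = 2    # small gain just from familiarity with the test (made up)
ENTRY_CUTOFF = 45      # trial only enrols people scoring below this at baseline

# Baseline measurement: true score plus noise
baseline = [random.gauss(TRUE_SCORE, NOISE_SD) for _ in range(N)]

# Enrolment selects the low scorers, so on average their noise was negative
enrolled = [b for b in baseline if b < ENTRY_CUTOFF]

# Retest: same true score, fresh noise, plus a practice effect -- no treatment
retest = [random.gauss(TRUE_SCORE, NOISE_SD) + PRACTICE_EFFECT
          for _ in enrolled]

improvement = statistics.mean(retest) - statistics.mean(enrolled)
print(f"Apparent improvement with no treatment at all: {improvement:.1f} points")
```

Most of the spurious gain here comes from regression to the mean (selecting on a noisy low baseline), with the practice effect adding a little on top - which is why untreated control groups matter so much for subjective or familiarisation-prone measures.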
Objective measures of positive response to treatment were discordant, with the ones showing marginal gains compromised by allowing patients who didn't feel like walking to decline the test. This was not the case for the step test, which thus showed no real change in physical performance. This negative result was selectively ignored, and not published until the paper on long-term follow-up.
Does anyone know of any legitimate reason for delayed reporting of results the authors did not like?
The end result was questionable evidence of some minor positive effect, and the authors' interpretation was that patients therefore needed much more of the same. They had actually falsified the "false illness beliefs" hypothesis, and failed to produce actual support for the idea that more of the same would be better.
The only way I can see to get from those data to their conclusion is through deliberate misrepresentation. Then again, I am not a specialist familiar with treating people who are deep in self-delusion.
Small correction: Both the long-term follow-up paper and a mediation paper were published in 2015. The step test results were in the mediation paper.
Right, I get the material in various papers mixed up. This is probably not an accident.
Consider the comment in the economic analysis paper that there was a small increase in participants' need for government spending - though exactly what spending, and what the authors consider "small", is a good question. This is a qualitative statement based on actual numbers available to the authors, yet we are left with an extremely vague idea of what it means. The first time through I actually missed this statement.
For comparison the statement that the "robust" benefits reported in the original trial could be expected to continue into the indefinite future was central to the arguments of the economic analysis. This was strange when you looked at results of long-term follow-up published at about the same time. I had the strong suspicion those authors did not expect people to compare claims and data published in different papers.
The real question is why the questionable data from the walk test, which the authors themselves later claimed was poorly implemented, were released in 2011, while the data from the step test did not appear until 2015. I can't see any way those data would not have been available at the time of the original publication.
Failing to complete the use of Actimeters as originally intended is one indication that the authors did not take physical performance seriously. Making all objective measurements secondary, with questionnaires primary, is another. Selectively delaying data from a test of physical performance which had at least the advantage that all participants took the test, while championing a different test with the major flaw that 1/3 of those treated did not provide data, leaves me with the suspicion they only wanted the term "objective" as a word to bandy about.