Discussion in 'Latest ME/CFS Research' started by Dolphin, Jan 25, 2011.
Only got significant results because control group got worse!!?
It looks to me that they only got statistically significant improvements (none large) because the control group (who got another intervention) got worse on all the measures.
Also, it looks to me like they should have reported the sort of results I'm copying below but didn't (i.e. where is the within-group analysis?).
A paired t-test would have been better, but we don't have that data. In any case, one often can't do a paired t-test because data is missing for one reason or another.
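As an illustration of what that within-group test would look like if we had the raw scores, here's a minimal paired t sketch in Python - the scores below are hypothetical, not the study's data:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(before, after):
    """Paired t statistic: mean of the within-subject differences
    divided by its standard error."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical pre/post scores for one group (not the study's data)
before = [24, 30, 27, 22, 29, 31]
after = [22, 28, 27, 21, 26, 30]
print(round(paired_t(before, after), 2))
```

A paired test like this uses each subject as their own control, which is exactly the within-group information the paper doesn't report.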
Maybe I'm missing something?
People can feel free to use this if they want to write a letter. I've way too much in my "to be done" basket
Analysing all the data in Table 2: we are also told in the text that there was no difference for "CFS symptom frequency", but I think it is slightly misleading not to put it in Table 2, as the table makes it look like everything improved compared to the controls.
POMS-Total mood disturbance
QOLI Raw score
QOLI T score
Total CDC symptom severity
Descriptions of the two interventions
What sort of statistical analysis did they use - was it some sort of ANOVA? And did they try to calculate an effect size as well as stating significance? I'm something of a stats novice so my questions may be off target.
F-test. Here's an example:
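For anyone unfamiliar with what an F-test actually computes, here's a minimal one-way version in Python - it compares between-group to within-group variance. The numbers are hypothetical change scores, not the study's data:

```python
from statistics import mean

def f_oneway(*groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical change scores for two groups (not the study's data)
treatment = [-3, -2, -4, -1, -3]
control = [1, 0, 2, -1, 1]
print(round(f_oneway(treatment, control), 2))
```

Note how a large F can come entirely from the groups moving apart - a treatment group improving slightly while a control group worsens gives the same F as a treatment group improving a lot, which is the concern raised above.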
Yes, they calculated a Cohen's d effect size. All the ones that were statistically significant had an absolute value of 0.20 to 0.43.
However, these look like effect sizes of the differences. Given that the control group got worse on all the measures, what is more interesting, I think, is the change within the experimental group. Can one calculate effect sizes with the data we have?
I'm not sure based on this comment:
This is what I get when I stick the data we have in for "Perceived stress":
All the figures are given in my second post so if people are used to calculating effect sizes, feel free to do them.
If one uses (mean2 - mean1)/SD1, the result is -0.24.
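For anyone who wants to run the Table 2 numbers themselves, here's a minimal sketch of both formulas in Python - the function names and example figures are my own, not from the paper:

```python
from math import sqrt

def within_group_es(mean1, sd1, mean2):
    """Within-group effect size as used above: (mean2 - mean1) / SD1,
    i.e. the change in the group mean scaled by the baseline SD."""
    return (mean2 - mean1) / sd1

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Between-group Cohen's d using a pooled standard deviation."""
    pooled = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean2 - mean1) / pooled

# Hypothetical summary statistics (not the study's figures)
print(within_group_es(20.0, 5.0, 18.0))
```

The within-group version only needs the two means and the baseline SD, which is why it can be computed from a summary table even without the raw data.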
It doesn't sound like they had much success. Did they even try to measure functionality? It seems like yet another CBT study: some improvements in symptoms and stress but nothing very dramatic... When are they going to stop throwing money at this? This was a big NIH R01 grant... Don't they know what they know by now?
How very true. Do you know how much the study cost?
The answer to that is it's really hard to tell. It appears to be part of a 5-year study focused on telephone CBSM (although no tele-CBSM was used in this study). It looks like they got some good preliminary data indicating the program was reducing cytokine and inflammatory marker levels and cortisol abnormalities, repackaged it, and got more money in 2010 to look at those markers and to add patients' partners to the study. I imagine there are other studies coming out.
2006 - $343,000
2007 - $334,000
2008 - $327,000
2010 - $536,000
Grand Total - approx. $1,800,000
There appears to be a later iteration of the study... It has a different R01 number.
I agree on both counts.
Selective reporting of results. Basically they've failed to report the main effect measures (i.e. the effect of CBT vs control, and the before/after effect), presumably because they were not significant, as shown by Dolphin's calculation. It doesn't say much for the peer-review process or the standards of the journal that they let the authors get away with this. Instead, the authors report the Group x Time interaction, which is probably only significant because the control group got slightly worse - and even then the results are only just significant.
Study design flaws. As the authors acknowledge in the discussion, the control group is flawed as it involved far less contact time, and they didn't measure whether or not results were sustained at follow-up. Doing either of these is likely to reduce any measured effect, so the fig leaf of significance they've conjured up is likely to disappear.
Surely a sensible response to these results would be to publish what they've got then slip quietly into the night, not to suggest spending more funds and effort pursuing a lost cause? Thanks to Cort for the funding figures.
Finally, it's worth noting that this study was based on using CBT with a stress model for CFS, rather than the more usual deconditioning model, so the results are not directly relevant to the debate about 'usual' CBT for CFS.
Ellen Goudsmit reviews that handbook at:
There are 4 paragraphs on the Weiss et al. section, although maybe only one on the SMART ENERGY program.
Ellen was annoyed about that handbook and given the Bleijenberg chapter (which I have read) I couldn't blame her. Generally I have no problem with stuff Lenny Jason writes and he was just the editor which isn't the same thing at all as writing "bad" stuff like the Bleijenberg chapter.
Dr. Klimas comments on the study
Mindy Kitei has an excellent interview with Dr. Antoni here
CFS Central sent questions to Dr. Michael Antoni, corresponding author of the new study using cognitive behavioral stress management to treat ME/CFS.
Given the intervention in the Lopez et al. study is quite different from Dutch/UK CBT, I'm not sure it tells us anything about the cure rate from the latter.
Not that I believe that study* really showed "full recovery" in 23%. I have a half finished paper on the topic I might submit sometime.
*Knoop H, Bleijenberg G, Gielissen MF, van der Meer JW, White PD. Is a full recovery possible after cognitive behavioural therapy for chronic fatigue syndrome? Psychother Psychosom. 2007;76(3):171-6. PMID: 17426416
Dr Nancy Klimas' comments sound good to me, Cort, especially if she's going to unravel the claims of the psychos. Personally, I'm happy to wait.
Hmmm, something doesn't add up here - am I missing something? Either Klimas misinterpreted something he said, or White himself is shockingly ignorant of the research published in his own field of interest, and/or engaging in purposeful cherry-picking, and/or, to be frank, making shit up.
First of all, Lloyd et al. 1993 demonstrated no overall effect in a trial which used CBT, nearly two decades ago - no apparent cures from CBT. Since then, there have been a few other studies which have demonstrated basically no effect worth considering; AFAIK there is no "25% cure rate". There are one or two highly optimistic studies which do report such rates of "full recovery" - apparently White sees these as representative of the entire literature?
All that is before you consider what others have already said, such as the burden is on White to prove he is studying ME/CFS and that there is a reliable 25% "cure" rate in ME/CFS using appropriate measurements rather than dubious fatigue scales etc. Otherwise his claim is just another "CFS assfact" and he may be engaging in outright quackery if pushing those claims onto his patients.
On the study Klimas was involved in (Lopez et al. 2011), White could attempt to argue that CBSM is not the same style of CBT he uses. In the full text they claim that "a recent review showed that out of 15 studies, CBT was more successful in alleviating fatigue, depression, physical functioning, and more when compared to usual care", citing the 2008 Cochrane systematic review on CBT. However, they failed to mention that the review deemed the effects for physical functioning and depression (and more?) non-significant.
White is probably misrepresenting the literature. It's typical for the psychobabble crowd to ignore everything which cannot be contorted to support their pet ideas.
Perhaps there aren't any psychologist/psychiatrist CBT studies? Lloyd is, I think, not a psych.
Ian Hickie was also involved, I think he's a psychiatrist.