The ME Association collated links to a lot of the media coverage here:
http://www.meassociation.org.uk/201...-has-been-covering-the-story-28-october-2015/
Might be of interest - I've plotted the fatigue and physical functioning scores (taken from the Lancet article supplementary material http://www.thelancet.com/…/PIIS2215-0366%2815%…/supplemental) as a function of the total number of sessions of CBT/GET. Patients in the CBT/GET arms of the trial automatically had 15 sessions during the trial. All participants could have extra sessions afterwards. Note that lower fatigue and higher physical functioning is good.
Here's a bigger version of the same image:
What are the double markers? What they reported at the end of the study and what they reported at follow-up? Different arms of the study?
So if I'm reading this correctly, the people in the APT/SMC arms who bought into the CBT hype and tried it post-study got worse during the follow-up period. Maybe they were just bummed that the magical therapy didn't actually work despite the hype, and that disappointment/anger/frustration was reflected in how they answered the questionnaires.
They certainly ended up with worse scores. But we can't say from this data that they got worse during the CBT/GET follow-up stage - actually they improved on both fatigue and PF (as did everyone else) during this period.
Oh, right.
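For anyone who wants to redo the plot described above from the supplementary spreadsheet, here is a minimal sketch of that kind of analysis. This is not the original poster's code, and the file name and column names (total_sessions, cfq_fatigue, sf36_pf) are assumptions standing in for whatever the supplementary tables actually use.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of the supplementary data; file and column names are invented.
df = pd.read_csv("pace_followup_supplementary.csv")

# Mean outcome score at each total-session count.
means = df.groupby("total_sessions")[["cfq_fatigue", "sf36_pf"]].mean()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(means.index, means["cfq_fatigue"], marker="o")
ax1.set_xlabel("Total CBT/GET sessions")
ax1.set_ylabel("CFQ fatigue (lower = better)")

ax2.plot(means.index, means["sf36_pf"], marker="o")
ax2.set_xlabel("Total CBT/GET sessions")
ax2.set_ylabel("SF-36 physical functioning (higher = better)")

fig.tight_layout()
plt.show()
```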
For future records: I think there should be some focus on the table in the Appendix that actually looked at the effects of CBT and GET after treatment.
[Attached image: the Appendix table of post-trial CBT/GET sessions and outcome change scores]
The authors suggest that it is the CBT and GET received after APT and specialist medical care alone that explain why the differences between the groups disappeared. However, the table doesn't bear this out. Indeed, those who had 10 or more sessions of CBT or GET tended to have the lowest improvements of the three groups: (i) 10+ sessions of CBT or GET post-trial; (ii) 1-9 sessions of CBT or GET; and (iii) no sessions of CBT or GET post-trial.
(We're looking at the first two columns)
(For the Chalder Fatigue Questionnaire (CFQ), the lower the score the better the result. For the SF-36 physical functioning subscale, it's the opposite: the higher the score, the better. The main thing of interest is the change scores, i.e. the mean differences.)
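If anyone wants to check the change scores themselves rather than reading them off the attached table, a minimal sketch along these lines would do it. Again, the file and column names (post_trial_sessions, cfq_followup, etc.) are assumptions, not the actual variable names in the supplementary data.

```python
import pandas as pd

# Hypothetical data file and column names, as in the earlier sketch.
df = pd.read_csv("pace_followup_supplementary.csv")

def session_group(n):
    # Bands matching the three groups in the Appendix table.
    if n >= 10:
        return "10+ sessions"
    if n >= 1:
        return "1-9 sessions"
    return "no sessions"

df["group"] = df["post_trial_sessions"].apply(session_group)

# Change score = follow-up minus end-of-trial, so a negative CFQ change and a
# positive SF-36 PF change both indicate improvement.
df["cfq_change"] = df["cfq_followup"] - df["cfq_end_of_trial"]
df["pf_change"] = df["sf36_pf_followup"] - df["sf36_pf_end_of_trial"]

print(df.groupby("group")[["cfq_change", "pf_change"]].mean().round(2))
```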
The Today programme one is on at http://www.bbc.co.uk/programmes/p036fnz0 but not sure how long it will stay.
Somebody sent me a recording of it. In case anyone wants to store it for the future, it's here:
I put in a complaint about lack of balance, quoting Coyne, the IoM and the MEA report on harms.
Vanessa Feltz on Radio 2 was a touch better. Up against a Dr. Mike Smith (?), she did at least put forward the idea that it was cruel to push patients through exercises that made them worse, but he was having none of it, talking up the great "expertise" of the therapists (as if they aren't just psychiatric nurses armed with a handbook and a nice line in teeth-sucking) and making damn sure we don't fall into self-destructive patterns or some such.
Somebody sent me a recording of this. It's available here:
It is easy to forget at times that this is not about evidence-led policy-making, but policy-led evidence-making.
That is an excellent quote!
Sadly I cannot claim that it is original. I have forgotten who deserves the credit and the original context. It was a British journalist about 10 years ago.
Wasn't it Laurie Taylor, of "Thinking Allowed" fame?
PACE - Thoughts about Holes
http://keithsneuroblog.blogspot.co.uk/2015/11/pace-thoughts-about-holes.html
Uninterpretable: Fatal flaws in PACE Chronic Fatigue Syndrome follow-up study
by James C. Coyne (influential research psychologist)
http://blogs.plos.org/mindthebrain/...ace-chronic-fatigue-syndrome-follow-up-study/