• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


PACE Trial follow-up: Here's the table looking at the effects of having CBT or GET after 52 weeks

Dolphin

Senior Member
Messages
17,567
I think there should be some focus on the table in the Appendix that actually looked at the effects of CBT and GET after treatment, so I thought I would give it its own thread:

Sharpe 2015 Table 3 Appendix.png


The authors suggest that it is the CBT and GET received after APT and specialist medical care alone that explains why the differences between the groups disappeared. However, the table doesn't bear this out.

Indeed, those who had 10 or more sessions of CBT or GET tended to have the lowest improvements of the three groups: (i) 10+ sessions of CBT or GET post-trial; (ii) 1-9 sessions of CBT or GET; and (iii) no sessions of CBT or GET post-trial. (We're looking at the first two columns.)

(For the Chalder Fatigue Questionnaire (CFQ), the lower the score, the better the result. For the SF-36 physical functioning subscale, it's the opposite: the higher the score, the better. The main thing of interest is the change scores, i.e. the mean differences.)
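In case it helps later readers, a minimal sketch (with made-up numbers, not PACE data) of how a change score works on the two scales:

```python
# Hypothetical scores for three patients; not PACE data.
# CFQ: lower is better, so a negative change = improvement.
# SF-36 PF: higher is better, so a positive change = improvement.

def mean_change(baseline, follow_up):
    """Mean of the per-patient (follow-up - baseline) differences."""
    diffs = [f - b for b, f in zip(baseline, follow_up)]
    return sum(diffs) / len(diffs)

cfq_baseline,  cfq_followup  = [28, 25, 30], [24, 23, 26]
sf36_baseline, sf36_followup = [40, 35, 50], [45, 50, 55]

print(mean_change(cfq_baseline, cfq_followup))    # negative = less fatigue
print(mean_change(sf36_baseline, sf36_followup))  # positive = better function
```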
 
Messages
15,786
Also of interest in that table is that you can calculate the mean SF36-PF scores. CBT and GET patients had a score of 61.4 on average at the 2.5 year followup. That's roughly equivalent to the score of a 75 year old.

"Sustained recovery," my ass :rofl:
 
Messages
86
Location
East of England
I think there should be some focus on the table in the Appendix that actually looked at the effects of CBT and GET after treatment, so I thought I would give it its own thread:

View attachment 13280

The authors suggest that it is the CBT and GET received after APT and specialist medical care alone that explains why the differences between the groups disappeared. However, the table doesn't bear this out.

Indeed, those who had 10 or more sessions of CBT or GET tended to have the lowest improvements of the three groups: (i) 10+ sessions of CBT or GET post-trial; (ii) 1-9 sessions of CBT or GET; and (iii) no sessions of CBT or GET post-trial. (We're looking at the first two columns.)

(For the Chalder Fatigue Questionnaire (CFQ), the lower the score, the better the result. For the SF-36 physical functioning subscale, it's the opposite: the higher the score, the better. The main thing of interest is the change scores, i.e. the mean differences.)

Thanks for posting this table. Struggling a bit to understand it - I have a couple of questions. First, this quote:

Dr Kimberley Goldsmith from the Institute of Psychiatry, Psychology & Neuroscience at King’s College London said: 'We found that participants who had originally been given SMC or APT appeared to be doing as well as those who had CBT or GET in the longer term. However, as many had received CBT or GET after the trial, it does not tell us that these treatments have as good a long term outcome as CBT and GET.'

Am I right in thinking that the second sentence is actually inaccurate, as they have measured the outcomes of those in the SMC and APT groups who then went on to have CBT/GET (the top two and the middle two rows of the table) compared to those who didn't? And the improvements were worse for the 10+ sessions of CBT/GET.

Sorry if this is a daft question, but is the mean difference the average (i.e. all added up and divided by the number of participants, as opposed to, say, the median) difference in the participants' SF36 and CFQ scores at the end of the trial and then at the end of the 52 week review?

And the researchers are saying that there was more improvement in the CBT/GET groups than in SMC and APT by the end of the trial, but by the end of the 30 months this improvement had gone, and this is because the SMC and APT groups went on to have sessions of CBT/GET? You are saying that those who had 10+ sessions of CBT/GET (deemed by the researchers to be adequate treatment) actually had lower improvement levels than those with 1-9 sessions or no CBT/GET. The 1-9 sessions improvement level is a bit confusing though, as it is higher (if I have read the tables correctly) than that for both 10+ sessions and no sessions.

I'm struggling with interpreting the mean difference scores in the context that you mention. In the second row of the table (SF36, 10+ sessions of CBT/GET) the mean differences are:
APT 5.4
SMC 5.2
CBT 14.3
GET 2.8

Does that mean that in this group (10+ sessions of CBT/GET) the most improvement was in the CBT arm, and the least in the GET arm, with APT and SMC in the middle?

And for the CFQ, 10+ sessions of CBT/GET, the most improvement was CBT (-6.4), then APT, then GET, then SMC? Or have I got it the wrong way round?

Then looking down the table, for the SMC and APT groups the mean differences for SF36 (and CFQ) were:
10+ sessions of CBT/GET: 5.4 and 5.2 (CFQ -2.7 & -3.6)
1-9 sessions of CBT/GET: 11.6 and 11.3 (CFQ -5.4 & -1.6)
No post-trial CBT/GET: 5.8 and 8.5 (CFQ -3.9 & -3.5)

Does this mean that the most improvement was in the 1-9 sessions of CBT/GET group on the SF36 scale, and that on the CFQ this group had the most improvement (-5.4 for SMC)?

Thanks but no worries if you don't want to plough through all those questions :eek:
 
Messages
86
Location
East of England
Also of interest in that table is that you can calculate the mean SF36-PF scores. CBT and GET patients had a score of 61.4 on average at the 2.5 year followup. That's roughly equivalent to the score of a 75 year old.

"Sustained recovery," my ass :rofl:

Is that the average across the 3 groups?
Really brings it home when you compare with the expected PF scores.
But wasn't this one of the things they changed along the way, initially comparing with the working-age population and then with the whole population?
Do you have a link to SF-36 PF scores relative to age? Would be handy to have a look at.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Do you have a link to SF-36 PF scores relative to age? Would be handy to have a look at.
I don't have a link handy, but when I made the comparison some years ago the recovery threshold was set at the capacity of an 80 year old. If that's recovery, I don't want it. Try telling that to a twenty-year-old former athlete who managed to scrape a score of 60. Mind you, that is not a fit, energetic 80 year old. That is an 80 year old with all the commonly associated health problems, more than likely to die in the next few years.
 
Messages
15,786
Is that the average across the 3 groups?
Yup. The 244 original CBT/GET patients who had 10+, 1-9, and 0 additional sessions have a mean SF36-PF score of 61.4 (14,987.6 / 244). I didn't bother calculating CFQ scores, because it's a useless scale which no one else uses.
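As a sanity check on that arithmetic: the 14,987.6 total and n = 244 come from the table as described; the helper is just a generic weighted mean, and the subgroup sizes and means passed to it below are made up for illustration, not the PACE numbers.

```python
def pooled_mean(ns, means):
    """Weighted mean of subgroup means, weighted by subgroup size."""
    return sum(n * m for n, m in zip(ns, means)) / sum(ns)

# Sum of (subgroup n x subgroup mean) across the three post-trial
# subgroups, and total n, as read off the table:
total_score, n = 14_987.6, 244
print(round(total_score / n, 1))  # 61.4

# Illustration with invented subgroup sizes/means that total 244:
print(round(pooled_mean([100, 80, 64], [60.0, 62.0, 63.0]), 1))
```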
But wasn't this one of the things they changed along the way, initially comparing with working age population and then with the whole population?
I think the issue was that they expanded the "recovered" range by including a standard deviation. So 85+ was recovered in the protocol, but they changed it to 60+ after the data started rolling in. But since SF36-PF scores are skewed, with nearly everyone in the general public in the 95-100 (maximum) range, a threshold built from standard deviations is meaningless and inappropriate for that data: a "normal range" of mean minus one SD only makes sense for roughly normally distributed data, which comes out looking like a bell curve when graphed.
Do you have a link to SF-36 PF scores relative to age? Would be handy to have a look at.
Table 3 of http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2654814/ which can be opened separately at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2654814/table/T3/ . It's based on data from the 1996 Health Survey for England.
 

Snookum96

Senior Member
Messages
290
Location
Ontario, Canada
Yeah, that's why they aren't claiming those results are statistically significant. Though they are playing dumb about having the data, by speculating that the data might show something which it clearly does not.

That's ridiculous. They know that most people don't really understand what statistically significant means. What kind of idiot publishes stats that aren't statistically significant?

Douchebags.
 
Messages
15,786
That's ridiculous. They know that most people don't really understand what statistically significant means? What kind of idiot publishes stats that aren't statistically significant.
It's normal (and good) to publish the insignificant data. But it's dishonest for them to pretend that the gains in the original APT and SMC patients at the 2.5 year followup might be due to them getting extra CBT or GET after the trial ended.

Their own data definitively proves that post-trial CBT and GET are not even potentially responsible for the long-term gains seen in the original APT and SMC patients.
 

wdb

Senior Member
Messages
1,392
Location
London
Could someone double check these graphs I made from the table? I feel like I must have made a mistake. But if I haven't, then it very much goes against the suggestion that the SMC/APT groups improved due to additional post-trial CBT/GET.


mean.png



mean-diff.png
 

A.B.

Senior Member
Messages
3,780
One of the points raised by Coyne is that the follow up data is uninterpretable because it's no longer gathered in a randomized clinical trial.

Coyne said:
Following completion of the treatment to which particular patients were randomly assigned, the PACE trial offered a complex negotiation between patient and trial physician about further treatment. This represents a thorough breakdown of the benefits of a controlled randomized trial for the evaluation of treatments. Any focus on the long-term effects of initial randomization is sacrificed by what could be substantial departures from that randomization. Any attempts at statistical corrections will fail.

For example, we can see patients who were assigned to SMC but didn't do CBT and GET had lower fatigue scores than those who did, but we don't know what this means because things are no longer randomized.
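Coyne's point can be made concrete with a toy simulation (all numbers invented): if sicker patients are more likely to opt for extra sessions once randomization is gone, the treated group looks worse at follow-up even when the treatment does nothing at all.

```python
import random

random.seed(0)

# Confounding by indication: severity drives BOTH the decision to seek
# extra sessions AND the outcome. The "treatment" itself has no effect.
patients = []
for _ in range(10_000):
    severity = random.random()                     # 0 = mild, 1 = severe
    sought_treatment = random.random() < severity  # sicker -> more uptake
    outcome = 100 - 60 * severity                  # driven by severity only
    patients.append((sought_treatment, outcome))

treated = [o for t, o in patients if t]
untreated = [o for t, o in patients if not t]
print(sum(treated) / len(treated))      # lower (worse) average score
print(sum(untreated) / len(untreated))  # higher average score
```

Without randomization, the raw comparison between the groups reflects who chose treatment, not what the treatment did, which is why the follow-up differences resist statistical correction.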
 
Messages
15,786
Could someone double check these graphs I made from the table, I feel like I must have made a mistake.
I printed out that page from the supplement and checked it against your graph values. It all looks accurate. I thought I saw problems a couple times, but checked again and it was correct ... looking at the data just makes anyone go a bit cross-eyed I think, especially since the order is reversed :p
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I don't remember a lot from stats courses but aren't the p-values for GET kinda high? I remember the minimum we would accept was 0.05 I think.
That is what is often accepted, but modern calculations show that, even without bias skewing the p values, a p value of 0.05 can correspond to a false-positive rate of around 30%.

If there is systemic and entrenched bias, deep bias, then it's possible to get really low p values, under 0.01. P values are an indicator, but cannot sort out highly biased studies, nor outright fraud.
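The "around 30%" figure comes from false-discovery-rate calculations of the kind David Colquhoun has published: if only a minority of the hypotheses being tested are true, a large share of the results reaching p = 0.05 are false positives. A sketch of the arithmetic, with illustrative prior probability and power:

```python
def false_discovery_rate(prior, power, alpha=0.05):
    """Fraction of 'significant' results that are false positives, given
    the prior probability that a tested hypothesis is true and the
    study's power to detect a real effect."""
    false_pos = alpha * (1 - prior)  # true nulls that cross the threshold
    true_pos = power * prior         # real effects that get detected
    return false_pos / (false_pos + true_pos)

# Illustrative numbers: 10% of tested hypotheses true, 80% power.
print(round(false_discovery_rate(prior=0.1, power=0.8), 2))  # 0.36
```

With a more optimistic prior (say half the hypotheses true) the rate drops to about 6%, which is why the figure depends so heavily on how plausible the hypotheses were to begin with.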
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
Ha, @wdb, great minds think alike! But I've done it slightly differently. Please let me know if anyone spots any errors.

I thought this might be easier to interpret with just the key figures of mean difference for the SMC & APT patients during follow-up, by number of sessions of CBT/GET they had:

upload_2015-10-30_11-51-43.png


upload_2015-10-30_11-51-13.png


added: obviously 1-9 sessions is best :); ok, it's just noise, and treatment or not made no difference.
 


Snookum96

Senior Member
Messages
290
Location
Ontario, Canada
It's normal (and good) to publish the insignificant data. But it's dishonest for them to pretend that the gains in the original APT and SMC patients at the 2.5 year followup might be due to them getting extra CBT or GET after the trial ended.

Their own data definitively proves that post-trial CBT and GET are not even potentially responsible for the long-term gains seen in the original APT and SMC patients.

Makes sense. I would think insignificant data would be more of a note at the end of the study or a brief sentence, rather than the part that makes headlines.