• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


PACE Trial and PACE Trial Protocol

Bob

Senior Member
Messages
16,455
Location
England (south coast)
You're right that CBT/GET basically helped a net 16% extra patients over and above SMC, which is not very impressive. However, I'm not sure the researchers were bound to report this figure as a difference, as opposed to reporting figures for both CBT and SMC. I don't know enough about stats to know what is strictly 'correct'. And it looks like you're quoting the CGI figures for patients who say they are much/very much improved.

But basically when Peter White said in the media that CBT helped 6/10 patients (using their slightly odd definition of 'improved', rather than CGI scores) he was missing out the crucial info that in this case 4.5/10 patients were helped by nothing at all (SMC).

Yes, I'm talking about the CGI figures, where patients reported a significant improvement.
(There are no separate figures for a 'little better'... So I'm assuming that the authors considered these results to be an insignificant change, which is why they haven't included the figures separately, but they are combined with 'no change' and a 'little worse'.)

I'm not really commenting on what the researchers were bound to report...
But they appear to me to have spun the figures because they haven't reported them as a comparison to the control group... So I think it's important for us to pick up on this.

I haven't looked at the other figures yet...
But from what you have said, it looks like only 15% of patients (6/10 - 4.5/10 = 1.5/10 = 15%) were helped by CBT or GET over and above the control group, using the other methods of measurement?
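The arithmetic in Bob's estimate above can be sketched in a few lines, along with the standard "number needed to treat" (NNT) that follows from it. The 6/10 and 4.5/10 proportions are the ones quoted in this thread, not recomputed from the paper, and NNT is a standard epidemiological measure added here for illustration:

```python
# Net benefit of a therapy over the control arm, using the improvement
# proportions quoted in this thread (not recomputed from the paper).
cbt_improved = 0.60   # ~6 in 10 reported improvement with CBT + SMC
smc_improved = 0.45   # ~4.5 in 10 reported improvement with SMC alone

net_benefit = cbt_improved - smc_improved   # absolute risk difference
nnt = 1 / net_benefit                       # number needed to treat

print(f"net benefit: {net_benefit:.0%}")    # 15%
print(f"NNT: about {nnt:.0f} patients treated per extra improver")
```

On these figures, roughly seven patients would need to be treated for one extra patient to report improvement beyond what SMC alone achieves.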
 

anciendaze

Senior Member
Messages
1,841
Cost per meter?

Since we can't develop a cost per cure from a trial that apparently failed to result in any cures which might move someone from disabled to working, what about the published objective results (with actigraph measures eliminated)?

Could some of our experts on the intricacies of British medical accounting boil these results down into a cost per patient-meter after 52 weeks of therapy?

One possible reason this has not already been done might be because they are holding dramatic results in reserve, based on self-assessment score differences between employed and unemployed therapists.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
A couple of years ago, I attended a group session of GET provided by my local NHS ME service. I didn't find it helpful, and now it turns out that, according to the results of the PACE trial, 84% of the attendees would not have benefited from it. And the 16% of patients who did benefit would only have seen very minimal changes, according to the study results. How can the NHS justify using CBT/GET if, in every group GET session of 20 people, only about 3 patients will see any benefit?
 
Messages
13,774
Most of the cost is borne by the patients who are organising their lives around graded exercise. NHS accountants aren't going to be bothered by that.
 

oceanblue

Guest
Messages
1,383
Location
UK
Yes, I'm talking about the CGI figures, where patients reported a significant improvement.
(There are no separate figures for a 'little better'... So I'm assuming that the authors considered these results to be an insignificant change, which is why they haven't included the figures separately, but they are combined with 'no change' and a 'little worse'.)
correct

I'm not really commenting on what the researchers were bound to report...
But they appear to me to have spun the figures because they haven't reported them as a comparison to the control group... So I think it's important for us to pick up on this.

I've taken another look at this in the paper, and I see what you mean. However, I think the reason the researchers didn't quote differences is this:

With PF or Fatigue scores you can compare, say, the mean PF score of CBT with the mean PF score for SMC. You are comparing the difference of means.

However, when you're looking at the proportion of a group that meets a particular threshold, e.g. 61% of the GET group have 'improved', I don't think it's statistically correct to measure the difference between groups. I think you can say, e.g., that the GET group improved more than the SMC group and quote a p value for this, but you can't be precise about the size of the difference. So while a 'net increase of 15%' is probably a good indication of the size of the difference, I don't think it's statistically robust. So if this is right, the authors are probably reporting the data in the right way (even if they've changed the definition of 'improved' from the protocol).

However, I'm happy to be corrected on this if anyone knows better (Dolphin?).
 

oceanblue

Guest
Messages
1,383
Location
UK
A couple of years ago, I attended a group session of GET provided by my local NHS ME service. I didn't find it helpful, and now it turns out that, according to the results of the PACE trial, 84% of the attendees would not have benefited from it. And the 16% of patients who did benefit would only have seen very minimal changes, according to the study results. How can the NHS justify using CBT/GET if, in every group GET session of 20 people, only about 3 patients will see any benefit?

It did occur to me that publication of the trial was delayed until after NHS budget cuts had been finalised for the next financial year (2011/12)...
 
Messages
5,238
Location
Sofa, UK
Well-written responses summarising all the key bullet points in short, simple, non-technical language, with detail left to the references, are going to be sorely needed. The study IS an opportunity to expose the manipulation and spin, even though the problem remains that the control of the media removes our ability to put the information in front of the general public.

But we are all too close to it. We always tend to fail to effectively summarise the truth of the matter in language that ordinary people can easily understand. This is not our fault. We have to dive into a maze of complexity and clever techniques of deception in order to work out how they pulled off each trick. To then untangle that deception and lay it bare in ways that everyone can understand is a horribly difficult task even for healthy people. But I think that achieving this is a big part of the way out of the maze we're trapped in.

Top of my head, bullet points, needing fleshing out by those with better understanding of all the technicalities...

Who were they studying? Not us!

1. The patients for this study were recruited from "CFS specialist treatment centres" which are shunned by most ME/CFS patients, and the study itself confirms just how ineffective those treatment centres are - even less effective than CBT and GET.

2. Of 3000 patients referred to the study by those treatment centres, around 75% were rejected because they were found to be sick - with other medical conditions, and with the immune and neurological symptoms that characterise the real disease. Yet those patients were diagnosed with ME/CFS and were being treated as such at the treatment centres - so the study's results themselves only apply to at most 25% of those diagnosed with ME/CFS in the UK.

3. The study's authors used their own definition of CFS, which explicitly excludes patients with the symptoms of ME/CFS, and broadens the definition so as to include many patients suffering from depression.

The study redefined success after it failed miserably to meet its original goals.

4. The authors moved the goalposts throughout the study, removing from the study the only objective physical measurements of the patients' activity levels after originally stating that such measurements would be used, and redefining 'success', 'effectiveness' and 'recovery' to fit the results they obtained.

5. The study claimed to compare 'pacing' with CBT and GET, but again the study redefined 'pacing' to mean something completely different - the study's "APT" is NOT the same as what advocates of pacing understand by the term - and the authors then used the failure of their redefined version to suggest that 'pacing' doesn't work.

The benefits reported for CBT and GET were tiny and probably represent 'wishful thinking'

6. Despite all these manipulations, none of the combinations of talk and exercise therapies studied delivered more than a 9% (?) improvement after 2(?) years of therapy, as reported by the patients. The study itself described this achievement as 'moderately effective', and accepted that the therapies did not deliver a cure.

7. Even the very small improvements claimed by the study are questionable. These assessments were measured only using questionnaires completed by the participants, and previous comparable evidence indicates that patients in this situation are more likely to say their activity levels have increased even though the objective physical measures show that the therapy had not actually increased those activity levels.

I'm sure there are a few more headlines I've missed - and lots more work ahead of us all, I'm afraid - but what I mostly want to emphasise is the need for short, bullet-point statements of the key points, in non-technical language, but with references and with rigorous accuracy and fairness. The above is just a short first draft of the points that need to be covered.

There's no need to exaggerate, attempt to score points, rant about how outrageous it is, or present the truth in the most favourable light. The facts speak for themselves. All we need to do is lay those facts out in an easily-digestible form for people who don't have the necessary medical or scientific training, with the full technical detail available for those who want to follow it up.

And then we need to figure out how on earth to get the truth in front of people, in this brave new world where the press listens only to Wessely and his mates. We could and should try to get a British newspaper to write a news story...but I'll believe that's possible only when I see it happen...
 

Dolphin

Senior Member
Messages
17,567
Dolphin, I think I love you! :D

Sorry, getting carried away here because there are some gems in that paper that expose the flaws in the PACE choice of thresholds.

This study is based on completed questionnaires from 9,332 people of working age in central England.

The survey gives SF-36 PF scores for the whole sample, for people who reported a longstanding illness and, crucially, people who did not report a longstanding illness. This last group might be the best estimation of 'healthy'. Here are the mean SF-36 scores with SDs in brackets and the threshold that would result from using the PACE formula of "mean minus 1 SD":

'Healthy': 92.5 (13.4) = 79.1
'Chronically ill': 78.3 (23.2) = 55.1
'Population*': 89 (16) = 73
*oh damn, they don't seem to give this separately, this is my guesstimation from looking at the data they do give

Since the PF scale only scores in 5 point intervals (e.g. 60,65,70) these translate as PF threshold scores as:
Healthy = 80, population = 70 or 75. PACE used 60.

slightly more complex point
As a bonus, they provided SF-36 PF scores for people who had consulted a doctor in the 2 weeks prior to completing the questionnaire. This is a pretty close approximation of the 'GP attenders' used to establish norm data for the fatigue scale. The scores are
81.6 (23) = 58.6

This shows that not only are the GP attenders substantially less well than 'healthy' people (81.6 vs 89-ish), they also have a much bigger SD, which has the effect of lowering the threshold even further (60 vs 70 or 75). Obviously this is for PF scores not fatigue scores, but it does illustrate how GP attenders differ from the normal population.
That's great, Oceanblue, hadn't had time to look at it closely.

Of course in the original paper they referred to normative data which, even though it wasn't restricted to healthy people, wouldn't get them down to 60. They really shouldn't be allowed to change the goalposts like this.
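For what it's worth, the "mean minus 1 SD" thresholds quoted in oceanblue's post above can be reproduced in a few lines. The means and SDs are the ones given in this thread (including the admitted population guesstimate), not re-extracted from the survey paper:

```python
# Reproduce the "mean minus 1 SD" threshold arithmetic quoted above.
# Figures are the ones given in this thread, including the poster's
# admitted guesstimate for the whole-population row.
def pf_threshold(mean, sd):
    """Mean minus 1 SD, snapped to the SF-36 PF scale's 5-point steps."""
    return 5 * round((mean - sd) / 5)

groups = {
    "healthy (no longstanding illness)": (92.5, 13.4),
    "chronically ill": (78.3, 23.2),
    "population (guesstimate)": (89.0, 16.0),
    "recent GP attenders": (81.6, 23.0),
}

for name, (mean, sd) in groups.items():
    raw = mean - sd
    # 73 sits between two steps, hence "70 or 75" in the post; round() picks 75.
    print(f"{name}: {mean} - {sd} = {raw:.1f} -> threshold {pf_threshold(mean, sd)}")
```

The healthy-sample threshold lands at 80 and even the GP-attender sample only gets down to 60, which is the point being made: PACE's chosen threshold of 60 sits at the level of a visibly unwell comparison group.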
 

Dolphin

Senior Member
Messages
17,567
Most of the cost is borne by the patients who are organising their lives around graded exercise. NHS accountants aren't going to be bothered by that.
Yes, this is a good point.
Hopefully you will eventually see a point like this in a published article. ;)
 

Dolphin

Senior Member
Messages
17,567
The question is, how do we use the evidence in PACE to turn things around? Currently I'm not sure, though letter to the Lancet will be a start. So I still think it's a big opportunity but i'm not at all sure how we make the most of it.
Ellen Goudsmit made this specific suggestion regarding the Lancet on the MEA FB page:
Has anyone not considered contacting the Lancet Ombudsman about the continued bias re CFS?
She wrote this article before:
Editorial bias in the Lancet http://freespace.virgin.net/david.axford/lanbias1.htm

I won't be taking that one on as too busy.

I'm not sure what the overall strategy should be. But getting some letters in the Lancet should do no harm. Other people may then pick up on the issues, including possibly medical establishment people. And people in the future may come across the information.
 

oceanblue

Guest
Messages
1,383
Location
UK
Of course in the original paper they referred to normative data which, even though it wasn't restricted to healthy people, wouldn't get them down to 60. They really shouldn't be allowed to change the goalposts like this.

I don't quite understand this. Do you mean in the protocol they refer to means (rather than mean - 1 SD)? I'm probably being a bit slow here.

Also, while I totally agree they shouldn't be allowed to move the goalposts, I'm trying to find out if the new goalposts are in the wrong place, even by their new definitions.
 

Dolphin

Senior Member
Messages
17,567
I don't quite understand this. Do you mean in the protocol they refer to means (rather than mean - 1 SD)? I'm probably being a bit slow here.

Also, while I totally agree they shouldn't be allowed to move the goalposts, I'm trying to find out if the new goalposts are in the wrong place, even by their new definitions.
Yes, I mean means - 1 SD.

From protocol paper
We will count a score of 75 (out of a maximum of 100) or more, or a 50% increase from baseline in SF-36 sub-scale score as a positive outcome. A score of 70 is about one standard deviation below the mean score (about 85, depending on the study) for the UK adult population [51,52].

50. Ridsdale L, Darbishire L, Seed PT: Is graded exercise better than cognitive behaviour therapy for fatigue? A UK randomised trial in primary care. Psychol Med 2004, 34:37-49.

51. Jenkinson C, Coulter A, Wright L: Short form 36 (SF-36) Health Survey questionnaire: normative data from a large random sample of working age adults. BMJ 1993, 306:1437-1440.
Actually the one you took the data from is the one they referenced in their original paper (51). This makes it much stronger if it is quoted.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I've taken another look at this in the paper, and I see what you mean. However, I think the reason the researchers didn't quote differences is this:

With PF or Fatigue scores you can compare, say, the mean PF score of CBT with the mean PF score for SMC. You are comparing the difference of means.

However, when you're looking at the proportion of a group that meet a particular threshold, eg 61% of GET group have 'improved', i don't think it's statistically correct to measure the difference between groups. I think you can say eg the GET group improved more than the SMC group and quote a p value for this, but can't be precise about the size of difference. So while a 'net increase of 15%' is probably a good indication of the size of difference, I don't think it's statistically robust. So if this is right, the authors are probably reporting the data in the right way (even if they've changed the definition of 'improved' from the protocol).

However, I'm happy to be corrected on this if anyone knows better (Dolphin?).

ocean, I think you might be making things over-complex here...
It would have been easy for the authors to state that 16% of people benefited from GET when compared to the SMC control group.
I'm absolutely certain that they would have made the comparison if it was something that would work in their favour.
 

anciendaze

Senior Member
Messages
1,841
cost savings?

Since no one has taken my challenge to compute costs/benefits in terms of cost per patient-meter, I'll try another tack. This may have some relevance to current health-care funding debates.

My rough estimate is that this study cost about as much per patient for one year as actually paying disability. As I read it, there is no claim that even one patient was either cured or moved off disability.

The next level of analysis would focus on future costs. I don't yet see anything to suggest patients will need less therapy or medication following this year of treatment. That will require a follow up.

Basing measured improvements on subjective assessments of disturbed people looks like a great way to fall victim to the budgeting axe. I thought this bunch was at least politically savvy. Am I missing a critical part of their argument for funding treatment based on these results?
 

Dolphin

Senior Member
Messages
17,567
Since we can't develop a cost per cure from a trial that apparently failed to result in any cures which might move someone from disabled to working, what about the published objective results (with actigraph measures eliminated)?

Could some of our experts on the intricacies of British medical accounting boil these results down into a cost per patient-meter after 52 weeks of therapy?

One possible reason this has not already been done might be because they are holding dramatic results in reserve, based on self-assessment score differences between employed and unemployed therapists.
:D
 

Dolphin

Senior Member
Messages
17,567
I've taken another look at this in the paper, and I see what you mean. However, I think the reason the researchers didn't quote differences is this:

With PF or Fatigue scores you can compare, say, the mean PF score of CBT with the mean PF score for SMC. You are comparing the difference of means.

However, when you're looking at the proportion of a group that meets a particular threshold, e.g. 61% of the GET group have 'improved', I don't think it's statistically correct to measure the difference between groups. I think you can say, e.g., that the GET group improved more than the SMC group and quote a p value for this, but you can't be precise about the size of the difference. So while a 'net increase of 15%' is probably a good indication of the size of the difference, I don't think it's statistically robust. So if this is right, the authors are probably reporting the data in the right way (even if they've changed the definition of 'improved' from the protocol).

However, I'm happy to be corrected on this if anyone knows better (Dolphin?).
This is what is called categorical data. One can use tests like odds ratios (or relative risks), chi-squared tests, etc. on such data.

ETA: Indeed, in Table 5, one can see that they calculated odds ratios and there was a significant difference.
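Dolphin's point about categorical data can be illustrated with a minimal 2x2 calculation. The counts below are hypothetical round numbers in the spirit of the thread (roughly 61% vs 45% improved, in arms of 100), not the actual PACE table:

```python
# Illustrative odds-ratio and chi-squared calculation for categorical
# ("improved" vs "not improved") trial data. Counts are hypothetical,
# chosen to echo the ~61% vs ~45% proportions discussed in this thread.
def odds_ratio(a, b, c, d):
    """a/b = improved/not improved (treatment); c/d = same (control)."""
    return (a / b) / (c / d)

def chi_squared(a, b, c, d):
    """Pearson chi-squared for a 2x2 table, no continuity correction."""
    n = a + b + c + d
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 61 of 100 improved in the treatment arm, 45 of 100 in the control arm
print(odds_ratio(61, 39, 45, 55))   # > 1 means better odds in the treatment arm
print(chi_squared(61, 39, 45, 55))  # compare to 3.84, the 1-df critical value at p = 0.05
```

With these made-up counts the statistic exceeds 3.84, so the difference between arms would be significant at the 5% level even though, as discussed above, the odds ratio says nothing precise about the size of the net benefit.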
 

Dolphin

Senior Member
Messages
17,567
Well-written responses summarising all the key bullet points in short, simple, non-technical language, with detail left to the references, are going to be sorely needed. The study IS an opportunity to expose the manipulation and spin, even though the problem remains that the control of the media removes our ability to put the information in front of the general public.

But we are all too close to it. We always tend to fail to effectively summarise the truth of the matter in language that ordinary people can easily understand. This is not our fault. We have to dive into a maze of complexity and clever techniques of deception in order to work out how they pulled off each trick. To then untangle that deception and lay it bare in ways that everyone can understand is a horribly difficult task even for healthy people. But I think that achieving this is a big part of the way out of the maze we're trapped in.

Top of my head, bullet points, needing fleshing out by those with better understanding of all the technicalities...

Who were they studying? Not us!

1. The patients for this study were recruited from "CFS specialist treatment centres" which are shunned by most ME/CFS patients, and the study itself confirms just how ineffective those treatment centres are - even less effective than CBT and GET.

2. Of 3000 patients referred to the study by those treatment centres, around 75% were rejected because they were found to be sick - with other medical conditions, and with the immune and neurological symptoms that characterise the real disease. Yet those patients were diagnosed with ME/CFS and were being treated as such at the treatment centres - so the study's results themselves only apply to at most 25% of those diagnosed with ME/CFS in the UK.

3. The study's authors used their own definition of CFS, which explicitly excludes patients with the symptoms of ME/CFS, and broadens the definition so as to include many patients suffering from depression.

The study redefined success after it failed miserably to meet its original goals.

4. The authors moved the goalposts throughout the study, removing from the study the only objective physical measurements of the patients' activity levels after originally stating that such measurements would be used, and redefining 'success', 'effectiveness' and 'recovery' to fit the results they obtained.

5. The study claimed to compare 'pacing' with CBT and GET, but again the study redefined 'pacing' to mean something completely different - the study's "APT" is NOT the same as what advocates of pacing understand by the term - and the authors then used the failure of their redefined version to suggest that 'pacing' doesn't work.

The benefits reported for CBT and GET were tiny and probably represent 'wishful thinking'

6. Despite all these manipulations, none of the combinations of talk and exercise therapies studied delivered more than a 9% (?) improvement after 2(?) years of therapy, as reported by the patients. The study itself described this achievement as 'moderately effective', and accepted that the therapies did not deliver a cure.

7. Even the very small improvements claimed by the study are questionable. These assessments were measured only using questionnaires completed by the participants, and previous comparable evidence indicates that patients in this situation are more likely to say their activity levels have increased even though the objective physical measures show that the therapy had not actually increased those activity levels.

I'm sure there are a few more headlines I've missed - and lots more work ahead of us all, I'm afraid - but what I mostly want to emphasise is the need for short, bullet-point statements of the key points, in non-technical language, but with references and with rigorous accuracy and fairness. The above is just a short first draft of the points that need to be covered.

There's no need to exaggerate, attempt to score points, rant about how outrageous it is, or present the truth in the most favourable light. The facts speak for themselves. All we need to do is lay those facts out in an easily-digestible form for people who don't have the necessary medical or scientific training, with the full technical detail available for those who want to follow it up.

And then we need to figure out how on earth to get the truth in front of people, in this brave new world where the press listens only to Wessely and his mates. We could and should try to get a British newspaper to write a news story...but I'll believe that's possible only when I see it happen...
Hi Mark, Lots of good points there.

I agree that this thread could be a long read for somebody, and people might give up reading before actually turning what they read into any action, e.g. writing a letter. Although I don't think it should happen with every paper, it could be argued that a separate thread could be set up with the nuggets from this thread. Perhaps somebody could go through it and either try to summarise it as you did, or simply copy and paste the interesting points. I remember this was done on a thread about the CAA where somebody collated observations about exercise.

Anyway, I don't want to land work on anybody.

I do hope that these discussions will lead to letters.
But I think we should be allowed to have a discussion that flows naturally enough, as it has been up to now, with on-topic points. Of course, occasionally on other lists I might be asked questions by people who haven't read the paper; if that was to happen and we had to try to answer questions from people who hadn't read the paper, that would be a lot of work and add a lot of posts. But if people have read the paper and are still a bit stuck, questions might be ok. I would encourage people to read the protocol as well to get an understanding of it: http://www.biomedcentral.com/1471-2377/7/6
and if they want to ask about questionnaires, to download this file and look at the back for the questionnaires: https://www.yousendit.com/download/T2pGd0VBaFI4NVh2Wmc9PQ
 

SilverbladeTE

Senior Member
Messages
3,043
Location
Somewhere near Glasgow, Scotland
Since we can't develop a cost per cure from a trial that apparently failed to result in any cures which might move someone from disabled to working, what about the published objective results (with actigraph measures eliminated)?

Could some of our experts on the intricacies of British medical accounting boil these results down into a cost per patient-meter after 52 weeks of therapy?

One possible reason this has not already been done might be because they are holding dramatic results in reserve, based on self-assessment score differences between employed and unemployed therapists.

1) Most of our politicians haven't the slightest clue about science, and many have never had a normal job either (i.e. one that requires actual brain work and responsibility), lol

2) Psychs...know how people tick, how to put things in just the right way, the spin, to make it all sound good!
you'd be bloody surprised how easy it is to convince many people that crap is gold, if you spin it the right way....
 

Dolphin

Senior Member
Messages
17,567
Since no one has taken my challenge to compute costs/benefits in terms of cost per patient-meter, I'll try another tack. This may have some relevance to current health-care funding debates.

My rough estimate is that this study cost about as much per patient for one year as actually paying disability. As I read it, there is no claim that even one patient was either cured or moved off disability.

The next level of analysis would focus on future costs. I don't yet see anything to suggest patients will need less therapy or medication following this year of treatment. That will require a follow up.

Basing measured improvements on subjective assessments of disturbed people looks like a great way to fall victim to the budgeting axe. I thought this bunch was at least politically savvy. Am I missing a critical part of their argument for funding treatment based on these results?
They do say:
We plan to report relative cost-effectiveness of the treatments, their moderators and mediators, whether subgroups respond differently, and long-term follow-up in future publications.
We will see how this happens.

In the past, one has documents like the NHS Plus Guidelines (for employers, occupational physicians, etc.) http://www.nhsplus.nhs.uk/providers/images/library/files/guidelines/CFS_guideline.pdf .
Trudie Chalder was involved in that and Peter White and Michael Sharpe were the two external assessors.

Its top key finding was:
Cognitive behavioural therapy and graded exercise therapy have been shown to be effective in restoring the ability to work in those who are currently absent from work.
I don't think I ever read the Ross review but looking at the papers, it looks like this was claimed based on improvements in the physical functioning scale.

Then in this study, they say that physical functioning and fatigue are in the normal range for 30% of the people following GET and CBT so they may claim this again as evidence about restoring "the ability to work". This is smoke and mirrors stuff of course.

It is quite common in the UK for pressure to be put on people with ME/CFS to do GET and/or CBT based on GET before they will be approved for a disability payment. I have seen indirect evidence that Peter White for one is involved in this.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Then in this study, they say that physical functioning and fatigue are in the normal range for 30% of the people following GET and CBT so they may claim this again as evidence about restoring "the ability to work".

Well if they expect patients with scores of 50-60 to work, then logically, they should also abolish the aged pension. ;)