• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

A cost-effectiveness analysis of the PACE trial

user9876

Senior Member
Messages
4,556
One indication of their failings

Receipt of benefits due to illness or disability increased slightly from baseline to follow-up

But the CBT group's benefit claims rose more slowly!

A surprising number of people were not on benefits at the start.
 

Dolphin

Senior Member
Messages
17,567
Yet another measure is introduced: that of EuroQol (EQ-5D).

http://www.euroqol.org/fileadmin/us..._International_Perspective_based_on_EQ-5D.pdf

Seems to give some explanation and population norms. As with other results, they quote a single figure for a multidimensional scale.
I have the longer PACE Trial protocol. It lists the 6 parts of the EuroQol EQ-5D: 5 of them are the five on page 13 of that link.
The sixth part is:

Compared with my general level of health over the past 12 months, my health today is

Better

Much the same

Worse

 

Wonko

Senior Member
Messages
1,467
Location
The other side.
So SMC is the most cost-effective and successful treatment regime, but if another option has to be added (as this paper seems to assume is the case) then CBT with SMC is the best? Although that only holds if you stop family members etc. from providing care, thereby removing the assumed cost of that care and skewing the figures.

At least it says GET isn't particularly effective, cost-wise or otherwise.

If I had the energy I could get quite annoyed by all this BS
 

Dolphin

Senior Member
Messages
17,567
With regard to the informal care costs, I think some, or possibly even all (who knows), of the difference may be due to the different instructions given to participants and their families.

It would be good if anyone who had the time could look up what was said in the manuals.

I believe the families of the CBT and GET participants were encouraged not to be as supportive. And, although I'm not as sure of this, I think CBT and GET participants may have been encouraged to look for less help. The houses of the CBT and GET participants could be less clean (for example), but this wouldn't show up in the figures.

The manuals are here: http://www.pacetrial.org/trialinfo/
 

Esther12

Senior Member
Messages
13,774
I'm not sure what I think of this paper as I'm not familiar with how they've calculated QALYs. That their patients have ended up more dependent on benefits from the state would seem to indicate that their therapies aren't a colossal success.

How can they say that any of their treatments are cost effective when they've got no control group (although SMC seems originally to have been intended as a control group, they're now acting as if the improvements in questionnaire scores there are the result of the expert care provided)? Are they just ignoring the tendency for subjective questionnaire scores to improve in RCTs regardless of treatment type? I wish they'd had a homoeopathy arm.

With regard to the informal care costs, I think some, or possibly even all (who knows), of the difference may be due to the different instructions given to participants and their families.

It would be good if anyone who had the time could look up what was said in the manuals.

I believe the families of the CBT and GET participants were encouraged not to be as supportive. And, although I'm not as sure of this, I think CBT and GET participants may have been encouraged to look for less help. The houses of the CBT and GET participants could be less clean (for example), but this wouldn't show up in the figures.

The manuals are here: http://www.pacetrial.org/trialinfo/

The same could be true for use of alternative medical care, as I know other forms of CBT/GET encourage people to stop use, so as to avoid patients viewing their improvements as being the result of treatments other than CBT/GET.

Costs and QALYs were available for 570 (89%) participants (ranging from 85% GET to 93% SMC).

That's a 10% difference between GET and SMC. I also wonder if GET had more drop outs, and whether this would have led to the 'cost' of these treatments being viewed as lower, as the number of sessions would be lower.

The cost per hour of therapy was £110 for CBT and £100 for APT and GET. The cost of SMC was based on the cost per hour of consultant physician time in face-to-face contact with patients, which was £169 [12].

The mean (SD) cost per patient of treatment was:

APT: 1040 (275) CBT: 1198 (366) GET: 935 (300)

Mean hours of treatment:

APT: 10.4 CBT: 10.89 GET: 9.35
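As a quick cross-check of those figures, here is a minimal Python sketch (the hourly rates and mean costs are the ones quoted above; dividing one by the other to get implied hours is just my own back-of-envelope check, not anything from the paper):

# Back-of-envelope check: mean cost per patient divided by the hourly rate
# should reproduce the mean hours of treatment quoted above.
rates = {"APT": 100, "CBT": 110, "GET": 100}        # quoted cost per therapy hour (GBP)
mean_cost = {"APT": 1040, "CBT": 1198, "GET": 935}  # quoted mean cost per patient (GBP)

for arm, cost in mean_cost.items():
    print(f"{arm}: {cost / rates[arm]:.2f} implied hours")
# APT: 10.40, CBT: 10.89, GET: 9.35 -- matching the quoted mean hours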

One important thing would be whether they examined whether their missing data was more likely to come from patients who stopped therapy. If patients who found a particular treatment unhelpful were the most likely to stop, this could dramatically improve reported cost-effectiveness: stopping treatment lowers the average cost of treatment, and the missing data could then be filled in with data from patients who found the treatment more helpful, kept at it, and filled in forms at the end.
 

Esther12

Senior Member
Messages
13,774
Just looking at the use of the 'specialist medical care' (control):

Each SMC hour supposedly cost (a lot more than all the other treatments) £169

Mean SMC cost per group:

SMC alone: 358 APT: 227 CBT: 230 GET: 213

Average number of hours:

SMC alone: 2.1 APT: 1.3 CBT: 1.3 GET: 1.3

I'm surprised the SMC only patients had so little 'treatment'.

One other thing - what about the 'cost' to patients of following the various treatments? APT sounds like a nightmare to follow, as does GET, and CBT involved 'homework'. Shouldn't these social costs be accounted for somehow?

edited to change 'sessions' to 'hours'
 

Dolphin

Senior Member
Messages
17,567
Just looking at the use of the 'specialist medical care' (control):

Each SMC session supposedly cost (a lot more than all the other treatments) £169

Mean SMC cost per group:

SMC alone: 358 APT: 227 CBT: 230 GET: 213

Average number of sessions:

SMC alone: 2.1 APT: 1.3 CBT: 1.3 GET: 1.3

I'm surprised the SMC only patients had so little 'treatment'.
Those figures aren't fully correct. It's £169 per hour, so those would be the numbers of hours they had.

We already had info on the number of sessions in Table 2 of the White et al (2011) Lancet paper:


Specialist medical care sessions attended

Median (interquartile range, i.e. 25th–75th percentiles)

Adaptive pacing therapy (n=159): 3 (3–4)
Cognitive behaviour therapy (n=161): 3 (3–4)
Graded exercise therapy (n=160): 3 (3–4)
Specialist medical care alone (n=160): 5 (3–6)



One other thing - what about the 'cost' to patients of following the various treatments? APT sounds like a nightmare to follow, as does GET, and CBT involved 'homework'. Shouldn't these social costs be accounted for somehow?
I think this is an important point, especially if there's activity substitution.
 

Esther12

Senior Member
Messages
13,774
Thanks D. (I thought I'd already replied to this, but it seems not).

I've forgotten masses of PACE stuff, and did rather feel my eyes glazing over with this new paper. Thanks for picking me up.
 

Esther12

Senior Member
Messages
13,774
I've found with a lot of psychosocial CFS papers that if you strip them back to the raw data, the researchers' conclusions fall apart, e.g. what little data we had with PACE, the actometer paper that acted as if no improvement in levels of activity showed how wonderful CBT was, etc.

I'm not at all familiar with QALY, so this could be a totally inappropriate thing to do, but I've just picked out some figures from their tables (edit - in a not terribly useful way... see Dolphin's post below):


QALY accrued:

APT: 0.43 CBT: 0.60 GET: 0.57 SMC alone: 0.52

Cost of Therapy + SMC:

APT: 1267 CBT: 1428 GET: 1148 SMC alone: 358

Cost of therapy and SMC divided by QALY accrued gives:

APT: 2946.5 CBT: 1913 GET: 2014 SMC alone: 688

By this crude measure, their 'control group' seems to be about three times as cost effective as their treatments.

I really should stop this... I'm not up to this today! Hopefully others will have fun digging in to this while I laze about.

Does anyone have prior experience on calculating QALY in the ways done here?
 

Dolphin

Senior Member
Messages
17,567
I've found with a lot of psychosocial CFS papers that if you strip them back to the raw data, the researchers' conclusions fall apart, e.g. what little data we had with PACE, the actometer paper that acted as if no improvement in levels of activity showed how wonderful CBT was, etc.

I'm not at all familiar with QALY, so this could be a totally inappropriate thing to do, but I've just picked out some figures from their tables.


QALY accrued:

APT: 0.43 CBT: 0.60 GET: 0.57 SMC alone: 0.52

Cost of Therapy + SMC:

APT: 1267 CBT: 1428 GET: 1148 SMC alone: 358

Cost of therapy and SMC divided by QALY accrued gives:

APT: 2946.5 CBT: 1913 GET: 2014 SMC alone: 688

By this crude measure, their 'control group' seems to be about three times as cost effective as their treatments.

I really should stop this... I'm not up to this today! Hopefully others will have fun digging in to this while I laze about.

Does anyone have prior experience on calculating QALY in the ways done here?
I'm not an expert on QALYs, but based on what I can surmise/guess, that's not correct.

Firstly you made a little typo: it should be 0.53 not 0.43.

What one is interested in is how many QALYs a therapy adds, i.e. what the quality of life of an average person would be over the course of a year and how much it is improved by a therapy.

Generally, I think this sort of calculation is done by subtraction, but instead they seem to have adjusted the final scores based on how they vary with baseline scores.

The numbers with subtraction are:

APT (+SMC): change from 0.48 to 0.53 = 0.05 added

CBT (+SMC): change from 0.54 to 0.60 = 0.08 added

GET (+SMC): change from 0.52 to 0.57 = 0.05 added

SMC: change from 0.50 to 0.52 = 0.02 added

Then to see how much better APT, CBT and GET are over SMC alone, one subtracts 0.02 from 0.05, 0.08 and 0.05.

However using the method they used of adjusting based on baseline values the QALY differences that APT, CBT and GET give are (respectively) 0.0149, 0.0492 and 0.0343 (top line, table 6).

Then divide the extra costs from APT, CBT and GET, to see how much it costs to get a QALY.
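For what it's worth, here is a minimal Python sketch of that incremental calculation, using the therapy+SMC costs quoted earlier and the adjusted QALY gains above (illustrative only; the paper's published cost-per-QALY figures also adjust the costs, so these numbers won't match them exactly):

# Rough incremental cost per QALY gained over SMC alone, as described above.
smc_cost = 358                                             # mean cost of SMC alone (GBP)
total_cost = {"APT": 1267, "CBT": 1428, "GET": 1148}       # mean therapy + SMC cost (GBP)
qaly_gain = {"APT": 0.0149, "CBT": 0.0492, "GET": 0.0343}  # adjusted QALY gain vs SMC alone

for arm in total_cost:
    extra_cost = total_cost[arm] - smc_cost
    print(f"{arm}: {extra_cost} GBP extra / {qaly_gain[arm]} QALY "
          f"= {extra_cost / qaly_gain[arm]:,.0f} GBP per QALY gained")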

The important point about all this is that QALY scores are calculated from questionnaires, which can be subject to self-report biases in terms of people saying how impaired or otherwise they are.
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
I agree with that point Dolphin; I think one of the things to do here is to emphasise the distinction between subjective, questionnaire-based measures (which appear to show small improvements on average) and objective measures (some of which have been conspicuously removed from the protocol, the rest of which show no improvement). The authors themselves state that the objective measures of benefits and work levels showed no improvement, and they still haven't released the raw data but make vague and misleading comments about their interpretation of it. I think those are key points to highlight and it's important to react swiftly to any press coverage and get this analysis to any press contacts.

I just wrote up this reaction; a bit rough and ready and needs some references adding and some details checking and tidying, but I'm too busy right now to do that - if it's of use to anyone please feel free to modify and use it as you wish...



Although the PACE authors assert the effectiveness and cost-effectiveness of their own treatments in this paper, the PACE results on which their publications are based reveal only interpretations of unreleased data concerning subjective measures of improvement, not raw data nor data based on objective measures. As the authors acknowledge in the Limitations section of their latest paper:

"we used the EQ-5D to generate QALY values. This is a recommended method in England, but the sensitivity of the measure in relation to changes in clinical measures in the CFS area has not yet been established."

The EQ-5D is simply a measure based on asking patients (who in the case of CBT have received a year of coaching on taking a positive attitude) how well they are doing - it is designed for self-completion by respondents and takes only a few minutes to complete. Freedom of Information requests have not succeeded in obtaining details of the meetings at which it was decided to drop the use of the most useful objective measure - actometers to measure the actual activity of the patients - from the study protocols, and Freedom of Information requests seeking access to the raw data for the other objective measures like benefit take-up and lost work time have been similarly rebuffed.

Parliamentary answers have clarified that the reason for the refusal of the requests for public access to the raw data - still unavailable a year after the original publication despite the MRC's policy on Data Access - was the forthcoming publication by the authors of further findings. So the following vague hints in this latest paper as to the nature of this data (which was publicly-funded but is still privately-held) are of particular interest:

"Receipt of benefits due to illness or disability increased slightly from baseline to follow-up...the figures at follow-up were similar between groups."

"...with the exception of a difference between CBT and APT, there were no significant differences in either lost work time or benefits between the treatments during follow up. In fact, benefits increased across all four treatments."

In the absence of the raw data, one might dwell on puzzles such as how there can be "a difference" (of unspecified nature) between CBT and APT on these measures, and yet "no significant differences" between the other treatments. No "significant" difference between APT and GET, no "significant" difference between GET and CBT, but "a difference" between APT and CBT? One must presume that the sum of 2 or 3 "insignificant" differences can represent "a difference", but some actual numbers would be more helpful. Statements like these explain why so many patients are keen to see the actual data on which the authors' commentary is based.

But perhaps it would be more useful to focus on the notable findings buried away in the above quotes, which the authors choose not to emphasise in their conclusions: that with all the treatment approaches, including CBT and GET, receipt of benefits increased slightly during the trial, and no significant differences were found in either lost work time or benefits for any of the treatments. So all of the treatments were unsuccessful both in terms of returning patients to work and in terms of reducing their levels of benefits - despite the CBT patients' slightly more positive answers on questionnaires.

From what little we can determine of the still-unreleased objective measures used in the PACE trial, all we can be sure of is that this data shows no improvement in any objective measures from any of the treatments employed. Patients will continue to seek access to the study data, and their scepticism will continue regarding the real-world value of small increases in the positivity of CBT-trained patients when they complete subjective questionnaires.
 

user9876

Senior Member
Messages
4,556
The cost savings were due to a reduction in medical utilisation, but more people were on benefits and no one was cured; I dare say these are not long-term reductions in costs!
I know that after seeing a doctor we often think "what was the point of that?" - or sometimes feel much angrier. Hence we only go to the doctors when we feel well enough. With my child it is a case of trying to manage doctors so that they don't do stupid things (they are harder to avoid).

The point being that if someone has had a series of sessions telling them they are not ill but just have maladaptive behaviour, or even just helping them cope, then maybe they see little point in seeing the doctor.
 

user9876

Senior Member
Messages
4,556
I'm not an expert on QALYs, but based on what I can surmise/guess, that's not correct.

Firstly you made a little typo: it should be 0.53 not 0.43.

What one is interested in is how many QALYs a therapy adds, i.e. what the quality of life of an average person would be over the course of a year and how much it is improved by a therapy.

Generally, I think this sort of calculation is done by subtraction, but instead they seem to have adjusted the final scores based on how they vary with baseline scores.

The numbers with subtraction are:

APT (+SMC): change from 0.48 to 0.53 = 0.05 added

CBT (+SMC): change from 0.54 to 0.60 = 0.08 added

GET (+SMC): change from 0.52 to 0.57 = 0.05 added

SMC: change from 0.50 to 0.52 = 0.02 added

Then to see how much better APT, CBT and GET are over SMC alone, one subtracts 0.02 from 0.05, 0.08 and 0.05.

However using the method they used of adjusting based on baseline values the QALY differences that APT, CBT and GET give are (respectively) 0.0149, 0.0492 and 0.0343 (top line, table 6).

Then divide the extra costs from APT, CBT and GET, to see how much it costs to get a QALY.

The important point about all this is that QALY scores are calculated from questionnaires, which can be subject to self-report biases in terms of people saying how impaired or otherwise they are.

The EQ-5D group seem to suggest using a 5-digit number (for the 5 dimensions) and say that you can't do arithmetic on it, since the codings 1–3 have no arithmetic meaning. This seems a fairly obvious statement to me.

What they have done is to add a further stage to their processing, as a way of combining these scores, based on a model described in:

A social tariff for EuroQol: results from a UK population survey
http://www.york.ac.uk/media/che/documents/papers/discussionpapers/CHE Discussion Paper 138.pdf

It is better than simply adding up the values like they do in the fatigue scale.

As far as I can tell the EQ-5D scale gives a 5-digit number (say 11111 for full health) which is just a coding of the results of the 5 questions (with answers between 1 and 3). To combine them, they basically did a survey of the (healthy?) population where each person was given a set of 13 states (for example, 11323) and asked to select a length of time in full health (11111) that they regard as worth the same as 10 years in the target state. They also have a worse-than-death version where the choice is between dying and spending a length of time x in the target state followed by 10−x years in a healthy state. (The more time required in the healthy state, the worse the target value.)

They then have a scoring system based on 1 being healthy and 0 being dead, with the state valued at x/10 (x being the equivalent number of good-health years). States rated worse than death are scored with the (x/10) − 1 formula.
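A tiny Python sketch of that valuation rule as described (x is the number of years in full health the respondent judges equivalent to 10 years in the target state; this follows the description above rather than the exact survey protocol):

def tto_value(x_years, worse_than_death=False):
    # x/10 if the state was rated better than death, (x/10) - 1 if rated worse.
    value = x_years / 10
    return value - 1 if worse_than_death else value

print(tto_value(10))       # 1.0  (equivalent to full health)
print(tto_value(5))        # 0.5
print(tto_value(3, True))  # -0.7 (rated worse than death)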

They collected information on 45 of the 243 possible states (3^5) from the EQ-5D survey.

They then modelled the results of the survey with various models and chose the one that minimised the (least-squares) error between survey results and model. This didn't take account of interactions between the different dimensions. Hence they came up with the following scoring system:

Basically you start with a score of 1 and subtract values:
-0.081 (where there is any move away from the healthy score, 11111)
Mobility: Level 2 -0.069, Level 3 -0.314
Self care: Level 2 -0.104, Level 3 -0.214
Usual activity: Level 2 -0.036, Level 3 -0.094
Pain/discomfort: Level 2 -0.123, Level 3 -0.386
Anxiety/depression: Level 2 -0.071, Level 3 -0.236
Any dimension at level 3: -0.269

This gives the basic score. I'm not sure what this means in terms of summary stats (mean, SD vs median, percentiles or mode) or in terms of doing arithmetic on the results. It all seems quite arbitrary to me.
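A minimal Python sketch of that tariff, using only the decrements listed above (a hypothetical helper for illustration, not the paper's own code):

# Score an EQ-5D state (5-digit string, e.g. "11111" = full health) with the
# UK tariff decrements listed above. Digit order: mobility, self-care,
# usual activity, pain/discomfort, anxiety/depression; levels 1-3.
DECREMENTS = {                    # (level 2, level 3) decrement per dimension
    "mobility":       (0.069, 0.314),
    "self_care":      (0.104, 0.214),
    "usual_activity": (0.036, 0.094),
    "pain":           (0.123, 0.386),
    "anxiety":        (0.071, 0.236),
}
ANY_PROBLEM = 0.081   # subtracted once for any move away from 11111
ANY_LEVEL_3 = 0.269   # subtracted once if any dimension is at level 3

def eq5d_utility(state: str) -> float:
    levels = [int(c) for c in state]
    score = 1.0
    if any(l > 1 for l in levels):
        score -= ANY_PROBLEM
    if any(l == 3 for l in levels):
        score -= ANY_LEVEL_3
    for (lvl2, lvl3), level in zip(DECREMENTS.values(), levels):
        if level == 2:
            score -= lvl2
        elif level == 3:
            score -= lvl3
    return round(score, 3)

print(eq5d_utility("11111"))  # 1.0
print(eq5d_utility("11223"))  # 0.255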

They also quote various errors between the model and the survey scores (between around 0.003 and 0.06), which means the model does not perfectly represent the views given. The error depends on the particular state (probably worth an analysis given the very small changes involved).

In terms of the PACE paper I wish they would give raw data (i.e. the EQ-5D scores) rather than some subjective model. I will look further into the model they use to process the results to try to understand whether it is widely accepted. It does seem to represent what healthy people would worry about giving up rather than what sick people would value gaining.
 

Esther12

Senior Member
Messages
13,774
Ta Dolphin and others... I thought I could be misunderstanding what different types of data represent.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Well, I hadn't let the PACE Trial get me down until I saw the BBC article today:
http://www.bbc.co.uk/news/health-19076398

But maybe I've just been kidding myself that the truth would out in the end.
This sort of rubbish coming from the BBC has a danger of turning me into a militant patient which, ironically, would only help to prove that Wessely's cynical portrayal of ME patients is correct!

A quick google search shows a load of articles regurgitating exactly the same stuff:
http://www.google.co.uk/search?q=Long-term psychiatric and exercise treatments for chronic fatigue syndrome are good value for money, a study has found&sourceid=ie7&rls=com.microsoft:en-gb:IE-ContextMenu&ie=&oe=&rlz=1I7GZAZ_enGB362

Haven't the BBC heard from us all enough to avoid regurgitating the stuff that comes from the PACE Trial authors? (Clearly not.)
The article feels like a deeply insulting slap in the face.

And how do the psychiatric lobby manage it? Where do they get their resources from, in order to manage this sort of slick publicity so supremely? Is it public funding?
We've all been trying for a year and a half to get one single journalist to expose the PACE Trial for what it is, but have failed. Whereas the psychiatric lobby manage to cover the news media with articles, within one day of publishing their latest scam.

It's one thing to have the psychiatrists persistently promoting their misinformation, but when the media conspires with them, it really feels tough.

I'm feeling a little defeated today, by all of this,

Bob

:(


PS: I haven't read any of the actual paper yet.
 

Sam Carter

Guest
Messages
435
With regard to the informal care costs, I think some, or possibly even all (who knows), of the difference may be due to the different instructions given to participants and their families.

It would be good if anyone who had the time could look up what was said in the manuals.

I believe the families of the CBT and GET participants were encouraged not to be as supportive. And, although I'm not as sure of this, I think CBT and GET participants may have been encouraged to look for less help. The houses of the CBT and GET participants could be less clean (for example), but this wouldn't show up in the figures.

The manuals are here: http://www.pacetrial.org/trialinfo/

There's this from p95 of the CBT Patients Manual. (There might be more elsewhere, I've just had a skim.)

""""""
The "wrong" kind of social support

This may seem a contradiction in terms! The examples below illustrate how the wrong kind of support can make it more difficult for you to move forward for the following reasons:

• If you have a very supportive family member (partner, parent or child) who is used to doing everything for you, it may be difficult for you to increase your activity levels. Your relative may feel that they have your best interest at heart and discourage you from doing more. They may have difficulty accepting that in order to make progress, you need to do things at regular times even if you are feeling very fatigued. If family members have been your "carer" during your illness, they can sometimes feel that they no longer have a role when you are getting better which can sometimes lead them to be critical of your CBT programme or suggest that you are making yourself worse. This may then lead you to question the validity of the programme and deter you from persevering with it particularly when you have a lot of symptoms.
""""""

From p102 of the GET Patients Manual.

""""
What happens if I don't like exercise?

No problem. The important thing to know is that you can chose (sic) any form of activity - for
example DIY, household jobs, craft work or gardening.

""""
(i.e. if you were being encouraged to count housework/chores as part or all of your 'exercise' routine, this could affect how you report your need for 'informal care'.)