
Cost effectiveness of the PACE trial

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
In the Chicago population-based study, 40.6% were employed full time, 12.5% part time, and the rest were unemployed/receiving disability income or retired. So perhaps more like 60-65% of CFS patients in general. Keeping in mind that the main 'improvement' was lower symptom reporting, rather than improved activity levels or neuropsychiatric testing.

Also keeping in mind that GET as delivered by these clinics still has an underlying positive cognitive focus (e.g. patients become more optimistic), and it is this that mediates the change in questionnaire reporting in both arms. The APT focus was more realistic, which is why the reporting looked poorer.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
"But Andrew Lloyd, who runs the fatigue section of the Lifestyle Clinic at the University of NSW, believes that PACE is representative of at least three-quarters of ME-CFS patients in Australia, namely young or middle-aged adults who are not housebound."

http://sacfs.asn.au/news/2011/03/03_28_putting_exercise_through_its_paces.htm

I added a comment to this:

Two things in this article are seriously counter-factual. The first is the claim that about one third were able to return to normal lives. They were instead classified as "normal" in a highly technical sense of the word. The meaning of "normal" in this paper was moderately disabled or better. It was possible to start the study as moderately disabled, get worse, and come out as "normal". Don't take my word for it - read the paper, not the press release.

Second, Sharpe's comments about the Oxford criteria disguise its relevance. The vast majority of people using the Oxford criteria are psychiatric researchers in the UK and western Europe. It's a UK-based definition, and substantially different from the world standard. The primary standard since 1994 has been the Fukuda definition. In recent times this has been overshadowed by the Canadian Consensus Criteria or CCC. CCC and Oxford are very different.

Treatment studies outside of the UK and western Europe do not use the Oxford definition. There are researchers inside the UK and in western Europe who don't use the Oxford criteria either. It is not the accepted international standard.

The recent paper on the cost effectiveness of CBT and GET does show something objective that is found in other studies. Work participation declined. This is in keeping with findings in Belgium and also with every study using objective markers (typically an actometer) of functional capacity. CBT and GET either do not improve functional capacity or cause a decline.
 

Dolphin

Senior Member
Messages
17,567
I added a comment to this:

Two things in this article are seriously counter-factual. The first is the claim that about one third were able to return to normal lives. They were instead classified as "normal" in a highly technical sense of the word. The meaning of "normal" in this paper was moderately disabled or better. It was possible to start the study as moderately disabled, get worse, and come out as "normal". Don't take my word for it - read the paper, not the press release.

Second, Sharpe's comments about the Oxford criteria disguise its relevance. The vast majority of people using the Oxford criteria are psychiatric researchers in the UK and western Europe. It's a UK-based definition, and substantially different from the world standard. The primary standard since 1994 has been the Fukuda definition. In recent times this has been overshadowed by the Canadian Consensus Criteria or CCC. CCC and Oxford are very different.

Treatment studies outside of the UK and western Europe do not use the Oxford definition. There are researchers inside the UK and in western Europe who don't use the Oxford criteria either. It is not the accepted international standard.

The recent paper on the cost effectiveness of CBT and GET does show something objective that is found in other studies. Work participation declined. This is in keeping with findings in Belgium and also with every study using objective markers (typically an actometer) of functional capacity. CBT and GET either do not improve functional capacity or cause a decline.
Good response.

One (small?) correction re: work participation: in the McCrone study, participation as measured by employment days lost improved a little (see Table 2, noting that the figures in the top half are for 6 months). However, "There was no clear difference between treatments in terms of lost employment", so the slight improvements might be down to the SMC element and/or simply the passage of time (many of these people hadn't been ill for that long compared to many of us now, and had been diagnosed for even less time).
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Yes Dolphin, I was aware of that, but that just gets me on to how pathetic SMC and APT are as well. If they had real pacing, now that would be something to compare it to. However, exit work participation is still worse than entry, is it not? Also, the Belgian study showed decreased work participation. The only one that doesn't, arguably, is the Dutch school study with increased school participation - but there are problems with that, as we know.

Bye, Alex

PS I did consider getting into the details of the newer paper, but I thought it would make my comment far too long. Feel free to add a follow up.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Great comment Alex.

Considering the overall spin of the paper, it seems like a relatively minor detail but, yes, Dolphin is correct to say that lost employment/production costs improved in all categories.
In fact, the improvements were greater for CBT and GET than SMC.
But the paper says that the difference isn't significant.


On an interesting related aside, I've just had a look at Table 2, which I haven't studied closely before, and if I'm reading the chart correctly, the change in proportion of participants who lost days of work (as opposed to lost employment costs) was as follows (unadjusted for baseline):

APT (80% pre, 86% post) = 6 percentage point increase in the proportion of participants losing days of work
CBT (84%, 84%) = no change
GET (83%, 86%) = 3 percentage point increase
SMC (86%, 89%) = 3 percentage point increase

So there was very little difference here either.
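For anyone wanting to reproduce this kind of figure, the arithmetic sketches out as below. Reading each pair as (pre %, post %) proportions of participants losing days of work is my interpretation of Table 2, so treat the numbers as illustrative rather than authoritative.

```python
# Rounded pre/post percentages of participants losing days of work,
# as read (possibly imperfectly) from Table 2 of the
# cost-effectiveness paper.
groups = {
    "APT": (80, 86),
    "CBT": (84, 84),
    "GET": (83, 86),
    "SMC": (86, 89),
}

# Percentage-point change, unadjusted for baseline.
for name, (pre_pct, post_pct) in groups.items():
    print(f"{name}: {post_pct - pre_pct:+d} percentage points")
```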
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi Bob, Dolphin:

Are we agreed that benefits increased across the board though? (Table 4)

So it is likely I was thinking of that and not employment. This was a memory problem - one of the reasons why I don't do real science any more, and why I am having to invent a whole new methodology to write my book. Alternatively I was making an inference that benefits imply work attendance, or made that inference in the past and then remembered that. How come benefits received went up, and work attendance did not change for CBT? Has anyone figured that out? It could be important. Could it be because, as I have suggested before, they simply had more benefits in the pipeline and it took longer to receive them?

However, I went back and looked at employment again. There was an increase in percentage values for everything except CBT. Absolute numbers cannot be used due to the numbers of dropouts. An important question not being asked is what happened with dropouts. It could be reasonably expected they might have even more days off work, depending on reasons for dropping out. Some of course will have dropped out for reasons that have nothing to do with the trial.

But have a look at the math. The pre-randomization percentage for CBT days lost is 83.85%; the post-randomization figure is 84.13%. There is an increase - it just disappears after rounding.
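The rounding point is easy to demonstrate. The raw counts below (109/130 pre, 106/126 post) are my guesses at figures consistent with 83.85% and 84.13% - only the percentages themselves come from the table.

```python
# Two proportions that differ can still display as the same
# whole-number percentage once rounded.
pre = 109 / 130    # hypothetical counts giving 83.85%
post = 106 / 126   # hypothetical counts giving 84.13%

print(f"pre  = {pre:.2%}")   # 83.85%
print(f"post = {post:.2%}")  # 84.13%
print(round(pre * 100), round(post * 100))  # both round to 84
```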

One question is: am I right in presuming the decline in numbers was due to dropping out? Was it instead due to poor records or some other reason? Because that might lead to other questions.

I am adding an additional comment to the article that I posted to, correcting the employment days issue. I don't like posting incorrect information, it can be spun later. This is what I am proposing to add, does anyone think it fails to correct the information?:

"Errata: The data from the cost effectiveness trial is slightly different from what I said. Receipt of benefits goes up across the board, including for CBT. Days lost from work increase for everything except CBT, where they remain the same. However, this is still consistent with either a failure to increase functional capacity, or a worsening of functional capacity. It's not an improvement."

Does anyone feel that more needs to be added, perhaps about significance? That is, to reflect that the changes were not significant. I do not think this is necessary, particularly since the comment is on an article about the original paper not the second one.

Bye, Alex
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
I'm pretty sure that Table 2 shows a small increase in the proportion of people losing employment between pre- and post, though this is probably not statistically significant, and a decrease in lost hours. Table 3 shows that the cost of lost employment decreases for all groups (i.e. an improvement); it decreases more for CBT & GET than for SMC, though presumably this is not significant. Don't forget that the pre figures are for 6 months, post figure for 12.

I agree the benefits situation deteriorates.

alex3619: yes lost employment falls while benefits increase and I too suspect there is pipeline effect. It would be good to see the raw data on exactly which benefits changed, as this might throw more light on the situation.
 

user9876

Senior Member
Messages
4,556
Hi Bob, Dolphin:

Are we agreed that benefits increased across the board though? (Table 4)

So it is likely I was thinking of that and not employment. This was a memory problem - one of the reasons why I don't do real science any more, and why I am having to invent a whole new methodology to write my book. Alternatively I was making an inference that benefits imply work attendance, or made that inference in the past and then remembered that. How come benefits received went up, and work attendance did not change for CBT? Has anyone figured that out? It could be important. Could it be because, as I have suggested before, they simply had more benefits in the pipeline and it took longer to receive them?

However, I went back and looked at employment again. There was an increase in percentage values for everything except CBT. Absolute numbers cannot be used due to the numbers of dropouts. An important question not being asked is what happened with dropouts. It could be reasonably expected they might have even more days off work, depending on reasons for dropping out. Some of course will have dropped out for reasons that have nothing to do with the trial.

But have a look at the math. The pre-randomization percentage for CBT days lost is 83.85%; the post-randomization figure is 84.13%. There is an increase - it just disappears after rounding.

One question is: am I right in presuming the decline in numbers was due to dropping out? Was it instead due to poor records or some other reason? Because that might lead to other questions.

I am adding an additional comment to the article that I posted to, correcting the employment days issue. I don't like posting incorrect information, it can be spun later. This is what I am proposing to add, does anyone think it fails to correct the information?:

"Errata: The data from the cost effectiveness trial is slightly different from what I said. Receipt of benefits goes up across the board, including for CBT. Days lost from work increase for everything except CBT, where they remain the same. However, this is still consistent with either a failure to increase functional capacity, or a worsening of functional capacity. It's not an improvement."

Does anyone feel that more needs to be added, perhaps about significance? That is, to reflect that the changes were not significant. I do not think this is necessary, particularly since the comment is on an article about the original paper not the second one.

Bye, Alex

The bit in table 4 I didn't get is the second set of figures for days of work lost. I'm assuming that the first set is the number of people who have lost days of work through illness and the second set is the mean and SD of days lost; my problem is that this seems very high.

As I see it, the average days lost per month (the first part of the table covers 6 months and the second 12) has reduced, giving these figures:

APT: 13.5 before, 12.3 after
CBT: 14.2 before, 12.6 after
GET: 13.8 before, 12.0 after
SMC: 12.6 before, 11.8 after
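The normalisation being done here can be sketched as below: the pre-randomisation half of the table covers 6 months and the post half 12, so raw totals have to be converted to a common per-month rate before comparing. The total-days figures are hypothetical back-calculations from the per-month numbers quoted above, not values read directly from the paper.

```python
# Hypothetical (days lost in the 6-month pre period,
# days lost in the 12-month post period) per participant.
totals = {
    "APT": (81.0, 147.6),
    "CBT": (85.2, 151.2),
    "GET": (82.8, 144.0),
    "SMC": (75.6, 141.6),
}

PRE_MONTHS, POST_MONTHS = 6, 12  # periods covered by each half of the table

for group, (pre_days, post_days) in totals.items():
    # Convert both totals to days lost per month so they are comparable.
    print(f"{group}: {pre_days / PRE_MONTHS:.1f} -> {post_days / POST_MONTHS:.1f} days/month")
```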

Strangely for these columns they are using the original number of patients rather than the reduced numbers that they seem to have data with after the trial.

I was wondering: have they mislabelled these values as days when they are actually hours, or do they assume people would work a 5-day week and hence count people who are not working as having lost those days, whether they have a job or not?

It's also not clear to me what the different benefits mean. Is ESA (Employment and Support Allowance) counted as a disability benefit or an income benefit? There can be a significant lag here due to the need to appeal the original assessments. Do they count tax credits as income benefits for people working part time? That then becomes complicated by overall family income and the number of children.

I assume these figures all pre-date the recession, hence we can discount the state of the job market.

I'm surprised by how little definition there is for the various numbers they give. I guess I should probably go back to the trial protocol and see what questions are being asked.
 

Dolphin

Senior Member
Messages
17,567
I'm pretty sure that Table 2 shows a small increase in the proportion of people losing employment between pre- and post, though this is probably not statistically significant, and a decrease in lost hours. Table 3 shows that the cost of lost employment decreases for all groups (i.e. an improvement); it decreases more for CBT & GET than for SMC, though presumably this is not significant. Don't forget that the pre figures are for 6 months, post figure for 12.

I agree the benefits situation deteriorates.

alex3619: yes lost employment falls while benefits increase and I too suspect there is pipeline effect. It would be good to see the raw data on exactly which benefits changed, as this might throw more light on the situation.
Seems a fair summary except this (small?) point. Re: Table 3:
Lost production costs were significantly higher for APT compared to CBT (difference £1279, 95% CI £141 to £2772).
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
If I presume the days lost are not adjusted for period and are absolutes, we get figures that don't make sense. The pre-randomization period had less than one sick day in six months per patient? That could be why I am making the wrong presumptions. The unit is days, not days per patient - but days per patient cannot make sense anyway. What if it's not days per patient, but the number of patients who lost days, with the number of days unspecified? It seems I am misreading the tables. The lack of clarity is a problem.

Something that snuck in, though, is that SMC lost employment, in terms of pounds adjusted for year, still goes up (Table 3 - compare the adjusted figures, even allowing for the doubled time period). It's not just about time periods or absolute numbers. It cannot be days per patient on average; that would give nonsense results. It cannot be just days lost for all patients without making a mockery of the whole study (though that would not surprise me). It could be that the cost values given are erroneous, at least for SMC. It could also be that SMC made things worse employment-wise, but this is not reflected in the days values. Could it be due to calculation slips around birthdays, with increased wages attributed in the 12-month period?

Total days lost for all patients, on a monthly basis, makes more sense. It does not appear to be right though: it would still only translate to maybe five days sick in six months, which is unreasonable if these patients are classed as having CFS. That would imply these patients were not particularly disabled on average.

If we presume the number given is the number of patients who lost days, in absolute terms, NOT the number of days, then the only data we can infer improvement/worsening from is the cost data. This makes the most sense to me. This may also mean the doubling of the time period is irrelevant. One of the implications of this is that only some patients have not lost work days, which is conceivable given either mild CFS or misdiagnosis. This is more or less what Simon was talking about in the previous post.

I am going back over this thread. Obviously I missed something the first time around. Many of you probably got all this already but I have been working on other things. Time I corrected that.

Bye, Alex
 

Dolphin

Senior Member
Messages
17,567
So it is likely I was thinking of that and not employment. This was a memory problem - one of the reasons why I don't do real science any more, and why I am having to invent a whole new methodology to write my book.
My memory is affected also. However, this isn't like an exam situation - one can check things. Having to check things a couple/few times does tend to mean I eventually start to remember findings.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
alex3619

Hi Alex, with regard to your proposed correction...

You might be confusing 'days lost from work' with 'numbers of participants who lost days from work'.

So where you say: "Days lost from work increase for everything except CBT", it might not be accurate, as I think the 'days lost from work' corresponds directly with 'lost employment costs'.

I can't look at it in more detail right now... I've got to go out for a while.

(I'm not sure that you need to post a correction anyway... No one except a few geeks like us is going to notice such a minor wording error... Not many people will study this paper in as much detail as we have... And if they can't see the major flaws in the reporting of the PACE Trial, which you have rightly pointed out, then what hope do they have of spotting anything wrong in what you said?)
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
If I presume the days lost are not adjusted for period and are absolutes, we get figures that don't make sense. ..

I am going back over this thread. Obviously I missed something the first time around. Many of you probably got all this already but I have been working on other things. Time I corrected that.
Good luck!

Table 2 gives mean contacts per patient, with footnote e noting that for employment this means "days lost from work" - so that's days lost from work, per patient, per period. The data does take a fair bit of getting to grips with, as the presentation is far from straightforward, not least the 6-months-vs-12-months issue.
Seems a fair summary except this (small?) point. Re: Table 3: [APT]
Fair point, but I ignore all APT data as a matter of course!
 

Dolphin

Senior Member
Messages
17,567
The bit in table 4 I didn't get is the second set of figures for days of work lost. I'm assuming that the first set is the number of people who have lost days of work through illness and the second set is the mean and SD of days lost; my problem is that this seems very high.

As I see it, the average days lost per month (the first part of the table covers 6 months and the second 12) has reduced, giving these figures:

APT: 13.5 before, 12.3 after
CBT: 14.2 before, 12.6 after
GET: 13.8 before, 12.0 after
SMC: 12.6 before, 11.8 after

Strangely for these columns they are using the original number of patients rather than the reduced numbers that they seem to have data with after the trial.

I was wondering: have they mislabelled these values as days when they are actually hours, or do they assume people would work a 5-day week and hence count people who are not working as having lost those days, whether they have a job or not?

It's also not clear to me what the different benefits mean. Is ESA (Employment and Support Allowance) counted as a disability benefit or an income benefit? There can be a significant lag here due to the need to appeal the original assessments. Do they count tax credits as income benefits for people working part time? That then becomes complicated by overall family income and the number of children.

I assume these figures all pre-date the recession, hence we can discount the state of the job market.

I'm surprised by how little definition there is for the various numbers they give. I guess I should probably go back to the trial protocol and see what questions are being asked.
Firstly, when you say Table 4 you mean Table 2.

It would be good to be sure whether they mean days lost out of a 5-day week.
I'm leaning towards thinking not, as they do have baseline data:
What was your employment status immediately before your illness started?
[..]
How many hours per week did you work that time (if any)?

They were also asked:

15 a) If yes: how many days in the last 6 months have you had off work/study because of your fatigue?

Days

OR

15 b) How many fewer hours per week have you worked because of your fatigue?
Hours

However perhaps I missed something on this in the latest paper.

Earlier in the thread I posted the question they were asked on benefits. It's in the long protocol.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Are we agreed that benefits increased across the board though? (Table 4)

Yes, that seems to be the case.
There appear to be (absolute, but not relative) increases in every benefit category, for each therapy group.
Here's my summary of changes in cost totals, in case helpful:
http://forums.phoenixrising.me/inde...ss-of-the-pace-trial.18722/page-5#post-285208

So it is likely I was thinking of that and not employment. This was a memory problem - one of the reasons why I don't do real science any more, and why I am having to invent a whole new methodology to write my book. Alternatively I was making an inference that benefits imply work attendance, or made that inference in the past and then remembered that.

Yes, I seem to remember you making a comment about the benefits earlier in the thread, saying that they increased across the board, so maybe that's where you got it muddled.

You are in good company here, Alex... My memory is absolutely useless these days... And I do exactly the same as maybe you have... I remember something slightly incorrectly, then I make a calculation based on my false memory, and then I start writing about it, convinced that I've got the facts absolutely right - based on a false memory! Then at the end of it, I haven't got a clue how I could have got something so wrong, having thought I was repeating facts! I've done that a number of times on the forum.

And this particular paper is heavy in complex details to get our brains around... I recommend downloading @Simon's Excel file for a handy reference (see an earlier post in the thread)... I've been using it repeatedly. (Thanks again Simon, for that.) (Note that Simon's totals are unadjusted, but they give the correct indication of the nature of the changes.)

I also can't write very long pieces of text, because I can never remember what I've written, so I have to repeatedly re-read each section to find out what I've written, before I can carry on. But then I can't remember anything after I've re-read it all anyway. It makes it almost impossible to write long complex pieces. Maybe it might help if I were to methodically summarise each section as I write. I hadn't thought of doing that. I'd be interested to hear if you've found a way to deal with this sort of thing, for writing your book, Alex.

How come benefits received went up, and work attendance did not change for CBT? Has anyone figured that out? It could be important. Could it be because, as I have suggested before, they simply had more benefits in the pipeline and it took longer to receive them?

It's a good question about the increased benefits vs decreased lost employment.
For CBT, the proportion of participants losing days of work remained the same, but there were savings for lost employment costs (so work attendance increased) across the board, and benefits also increased across the board.
Maybe you are right about the benefits being in the pipeline for some participants. Also, it is possible to claim ESA and work part time indefinitely, or full time for a year. And maybe a few participants were able to increase their hours substantially.

However, I went back and looked at employment again. There was an increase in percentage values for everything except CBT. Absolute numbers cannot be used due to the numbers of dropouts. An important question not being asked is what happened with dropouts. It could be reasonably expected they might have even more days off work, depending on reasons for dropping out. Some of course will have dropped out for reasons that have nothing to do with the trial.

But have a look at the math. The pre-randomization percentage for CBT days lost is 83.85%; the post-randomization figure is 84.13%. There is an increase - it just disappears after rounding.

One question is: am I right in presuming the decline in numbers was due to dropping out? Was it instead due to poor records or some other reason? Because that might lead to other questions.

Yes, the number of drop-outs does seem like a significant issue. I think the paper acknowledges this somewhere. I think it's especially important because many of the costs are quite close between groups. Unfortunately, without FOI requests, I can't see us ever making this an issue that sticks.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
The bit in table 4 I didn't get is the second set of figures for days of work lost. I'm assuming that the first set is the number of people who have lost days of work through illness and the second set is the mean and SD of days lost; my problem is that this seems very high.

As I see it, the average days lost per month (the first part of the table covers 6 months and the second 12) has reduced, giving these figures:

APT: 13.5 before, 12.3 after
CBT: 14.2 before, 12.6 after
GET: 13.8 before, 12.0 after
SMC: 12.6 before, 11.8 after

Strangely for these columns they are using the original number of patients rather than the reduced numbers that they seem to have data with after the trial.

I was wondering: have they mislabelled these values as days when they are actually hours, or do they assume people would work a 5-day week and hence count people who are not working as having lost those days, whether they have a job or not?

I think I agree with your first interpretation.

In the first column of Table 2, I think it shows the number of patients who had lost days of employment.
In the second column, I think it shows the number of days lost per participant per year.
So for example, in the APT group (post-randomisation), the number of days of lost employment per year is 148 days.
I don't think this seems too high, if we assume that many patients will have dropped out of work altogether.



It's also not clear to me what the different benefits mean. Is ESA (Employment and Support Allowance) counted as a disability benefit or an income benefit? There can be a significant lag here due to the need to appeal the original assessments. Do they count tax credits as income benefits for people working part time? That then becomes complicated by overall family income and the number of children.

I imagine that ESA is an illness-related benefit, and Income Support is an income-related benefit.
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
I earlier caught myself applying the data gleaned from this paper (and from PACE itself, as it happens) in a general real-life context.

I don't think this is particularly useful really. I mean, as with all published papers, the findings are relevant only to the study in question, right? I have in the past protested when the sort of thing I almost did gets quoted in newspapers and whatnot.

It's what we object to most, isn't it, when such a thing makes a headline. It is only factual in relation to the paper, and only then if they don't generalise or get their facts wrong. Ah well. I'm relieved I qualified what I was saying anyway. Sorry - a bit irrelevant and OT.
 

Dolphin

Senior Member
Messages
17,567
I earlier caught myself applying the data gleaned from this paper (and from PACE itself, as it happens) in a general real-life context.

I don't think this is particularly useful really. I mean, as with all published papers, the findings are relevant only to the study in question, right? I have in the past protested when the sort of thing I almost did gets quoted in newspapers and whatnot.

It's what we object to most, isn't it, when such a thing makes a headline. It is only factual in relation to the paper, and only then if they don't generalise or get their facts wrong. Ah well. I'm relieved I qualified what I was saying anyway. Sorry - a bit irrelevant and OT.
You are bringing up two concepts: internal validity, and external validity (also called generalisability). If a study's findings are valid for the group of individuals it studied, even if that group is very unrepresentative of the whole population, one can say it has internal validity.

External validity, or generalisability, relates to whether one can apply the findings to the wider population.