
A cost effectiveness of the PACE trial

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Thanks, Bob - but not my eagle eyes!

I don't want to labour this point because it may be of no importance (and my calculations could well be wrong), but looking at the figures in the APT column, the (small) n of 141 gives the wrong percentage for all the (absolute) N provided; for the other columns it's possible to find one, and only one, percentage (p) such that N/n*100=p, but it isn't clear to me what the (small) n denotes in this context.


Edit: I've got this wrong - please see biophile's post, below, for a full explanation of this issue


Please labour away Sam. I'm keen to understand this paper.

I can't quite follow your post though, Sam, unless you didn't quite follow my last post!

The (small) n figure of 141, for APT, denotes how many participants that they managed to get follow-up data on, for APT.
But at the end of the original PACE Trial, they had 159 participants in the APT group. (See the numbers at the top of Table 1 in this paper, for the original numbers in each therapy group, in the PACE Trial paper.)

In Table 4, the first figures in the APT column are: 28 (18).
If we use the numbers of participants from the original PACE Trial paper, then it all adds up:
18% of 159 is 28.62.
(159 being the original number in the APT group in the PACE Trial paper.)
So I think they have just made the mistake of using the number of participants from the original PACE Trial paper to calculate the percentages.

This works out for all the other figures in Table 4 that I've calculated (I haven't tested them all.)

So I think they've just made this simple error (that just happens to make their percentages look more favourable.)

It seems almost impossible to discuss this paper, because it's so complex, so if that doesn't make sense, then please ask again Sam.
 

Bob
Thanks User & Bob
Based on our discussions here, my best/final guess is that Figures 1 & 2 were constructed as follows:
  • Net benefit (QALY value x change from baseline - cost) is calculated for each patient based on changes from baseline
  • 1,000 resamples are created in the bootstrapping process, and for each resample regression analysis is used to compute which therapy is best (giving the percentage likelihood of each therapy being best at each QALY value).
Bob, your figures are calculated using average data for the CBT, GET etc groups, rather than the per-individual calculation (vs baseline, not SMC) that I think is used in Figs 1 & 2. But of course results based on the averages should give broadly similar results to results based on individuals, which is, I think, why your calculations give similar answers to theirs. i.e. they have done the calculations right, and so have you.

Thanks very much for helping with that, Simon.
It's slowly beginning to make sense for me.
Do you know if the paper actually says that Figure 1 is based on changes from baseline?
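Simon's two-step guess above (a net-benefit value per patient, then 1,000 bootstrap resamples to see how often each therapy comes out best) can be sketched in code. This is only an illustration of the general technique: the per-patient data below is invented, and the real analysis also used regression adjustment, which is omitted here.

```python
import random

def ceac_probability(groups, lam, n_boot=1000, seed=0):
    """Bootstrap estimate of the probability that each therapy has the
    highest mean net benefit at willingness-to-pay `lam` per QALY.

    `groups` maps therapy name -> list of (qaly_gain, cost) per patient.
    Net benefit per patient = lam * qaly_gain - cost.
    """
    rng = random.Random(seed)
    wins = {name: 0 for name in groups}
    for _ in range(n_boot):
        means = {}
        for name, patients in groups.items():
            # Resample patients with replacement (the bootstrap step).
            sample = [rng.choice(patients) for _ in patients]
            net_benefits = [lam * q - c for q, c in sample]
            means[name] = sum(net_benefits) / len(net_benefits)
        wins[max(means, key=means.get)] += 1
    return {name: w / n_boot for name, w in wins.items()}

# Invented per-patient (QALY gain, cost) data -- purely illustrative:
groups = {
    "CBT+SMC": [(0.05, 1500), (0.10, 1200), (0.02, 1800), (0.08, 1000)],
    "SMC":     [(0.01, 800), (0.03, 900), (0.00, 700), (0.02, 850)],
}
probs = ceac_probability(groups, lam=30_000)
```

Sweeping `lam` over a range of willingness-to-pay values would give the cost-effectiveness acceptability curves that Figures 1 & 2 appear to show.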
 

Sam Carter

LOL Bob, my post was about as clear as mud! (And thanks for pointing out that the baseline percentages were calculated using a different n -- shudda been obvious but..)

What I was trying to say is that I think there is a small error in the APT data:

APT, 12-month post-randomisation period, n=141:

Income benefits N=33 p=22 -> between 147 and 153 people answered this question
Illness/disability benefits N=57 p=38 -> between 149 and 152 people answered this question
Payments from income protection schemes or private pensions N=12 p=8 -> between 142 and 159 people answered this question

This means either n>=142, or more likely, if n=141 as stated, there's a rounding error in the last row: 12/141*100 = 8.51 which has been rounded down to 8 when it should have been rounded up to 9.

It's not a big deal unless the authors adopted a general policy of rounding down inappropriately.

When I said "it isn't clear to me what the (small) n denotes in this context" I meant that I didn't know which of the various ns (ie. the number of people who answered a particular question) they had chosen to display at the top of each column, but it looks like they took the smallest one.
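Sam's back-calculation here, working out which group sizes n are consistent with a reported count N and a whole-number percentage, is easy to automate. A minimal sketch, assuming the authors rounded half-up:

```python
import math

def possible_n(N, p, lo=100, hi=200):
    """Return every group size n in [lo, hi] for which N/n*100,
    rounded half-up to a whole number, equals the reported percentage p."""
    return [n for n in range(lo, hi + 1)
            if math.floor(N / n * 100 + 0.5) == p]

print(possible_n(33, 22))  # income benefits row
print(possible_n(12, 8))   # income protection / private pensions row
```

The first call reproduces Sam's 147-153 range; the second gives 142-160 rather than 142-159, because 12/160 is exactly 7.5% and inclusion at the boundary depends on which rounding convention the authors used.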
 

biophile
Seventh, we analysed data only for those participants where we had data at both baseline and follow-up. This may have introduced some distortions to the results but there were few differences between patients with missing data and those on whom we had complete data.

How come the n values in the 12-month post-randomisation section of Table 2 are different between [N(%) using services] vs [Mean (sd) contacts per user]? There is also a typo in Table 2, in the 6-month pre-randomisation period, the [Medication] category should have a (c) instead of a (b) next to it.
 

Dolphin
I think it might be worthwhile if people pointed out errors, or what look like they might be errors, in the comments section of the site. It might lead to:
(i) people questioning the rigour of the peer-review;
(ii) people questioning the reliability of all the data;
and/or
(iii) simply getting the correct information from the authors in a response. Alternatively, I don't think it looks that good if the authors don't respond.
 

Bob
Hi everyone,

It seems that I have posted quite a lot of rambling rubbish in this thread, whilst I was getting to understand this paper.
So I'd like to thank everyone for being so patient with me, and for helping me to understand it.
Everyone has been great.

I got really annoyed and frustrated when this paper was published and, because of my frustration, I went at it like a bull in a china shop, but that meant that some of my posts were ill-informed and unhelpful, so maybe I should have taken more time before I started posting my premature analyses.

I've now gone back and corrected or deleted all of my posts that had errors in them and that were not part of the discussion. (But I've not significantly changed or deleted any posts that people have responded to.)

Although I have said this repeatedly, always prematurely, I think I might have finally got my head around the bulk of this paper now! (A lot of the stuff that I thought I understood, previously, was incorrect, as many of you already knew.)

Thanks again, for everyone's patience,
Bob


BTW, I've updated an earlier post which outlines all the differences in overall costs and totals.
I find it quite a handy reference, so here it is if anyone else wants to refer to it:
http://forums.phoenixrising.me/inde...ss-of-the-pace-trial.18722/page-5#post-285208
 

Bob


Edit: Please see biophile's post, below, for a full explanation of this issue.


Hi Sam and everyone else,

Sorry about this, but it looks like I've got it wrong again. So please ignore my previous posts about this.

I didn't check all the figures in Table 4: I only checked the top row, which I thought worked with the 'n' from Table 1, as follows:

'n' from Table 1:
APT n= 159
CBT n= 161
GET n= 160
SMC n= 160

Table 4:
APT 28 (18): 18% of 159 = 28.62 (I assumed it was incorrectly rounded down = 28)
CBT 16 (10): 10% of 161 = 16.1 (rounded down to 16)
GET 22 (14): 14% of 160 = 22.4 (rounded down to 22)
SMC 17 (11): 11% of 160 = 17.6 (I assumed it was incorrectly rounded down to 17)

So these figures match the 'n' used for Table 1, if we assume that they have been incorrectly rounded down instead of rounded up.

However once you start on the second row using the 'n' from Table 1, then it all goes a bit random:

APT 33 (22): 22% of 159 = 34.98 (rounded up to 35)
CBT 19 (13): 13% of 161 = 20.93 (rounded up to 21)

And looking at the second row again, but using the 'n' from Table 4, it still doesn't add up, as Sam pointed out:
APT 33 (22): 22% of 141 = 31.02
CBT 19 (13): 13% of 138 = 17.94


So, I can't work out what they've done here.
It seems like a simple enough calculation to work out what percentage of the group claimed benefits.


Edit 1:
I've now used yet another set of 'n', from Tables 2 & 3 (post-randomisation), to look at the second row of Table 4, and it gets a bit more accurate, but it still doesn't quite work, as follows:

'n' from Tables 2 & 3 (post randomisation):
APT n=146
CBT n=145
GET n=140
SMC n=148

Table 4:
APT 33 (22): 22% of 146 = 32.12
CBT 19 (13): 13% of 145 = 18.85
GET 29 (20): 20% of 140 = 28
SMC 20 (14): 14% of 148 = 20.72


Edit 2:
I've worked out what 'n' should be for APT, in Table 4, based on each set of numbers in the APT column, in Table 4. It still doesn't make any sense:
28 (18) n = 155.6
33 (22) n = 150
42 (26) n = 161.5
57 (38) n = 150
10 (6) n = 166.7
12 (8) n = 150
 

Bob
So, here goes for yet another summary, based on my current understanding.
I'm sure that I'm going to regret posting this, as I'll probably find out it's a load of rubbish later.
So please take anything from this that might be useful, but don't rely on it for accuracy.

This is an attempt to explain the basis of the paper, and the results.


First, it's important to note that Healthcare costs include the costs of administering the therapies (APT,CBT,GET,SMC) along with other misc healthcare costs, and that Societal Costs include the Healthcare costs.

So the Societal Costs indicate the overall total costs or savings, consisting of Healthcare costs (including paying for administering the therapies), lost Employment/Production costs, and informal care costs. (The itemised costs, for each of these overall costs, are listed in Table 3).

So, if there is an overall saving for societal costs, it means the savings include the costs of administering the therapies.

CBT+SMC and GET+SMC do not have significantly lower overall (societal) costs compared with SMC alone, but they are lower, according to the adjusted Totals in Table 3. It's important to note that the primary results of the study were not based on the overall, or total, costs and savings. The primary results are based on costs 'per individual improved', or costs per the number of QALYs gained per individual. A QALY (quality-adjusted life year) is a measure of quality of life, based on the answers to a short questionnaire.

Table 3 shows all the costs before and after treatment started (pre-randomisation, and post-randomisation.)

Table 6

Table 6 compares the changes in the costs shown in Table 3 (the changes in the periods before and after treatment started), between the various therapy groups (e.g. CBT+SMC vs SMC.)

Table 6 is the best illustration of the results, although it seems that Table 6 is based on average costs and effects per person, whereas the main results have been calculated slightly differently, using a method called 'bootstrapping' which uses all the individual costs and effects of each person (not the averages) to calculate the primary results of the paper.


Table 6 shows the 'incremental' changes for each therapy. This means that the changes over and above the changes for SMC are shown. (i.e. the difference between CBT+SMC vs SMC alone.)

Table 6 is split into three categories: 1. QALYs. 2. Fatigue (Chalder Fatigue.) 3. Disability (SF-36 Physical Function.)

Looking at each of the rows in the QALY section:

"Incremental effect" indicates how many QALYs were gained per individual as a result of each therapy. (i.e. incremental QALYs gains as a result of CBT and GET. Not the QALYs gained for CBT+SMC.) (Remember that a QALY is a subjective measure of quality of life, based on a questionnaire.) The gains in QALYs were based on measurements taken at baseline and at 52 weeks.

The "incremental healthcare cost" shows the mean incremental healthcare costs per person, for each therapy. (Incremental = costs over and above costs for SMC. i.e. CBT+SMC vs SMC)

"ICER (Healthcare)" shows the mean healthcare cost per QALY gained for each individual. This is the healthcare cost, for each therapy, in order for each individual to gain a QALY, including costs to administer the therapies in order to gain QALYs.

"Incremental Societal cost" shows the mean incremental (overall) societal costs/savings (a negative value is a saving), per individual. Remember that societal costs include healthcare costs, so if societal costs have a negative value, then there is a net overall saving per individual, even taking into account costs for administering the therapies. The (statistically insignificant) societal cost savings seen for CBT and GET are based on improved lost employment costs, and improved informal care costs.

"ICER (societal)" indicates whether CBT+SMC etc., have overall costs or savings, compared with SMC alone, per QALY, per individual. The use of the term "dominant" indicates that there are cost savings for CBT+SMC, compared with SMC alone, and for GET+SMC, compared to SMC alone. (SMC is the control group, and Table 6 indicates the 'difference from SMC', or the incremental costs/savings for each of the therapies, once the changes in the SMC control group have been factored out of each of the therapy groups.) So the results given under "ICER (societal)" indicate that, compared to SMC alone, CBT+SMC and GET+SMC make savings for society for each QALY gained per individual. (And maybe it could be said that, compared to 'no treatment', CBT and GET make savings for society for each QALY gained per individual. But I don't think that it is appropriate to say this.)


In the QALY section, ICER (healthcare) and ICER (societal) illustrate the basis of the primary results of this paper.
(The actual primary results are calculated slightly differently to Table 6, as explained earlier.) The other two sections in Table 6 (fatigue and disability) are said to support the findings of the QALY section.

ICER (healthcare) shows the healthcare cost per QALY gained (i.e. costs to administer the therapies plus various other healthcare costs). The paper says that the NHS values a QALY at £30,000. This means that the improvements in quality of life, measured in QALYs, have been costed at £30,000 per QALY. (I don't know how the NHS calculate this.) So if one QALY can be gained at a healthcare cost of less than £30,000 then apparently the NHS consider it worth administering the therapy.

ICER (healthcare) indicates that CBT and GET gained QALYs at a cost of less than £30,000 each, hence the main conclusion of the paper that CBT and GET are cost effective.
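The threshold logic described above reduces to a couple of lines of code. The function names and the example numbers here are hypothetical; only the £30,000-per-QALY threshold comes from the paper.

```python
def icer(incremental_cost, incremental_qalys):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return incremental_cost / incremental_qalys

def cost_effective(incremental_cost, incremental_qalys, threshold=30_000):
    """Deemed cost effective if a QALY is gained for less than the threshold,
    or if the therapy both saves money and adds QALYs ('dominant')."""
    if incremental_cost < 0 and incremental_qalys > 0:
        return True  # dominant: cheaper AND more effective
    return icer(incremental_cost, incremental_qalys) < threshold

print(icer(1000, 0.05))            # 20000.0 (GBP per QALY)
print(cost_effective(1000, 0.05))  # True: under the 30,000 threshold
```

A negative incremental societal cost with a positive QALY gain is what the paper labels "dominant" in Table 6.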

This is the main basis of the conclusions of the paper, but Figures 1 & 2 demonstrate the primary results which are calculated slightly differently from Table 6.


It should be noted that where the paper says that CBT and GET are more cost effective than SMC, it is being misleading and inaccurate. SMC is the control group, and the paper does not, and cannot, make a judgement on the cost effectiveness of SMC, because a control group is used to factor out any natural fluctuations over time etc.
If a direct comparison is made between CBT only and SMC alone, and GET only and SMC alone, then SMC is always more cost effective.
Parts of the paper are worded and labelled in a confusing and misleading way.
Another way to demonstrate this is that, instead of making comparisons between SMC+CBT and SMC, etc, I think it might be less confusing to think about CBT and GET being more cost effective than 'no treatment', because the paper looks at the relative changes comparing CBT+SMC vs SMC, and GET+SMC vs SMC. But I don't think that it is actually appropriate to say that there is a comparison between each of the therapies and 'no treatment'.

There's not a lot of clarity in the cost-effectiveness paper, but it does use appropriate wording in at least one section:
"The resultant ICER indicates the cost of one extra person achieving such a change as a result of using APT, CBT or GET in addition to SMC compared to SMC alone."
In other words: "CBT and SMC used in combination, compared to SMC alone."


ICER (societal) indicates that societal (overall) cost savings are seen for CBT+SMC and GET+SMC, compared with SMC alone, per QALY gained per individual. (i.e. compared to 'no treatment', CBT and GET make overall cost savings for society for each QALY gained per individual.)
The incremental mean societal savings per individual (not per QALY per individual), for CBT and GET, were not statistically significant, and I'm not sure if it is clear if the savings per QALY, per individual, are significant, from the details given in the paper. I think that the actual ICER societal savings per QALY per individual are not given, but my rough calculations make it: CBT = -£9,431, GET = -£5,743. (These are the savings for CBT+SMC and GET+SMC compared to SMC alone.)


The fatigue and disability sections in Table 6 show the overall costs and savings per person improved, based on the primary outcomes of the original PACE Trial paper.
They are said to support the primary results illustrated in the QALY section of Table 6.


Details for changes in welfare benefits and private financial payments (income protection insurance and private pensions) are given, but are excluded from the calculations for the main results. It's not clear why.
 

Bob
The mean relative societal savings for CBT+SMC vs SMC, and GET+SMC vs SMC, are not 'significant'.
It's interesting to have a think about the societal costs 'per individual improved' for fatigue and disability.
Once the societal costs are calculated 'per individual improved', the differences appear to increase massively.
But I don't know if the societal costs 'per individual improved' show a 'significant' difference from SMC.
(This might be in the paper somewhere - I'll have a look tomorrow.)
 

biophile
McCrone et al probably used the lowest group n value out of the three payment types

Hi Bob. When looking at the first row of Table 4, income benefits for the 6-month pre-randomisation period, the payment n values are clearly not consistent with the group n values given at the top. However, they are entirely consistent with baseline group n values in Table 1. Calculating from % is less accurate than calculating from n, because the % is pre-rounded to a whole number whereas n is the actual value. This is one of the reasons you are seeing inconsistencies (the other reason being that apparent inconsistencies actually exist but do have a reasonable explanation).

For example, you wrote that APT n=28 (18%): 18% of 159 = 28.62 ("incorrectly rounded down to 28"). However, it is more accurate to calculate APT n=28/159, which gives 17.61% and is entirely consistent with the 18% rounded value given. Same goes for CBT, GET, and SMC. Another example is your calculation where you estimate the group n value as payment N divided by (%/100). As you wrote, this does give n=155.6 for APT in the first row of Table 4, but this is false precision because it is based on a pre-rounded whole number. I think Sam Carter was on the right track by also calculating the range of possible group n values; in this case, for APT on the first row, it is 152 to 160, as these are the only group n values for which N=28 rounds to 18%, for those on income benefits during the 6-month pre-randomisation period. n=159 obviously fits into the 152 to 160 range.

Do not bother with Table 2 / Table 3 for group n values for 12-month post-randomisation periods to use on Table 4, as these are different outcomes altogether and the participant data availability may differ.

I crunched a few more numbers. It is highly likely that the 6-month pre-randomisation period group n values for ALL three payment types in Table 4 are the same as Table 1 (APT=159, CBT=161, GET=160, SMC=160), because they are entirely consistent with the % given. However, as others including yourself have already noted, there is a lack of consistency between the n values receiving different payments vs the group n values given for the 12-month post-randomisation period.

So I compared the percentages given in the paper vs those I calculated from given n values, and then determined the possible range of group n values based on payment n and given %:
Income benefits, 12-month post-randomisation period:
APT=33 (22%) or 23.40% if n=141, range = 147-153 if rounded to 22%
CBT=19 (13%) or 13.77% if n=138, range = 141-151/152 if rounded to 13%
GET=29 (20%) or 21.64% if n=134, range = 142-148 if rounded to 20%
SMC=20 (14%) or 13.99% if n=143, range = 138-148 if rounded to 14%

Illness/disability benefits, 12-month post-randomisation period:
APT=57 (38%) or 40.43% if n=141, range = 149-151/152 if rounded to 38%
CBT=56 (38%) or 40.58% if n=138, range = 146-149 if rounded to 38%
GET=52 (36%) or 38.81% if n=134, range = 143-146 if rounded to 36%
SMC=58 (39%) or 40.56% if n=143, range = 147-150 if rounded to 39%

Income protection schemes or private pensions, 12-month post-randomisation period:
APT=12 (8%) or 8.51% if n=141, range = 142-159/160 (actual upper-limit is 159) if rounded to 8%
CBT=17 (12%) or 12.32% if n=138, range = 136/137-147 if rounded to 12%
GET=22 (16%) or 16.42% if n=134, range = 134-141 if rounded to 16%
SMC=11 (7%) or 7.69% if n=143, range = 147-169 (actual upper-limit is 160) if rounded to 7%

Explanation? The authors stated that "we analysed data only for those participants where we had data at both baseline and follow-up". Sam Carter mentioned that the authors may have used the lowest group n value out of the three types of payments. I think that is what happened, looking at the calculations I did. However, this suggests that there is a slight error in the McCrone et al paper for APT participants receiving payments from income protection schemes or private pensions during the 12-month post-randomisation period. This outcome has the lowest n value for APT, one which also borders on the bottom of the possible range. Either it should be 9% not 8%, or n is not 141 but 142-159 (probably 142).

Hope that helps, assuming I'm correct (fairly certain). Simon mentioned the possibility of overlap between participants receiving different types of payments. Not sure how much that would have affected the situation if relevant.

PS - There is no way I'm repeating that colossal unrewarding effort for Table 2, despite help from an XLS document!
 

Bob
biophile, it seems that you've nailed it! Thanks for doing that.

Sam Carter, it seems that I didn't follow your reasoning properly.

So it looks like Sam and biophile got there in the end. Possibly a small mistake for APT: either the last percentage figure given in the column should be 9% rather than 8%, or the 'n' for APT should be 142.
 

Simon
Large reductions in general health costs: real improvement or just displacement?
Comparing healthcare costs excluding the PACE therapies (eg primary care, secondary care, Accident & Emergency) between pre-randomisation and post-randomisation shows substantial cost reductions across all arms of the trial - see table below. These cost reductions completely cover the cost of SMC and offset a substantial proportion of the other therapy costs. But are they genuine gains?

I can't remember where I read this, but at least some research has shown that non-trial healthcare use consistently falls in clinical trials of all sorts of therapy for all sorts of illnesses. The presumption is that if you are receiving regular therapy you are less likely to turn to other treatments, eg complementary therapies, or further investigations of your condition. In the case of PACE, the SMC was specifically dealing with CFS symptoms through medication, displacing the need for at least some GP visits. I'm pretty sure the PACE trial also required participants not to start any other CFS treatment while in the Trial.

[Attached image: table of healthcare costs, pre- vs post-randomisation, by trial arm]


The authors have self-report data, in great detail, on what types of healthcare professionals participants were seeing (eg dentist, physio, neurologist, or MRI scans), so might be able to shed more light on this. But as things stand it's hard to know how much of the savings in healthcare are real (rather than displacement) and would be continued in future years.
 

Dolphin
Yes, good point(s). I know quite a lot of people with ME/CFS tend to like trying something new, therapy-wise, reasonably regularly. So I could well imagine many of the participants would then start looking for other therapies once the trial was over. And if they had specifically been told not to try other therapies (I think this was probably the case), then this would heighten the effect.

Also, people with ME/CFS generally have limited energy budgets so might feel they don't have the time/energy to attend a CAM therapy (say) regularly on top of participation in the trial. And some people might be like me and slightly neglect some aspects of their health e.g. the frequency of going to the dentist for checkups, cleaning, etc. if feeling over-busy.
 

Enid
But the only help that was of any use to me was SMC - CBT & GET deemed inappropriate at any stage, based as they are on the assumption that there is no underlying pathology and that symptoms are maintained by abnormal illness beliefs. My Docs luckily knew better.
 

Simon
The Cost of CFS: Modest effect of CBT versus full recovery
The graph below shows the Healthcare and Societal costs of CFS before and after CBT (average effect) - and for comparison:
  • 'Recovered'; ie costs associated with a patient if they recover completely as a result of therapy
  • The Recovery column shows the costs as they would have appeared in the PACE trial during the year of therapy. Even a curative therapy would take time to work, so the full benefits of a Recovered patient won't show in the first year - which is the year that PACE measured.
As you will see, the average gains from CBT are rather small compared with the gains for full recovery - and the majority of the CBT/SMC cost gains shown are due to SMC (or control group effects) rather than CBT itself.

I hope this makes sense. Please let me know if anything needs clarification
[Attached graph: healthcare and societal costs of CFS before and after CBT, vs full recovery]


Assumptions
I made several assumptions to calculate the cost savings associated with recovery. The ones with the most impact are:
  • Lost employment: lost days for recovered = UK average lost days per worker (adjusted for age and sex)
  • Informal care: no informal care required by recovered patients
The situation with other healthcare costs, eg neurologists, is more complicated, as I don't have norms for the general population's use of such specialists, so I made generally conservative assumptions here. However, healthcare costs are a relatively small part of the total picture.

I can provide more detail on the assumptions if anyone is interested. And this is very much a draft; I can do more if there is any enthusiasm for it.
 


Dolphin
Thanks.

I'd be inclined to think if the results were a lot better for the second 6 months, we would have heard about it i.e. the authors would have told us.

Of course, your figures show there is likely to be a limit to how much better they could have been (i.e. if they were better for the second six months in comparison to the first six months, they could not have been that much better given the total figures for the 12 months, presuming there was not an extremely dramatic worsening in the first six months). The authors could either have included extra data in an appendix or simply mentioned the point in a sentence or two if there was even some sort of reasonable reduction for the second six months.
 

Simon
Yes, that struck me when I first read the paper: why not discount the first 6 months, when therapy is ongoing? The Trial data clearly shows that fatigue and (self-reported) physical function were significantly higher in the second 6 months than the first 6 months. Also, any employment effect is likely to take a while to show, even for those in part-time work increasing their hours. It's a bit of a mystery to me.