A cost-effectiveness analysis of the PACE trial

user9876

Senior Member
Messages
4,556
I think this paper may be important in looking at the EQ-5D results. It will take me a while to find the time to read it properly, so I thought I would post it now.

Basically, it's saying that the statistical significance of comparisons is heavily dependent on the weights used to compute the single utility value, so a result could be significant in the UK but not in the Netherlands:

"the problem of using an index created from weighted profile data is that any statistical analysis is affected by information that is not inherent to the sample data; variations in the index reflect variations not only in the sample but also in the weights. Statistical tests of significance also introduce complexities into the weighting system via the variance, in particular interactions between weights, levels and dimensions that are not in the original weighting structure. This may have the uncomfortable implication that conventional significance tests are inappropriate and give misleading levels of significance. This is most obviously an issue where the index is intended as a convenient summary of descriptive data, but the problem will also apply where it is intended as a value or utility, unless the underlying weights can be regarded as fixed. If they are regarded as variable, this casts some doubt on the results from very many published cost-effectiveness studies."

Just to warn people, it's quite mathematical.

http://openaccess.city.ac.uk/1503/1/0810_parkin-et-al.pdf
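
To make the idea concrete, here's a toy sketch (synthetic data and invented weights - not the real UK or Dutch value sets): the same profile data can give a clearly significant group difference under one tariff and a non-significant one under another, because both the index and its variance depend on the weights.

```python
# Toy sketch: synthetic EQ-5D-style profiles scored under two invented
# tariffs. The group difference in the utility index can be significant
# under one weighting scheme and not the other, because the index and
# its variance both depend on the weights.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 80

# Per-patient decrements on 5 dimensions (0 = no problems, 2 = severe).
control = rng.integers(0, 3, size=(n, 5))
improvement = np.zeros((n, 5), dtype=int)
improvement[:, 1:] = rng.integers(0, 2, size=(n, 4))  # gains on dims 2-5 only
treated = np.clip(control - improvement, 0, 2)

# Two invented tariffs: A spreads its weight evenly, B loads on dimension 1.
tariff_a = np.array([0.10, 0.08, 0.08, 0.08, 0.08])
tariff_b = np.array([0.35, 0.02, 0.02, 0.02, 0.02])

def utility(profiles, tariff):
    return 1.0 - profiles @ tariff  # index = 1 minus weighted decrements

for name, tariff in [("tariff A", tariff_a), ("tariff B", tariff_b)]:
    t, p = stats.ttest_ind(utility(treated, tariff), utility(control, tariff))
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")  # typically significant only under A
```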
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I'm trying to summarise the most interesting data published in the PACE Trial's cost analysis paper.
Before I try to make my summary more succinct, I'm posting my analysis so far.
If anyone is able and willing, I'd be very grateful for any feedback.
(i.e. Have I made any obvious and glaring errors?)
I'll have to study the whole paper again, to make sure I haven't made any mistakes, and I'm obviously not expecting anyone else to give that sort of detailed feedback.


Cost Effectiveness Analysis paper:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0040808


Please note that I am ignoring APT in this analysis, so wherever I say that there were 'no significant differences', etc., between treatment groups, this might not always apply when comparing the therapy groups to APT.



Lost employment hours

They've given us lost employment 'hours' and lost employment 'costs', but no details of the number of individuals back at work, or the number who increased their working hours.


Lost employment 'days' are given in Table 2, and lost employment 'costs' are given in Table 3.

The paper says that CBT and GET did not improve employment prospects:
"There was no clear difference between treatments in terms of lost employment."

Note that lost employment improved in the CBT, GET and SMC groups, but the differences between CBT/GET and SMC were not significant, so CBT/GET did not improve outcomes relative to SMC alone.





Table 4: Welfare Benefits and Other Financial Payments

http://www.plosone.org/article/info...RI=info:doi/10.1371/journal.pone.0040808.t004

Note that these figures (for benefits) are not included in the cost effectiveness evaluations.
So although this data is published, it isn't used for any analysis.
I think that the 'benefits' data is the only 'cost' data in the paper that is excluded from the final analysis.

For the overall differences, for all (welfare and private) 'benefits' taken as a whole, the paper seems to assert that there was no significant difference between the CBT/GET groups and the SMC control group, although it's not very clear exactly what they mean by 'benefits' in the following text (I think they are lumping all private and welfare benefits together):
"However, with the exception of a difference between CBT and APT, there were no significant differences in either lost work time or benefits between the treatments during follow up. In fact, benefits increased across all four treatments."


Note that they say that "benefits increased across all four treatments." (So, overall benefits increased after treatment with GET and CBT, as well as with SMC.)
So, for overall benefits claims (all welfare and private benefits, lumped together), there was an absolute increase in the proportion of participants making claims, in each of the therapy groups.

For both 'income-related benefits' and 'income protection schemes or private pensions', the increases in claimants for CBT/GET are higher (worse) than for SMC, but they just say that the differences were not 'substantial'. They don't say that the differences are not significant, so the outcomes for CBT and GET might be significantly worse, when compared with the SMC control group, in both of these benefit categories.

Interestingly, there is no data specifically in relation to 'private medical insurance' claims. They only publish data for income protection schemes, and private pensions. I don't know if they collected data for private medical insurance. If they did, then perhaps the data wasn't to their liking, because they didn't include it.



Here is a breakdown of the individual types of benefit claims (it includes private 'benefits'):


Income-Related Benefits:
The proportion of participants claiming Income-related benefits increased in every therapy group.
Looking at the unadjusted figures, there is little difference between the changes in each therapy group (CBT, GET, and SMC), so it looks like CBT and GET made no significant difference to income-related benefits.

The text says:
"Relatively few patients were in receipt of income-related benefits or payments from income protection schemes and differences between groups were not substantial."


Illness/disability benefits:
The proportion of participants claiming illness/disability benefits increased in each therapy group.
By my estimation, using the unadjusted figures, CBT & GET resulted in a relatively lower increase in numbers on illness/disability benefits, when compared with the SMC control group (i.e. CBT and GET resulted in a less bad outcome relative to SMC, but there was still an absolute increase in the CBT and GET groups). By my estimation, the increase was about 12 or 13 percentage points smaller for CBT/GET than for SMC.

The paper doesn't comment on this. It just says:
"Receipt of benefits due to illness or disability increased slightly from baseline to follow-up (Table 4). Patients in the SMC group had the lowest level of receipt at baseline but the figures at followup were similar between groups."

They seem to be looking at absolute numbers claiming benefits in each group, rather than the relative changes in numbers claiming benefits in each group over time. So they completely fail to comment on the relative changes in illness/disability benefits. Maybe there's no statistical significance but they don't make that clear.

So for illness/disability benefits, there were absolute increases for CBT and GET, but relatively lower increases for CBT/GET than for SMC. The paper doesn't seem to comment on whether the differences between the changes in each group are significant in this category, so I can't comment.





Income protection schemes or private pensions:
The proportion of participants claiming for income protection schemes or private pensions was higher in every therapy group.
And CBT and GET both resulted in relative increases in claims compared with SMC (but I don't know if these increases are statistically significant) in the private benefits category (payments from income protection schemes or private pensions).
(Using the unadjusted figures, there was roughly a 4 to 6 percentage point increase in participants making claims in the CBT and GET groups, compared with SMC.)

Keeping in mind that at least one of the authors works for an insurance company, the paper avoids commenting on the increase in payments from income protection schemes and private pensions, as a result of CBT and GET:
"Relatively few patients were in receipt of income-related benefits or payments from income protection schemes anddifferences between groups were not substantial."
(Note that they do not say that the differences were not 'significant'; they just say 'not substantial'! Crafty!)
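
Since the paper only says 'not substantial', here's a rough back-of-envelope check of what size of gap would even be detectable with roughly 160 participants per arm. The proportions below are hypothetical round numbers, not the actual Table 4 figures, and this is no substitute for the paper's adjusted analysis:

```python
# Back-of-envelope two-proportion z-test with ~160 participants per arm.
# The proportions are hypothetical round numbers for illustration, not
# the actual Table 4 figures.
from math import sqrt
from scipy.stats import norm

def two_prop_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test on sample proportions p1 and p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# A 5-point gap in claim rates (e.g. 20% vs 15%) is well within noise...
z, p = two_prop_z(0.20, 160, 0.15, 160)
print(f"5-point gap:  z = {z:.2f}, p = {p:.3f}")  # p around 0.24

# ...whereas a 13-point gap (e.g. 48% vs 35%) would reach significance.
z, p = two_prop_z(0.48, 160, 0.35, 160)
print(f"13-point gap: z = {z:.2f}, p = {p:.3f}")  # p around 0.02
```

So a 4 to 6 point difference with these group sizes probably couldn't reach significance anyway, which makes the choice of 'not substantial' over 'not significant' even harder to read.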








I'll try to make a succinct summary soon, but in the meantime here's a temporary, very brief summary, which I think is safe enough to use, considering the lack of detailed analysis for each benefit category in the published paper.
(My issue with the following summary is that I'm not sure if the differences between therapy groups are insignificant if we separate 'private payment claims' from 'welfare benefit claims' - there might only be insignificant differences between the therapy groups when all the 'benefit' categories are lumped together.)

Brief summary:

Considering the lack of detail in some of the cost-analysis paper's analysis, it seems safe to say that CBT and GET have not resulted in significant improvements in:
1. Employment hours,
2. Welfare benefit claims (consisting of income-related and illness/disability benefits), or
3. Private payment claims (consisting of payment protection insurance and private pensions).

CBT and GET actually resulted in worse outcomes (when using SMC as a control group) for private payment claims (which seem to consist of payment protection plans and private pensions).
 
Messages
5,238
Location
Sofa, UK
One little detail of all that strikes me as mildly interesting, just as it did when I first read it:

"Receipt of benefits due to illness or disability increased slightly from baseline to follow-up (Table 4). Patients in the SMC group had the lowest level of receipt at baseline but the figures at followup were similar between groups."

What seems odd to me about that sentence is that the various groups at baseline are supposed to be randomly allocated (I presume there's no relevant systematic difference in that allocation to the 4 treatment groups, at least not one that's acknowledged?) and therefore there's supposed to be no statistical significance to any difference between them at baseline.

Yet they note in their text that the SMC group had "the lowest level" at baseline, even though (if the allocation is valid) that difference should not be statistically significant. I guess the precise "lowest level" here is in Table 4, so one can check the actual figures and see how much of a difference is required to justify noting that it's "the lowest" - but it should be within the margins of random noise. And if so, then surely it's sloppy to turn that data into this text - if the difference isn't significant in any way, what rigorous methodology allows them to write it up like this? But this is how their raw data get translated into misleading sentences...

In this case, it might appear on casual reading of this sentence that SMC led to an increase in benefits to a greater extent than the other treatments. It would really be very easy to read this particular sentence and think there is some small degree of significance (of unstated size) to that 'finding'. Yet there's no indication of how big an effect that was, in this text, and if it was statistically significant in any way then that means the study design was flawed and the SMC group were significantly different in this respect, at baseline, from the other groups. Why, then, are they allowed to even write a sentence like this? In short, if it is not statistically significant then what is the value of noting it?

So I think it just highlights the looseness - and uselessness - of the way they use this kind of language. They seem to do it habitually, and frequently they really over-play it to the benefit of whatever case they want to make. In this particular case, it seems like a rare example where the 'spin' is of no particular benefit to any particular point they might want to make. So one can even imagine that this kind of translation of data into technically 'true' but statistically meaningless soundbites is so habitual in their everyday working practices that they don't even question whether it's valid or acceptable to write text like this - and maybe (a stretch, I know) they don't even notice when their personal prejudices come out in the words they choose, the conclusions they emphasise, and the sentences they choose to construct.

That's just another part of the reason why it's so important to just get the raw data, and then have a variety of academic statistical work assessing what valid conclusions can be drawn from that data. All this method of writing up data is giving us is the cherry-picking of soundbites based on the data, driven by the writers' prejudices. There's nothing scientific about that process as far as I can see.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Does anyone understand exactly how Table 2 should be interpreted re lost employment in relation to the number using services?

For the number of participants "using services", does it indicate:

1. The number of participants for whom the lost employment data was relevant?
2. The number of participants who lost some (any amount of) days from work due to illness?
3. Some other interpretation, based on the "human capital approach" (the method used to calculate lost employment in this paper; see the toy sketch below)?

Here's a link for Table 2:
http://www.plosone.org/article/info...RI=info:doi/10.1371/journal.pone.0040808.t002

Edit: I've changed these options, since posting.
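
For reference, the "human capital approach" simply values lost employment as time off work multiplied by a wage rate. A toy version (the wage figure is invented, not one of the unit costs used in the paper):

```python
# Toy illustration of the human capital approach: lost employment is
# valued as days off work multiplied by a daily wage rate. The wage
# figure is invented, not the unit cost used in the paper.
DAILY_WAGE_GBP = 100.0  # hypothetical average gross daily wage

def lost_employment_cost(days_off_work: float) -> float:
    return days_off_work * DAILY_WAGE_GBP

# e.g. a participant off work 3 days a week for 26 weeks:
print(lost_employment_cost(3 * 26))  # 7800.0
```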
 

Enid

Senior Member
Messages
3,309
Location
UK
Following it through, Bob, and totally bemused - like, one day the bathroom, even a stroll down the road (if lucky). Any chance the PACE researchers seriously found answers?
 

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
I'm not going to read/re-read all 300 messages of this thread to see if this has been brought up before, but I was looking at this paper and I noticed the following:

They claim that APT and SMC have higher informal care costs compared to CBT and GET. But look at what changed due to the trial:

6 month pre-randomization period:

Patients utilizing informal care:
key: therapy (total in group): n (%) using services... mean (sd) contacts per user
APT (n=159): 118 (74%)... 11.5 (11.1)
CBT (n=161): 106 (66%)... 10.4 (8.3)
GET (n=160): 120 (75%)... 9.6 (9.3)
SMC (n=160): 128 (80%)... 12.3 (13.7)

aside: why do we have one extra patient in CBT and one fewer in APT anyway?

12-month post-randomization period:

key: therapy (total in group): n (%) using services... mean (sd) contacts (n)
APT (n=146): 108 (74%)... 11.0 (10.7) (n=159)
CBT (n=145): 96 (66%)... 8.0 (8.6) (n=161)
GET (n=140): 98 (70%)... 7.7 (8.7) (n=160)
SMC (n=148): 111 (75%)... 11.4 (11.6) (n=160)

the APT and SMC groups already had the highest usage of informal care.

the percentage of patients needing informal care - not counting dropouts - held steady in APT and CBT. It dropped by exactly the same number of points in GET and SMC.

mean number of contacts per user - not counting dropouts - fell across all groups, but slightly more in CBT and GET. It has already been noted that these patients were encouraged, respectively, to stop seeking as much help from family (CBT) and to do more activity, potentially including housework (GET).
 

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
ETA: Dolphin says this is not relevant; it seems I misunderstood.

also wondering why the contacts per patient was calculated by the original group sizes instead of by the group sizes at 12 months (or possibly, by the number of patients using that service?)

If I recalculate the mean using the group sizes at 12 months, the table looks like this:

pre:
APT (n=159): 118 (74%)... 11.5 (11.1)
CBT (n=161): 106 (66%)... 10.4 (8.3)
GET (n=160): 120 (75%)... 9.6 (9.3)
SMC (n=160): 128 (80%)... 12.3 (13.7)

post:
APT (n=146): 108 (74%)... 11.0 → 12.0 (10.7) (n=159)
CBT (n=145): 96 (66%)... 8.0 → 8.9 (8.6) (n=161)
GET (n=140): 98 (70%)... 7.7 → 8.8 (8.7) (n=160)
SMC (n=148): 111 (75%)... 11.4 → 12.3 (11.6) (n=160)

Mean numbers of contacts for APT and SMC patients have now risen slightly compared to pre-trial. Numbers of contacts for CBT and GET have now dropped by about one fewer contact per patient than with the other calculation. GET now changes due to the trial by less than one contact per patient, and CBT by only one and a half. I wonder what that does to the cost-effectiveness calculation?

Tricksy.

I don't know if the SD changes due to the sample size being different.
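
For anyone who wants to reproduce the recalculation: recover the total contacts from the published mean and the original group size, then divide by the group size at 12 months. (A sketch; it assumes the published means really were computed over the original ns.)

```python
# Sketch of the recalculation above: back out total contacts from the
# published mean (which appears to be computed over the original group
# size), then divide by the group size at 12 months instead.
published = {
    # arm: (published mean contacts, original n, n at 12 months)
    "APT": (11.0, 159, 146),
    "CBT": (8.0, 161, 145),
    "GET": (7.7, 160, 140),
    "SMC": (11.4, 160, 148),
}

for arm, (mean, n_orig, n_12m) in published.items():
    total_contacts = mean * n_orig
    print(f"{arm}: {mean:.1f} -> {total_contacts / n_12m:.1f} contacts per patient")
# APT: 11.0 -> 12.0, CBT: 8.0 -> 8.9, GET: 7.7 -> 8.8, SMC: 11.4 -> 12.3
```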

ETA:

comparing differences to favored treatment, CBT, to see what changes:

pre:
APT (n=159): 118 (74%)... 11.5 (11.1)... +1.1
CBT (n=161): 106 (66%)... 10.4 (8.3)... tare
GET (n=160): 120 (75%)... 9.6 (9.3)... -0.8
SMC (n=160): 128 (80%)... 12.3 (13.7)... +1.9

post compared to post:
APT (n=146): 108 (74%)... 11.0 → 12.0 (10.7) (n=159)... +3.0 → +3.1
CBT (n=145): 96 (66%)... 8.0 → 8.9 (8.6) (n=161)... tare
GET (n=140): 98 (70%)... 7.7 → 8.8 (8.7) (n=160)... -0.3 → -0.1
SMC (n=148): 111 (75%)... 11.4 → 12.3 (11.6) (n=160)... -3.4 → +3.4

It does appear to affect the frequency of interactions, although I couldn't tell you whether this is statistically significant (I forgot how to do these things). Other than the reason above, another potential confounding factor could be that the weaker patients might have dropped out of CBT and especially GET.

The SD fell across all intervention groups, which could potentially be accounted for by weaker patients dropping out, especially if it's claimed some are improving. If 20% are improving and the others are not, the SDs should be getting bigger, right?
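
As a quick check of that intuition, here's a simulation with made-up numbers: if 20% of a group improves markedly and the rest do not, the SD of the outcome widens relative to a group where nobody really changes.

```python
# Made-up simulation: a group where 20% improve markedly should show a
# WIDER spread of outcomes than a group where nobody really changes.
import numpy as np

rng = np.random.default_rng(1)
n = 160
baseline = rng.normal(10.0, 3.0, size=n)            # e.g. contacts per user

no_change = baseline + rng.normal(0, 1.0, size=n)   # everyone drifts a little
mixed = no_change.copy()
improvers = rng.choice(n, size=n // 5, replace=False)
mixed[improvers] -= 6.0                             # 20% improve markedly

print(f"SD, nobody changes: {no_change.std():.2f}")
print(f"SD, 20% improving:  {mixed.std():.2f}")     # noticeably larger
```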

remind me to check other entries in the table and other tables, some other day...
 
Valentijn

Messages
15,786
also wondering why the contacts per patient was calculated by the original group sizes instead of by the group sizes at 12 months (or possibly, by the number of patients using that service?)

Nicely spotted ... where the hell did they learn math that they think it's okay to get an average by dividing the total number of contacts by the "number in the group" + "some extra random number"? They've made a very basic error, which might look very bad for them, especially if it's pointed out in a letter to be published, or brought to the publisher's attention if it's too late for responses to the paper.
 

Dolphin

Senior Member
Messages
17,567
also wondering why the contacts per patient was calculated by the original group sizes instead of by the group sizes at 12 months (or possibly, by the number of patients using that service?) [...]
Just looking at this quickly. I'm not sure if this would be an example where one would use "last value carried forward" or similar. This can mean that if you don't have the figures for certain individuals on completion, you use the last value you have for them, e.g. the baseline score.
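
A minimal sketch of what "last value carried forward" would look like here (invented mini-dataset, just to show the mechanics):

```python
# Minimal sketch of "last value carried forward" on an invented dataset:
# where the 12-month value is missing, reuse the baseline value instead.
baseline = {"p1": 12, "p2": 9, "p3": 15, "p4": 7}       # contacts at baseline
followup = {"p1": 8, "p2": None, "p3": 10, "p4": None}  # None = dropped out

locf = {pid: followup[pid] if followup[pid] is not None else baseline[pid]
        for pid in baseline}
print(locf)  # {'p1': 8, 'p2': 9, 'p3': 10, 'p4': 7}

# Note the consequence: every arm keeps its original n. The differing ns
# at the two timepoints in the paper are what suggest LOCF was not used.
```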
 
Valentijn

Messages
15,786
Just looking at this quickly. I'm not sure if this would be an example where one would use "last value carried forward" or similar. This can mean that if you don't have the figures for certain individuals on completion, you use the last value you have for them, e.g. the baseline score.
That wouldn't make much sense though, since the point is to evaluate changes in how many services CFS patients are using. And if you're averaging, then resorting to old data when new data is missing would just make an unnecessary mess of things - far more logical to just use the new data. If anything, it would seem more appropriate to exclude the old data from the dropouts from final calculations, to evaluate change.

I'm also not sure if reusing old data when new data is missing could account for the changes. SMC, for example, had no encouragement to stop seeing doctors in any form, and if using Willow's calculations their doctor visits stayed the same instead of decreasing.

Of course, they'll never release the actual numbers, so no one will ever be able to check. Convenient! :cautious:
 

Dolphin

Senior Member
Messages
17,567
Dolphin said:
Just looking at this quickly. I'm not sure if this would be an example where one would use "last value carried forward" or similar. This can mean that if you don't have the figures for certain individuals on completion, you use the last value you have for them, e.g. the baseline score.
Valentijn said:
That wouldn't make much sense though, since the point is to evaluate changes in how many services CFS patients are using. And if you're averaging, then resorting to old data when new data is missing would just make an unnecessary mess of things - far more logical to just use the new data.
I'm not sure how this is different from other cases where "last value carried forward" is used with missing data in outcome measures.

I'm not sure of the rationale for it - perhaps it is that if there is more missing data in some arms than in others, one might suspect that the results there were worse.
 

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
Just looking at this quickly. I'm not sure if this would be an example where one would use "last value carried forward" or similar. This can mean that if you don't have the figures for certain individuals on completion, you use the last value you have for them, e.g. the baseline score.

So they might have a legitimate reason for using the original figures. OK, thanks. As Valentijn said, this makes the number of visits dicey (maybe that's why CBT and GET decreased more: fewer patients... but it's hard to tell for sure), but it's difficult to know how to compensate for dropouts in any case.
 

Dolphin

Senior Member
Messages
17,567
So they might have a legitimate reason for using the original figures. OK, thanks. As Valentijn said, this makes the number of visits dicey (maybe that's why CBT and GET decreased more: fewer patients... but it's hard to tell for sure), but it's difficult to know how to compensate for dropouts in any case.
No, it doesn't look like they used "last value carried forward", because if they had, the sample sizes would be the same at both timepoints. What I was trying to say is that this is what they could have done to deal with missing data (but didn't).
 

Tom Kindlon

Senior Member
Messages
1,734
Some correspondence/e-letters between me and the lead author of this paper:

http://www.plosone.org/annotation/listThread.action?root=76469

Title: The statistical plan also mentioned analysing the data based on zero cost for informal care. This is not mentioned.

Posted by tkindlon on 23 Dec 2013 at 20:11 GMT

SMcGrath has highlighted that the results would look considerably different if informal care was calculated at minimum wage, a view the lead author has essentially agreed with [1,2].

Just to point out that the statistical plan mentioned another analysis, zero cost for informal care[3]:

"The main analyses will use an informal care unit cost based on the replacement method (where the cost of a homecare worker is used as a proxy for informal care). We will alternatively use a zero cost and a cost based on the national minimum wage for informal care."

Such a result would make a bigger difference again to the societal cost estimates.

This makes the authors' claim about sensitivity analyses seem particularly odd[4]:

"Fourth, we made assumptions regarding the value of unpaid care from family and friends and lost employment. However, sensitivity analyses revealed that the results were robust for alternative assumptions."

References:

[1] SMcGrath. Can the authors show the Sensitivity Analysis results for Societal Benefits for CBT & GET? http://www.plosone.org/an...

[2] McCrone P. RE: Can the authors show the Sensitivity Analysis results for Societal Benefits for CBT & GET? http://www.plosone.org/an...

[3] Walwyn R, Potts L, McCrone P, Johnson AL, Decesare JC, Baber H, Goldsmith K, Sharpe M, Chalder T, White PD. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials. 2013 Nov 13;14:386. doi: 10.1186/1745-6215-14-386.

[4] McCrone P, Sharpe M, Chalder T, Knapp M, Johnson AL, Goldsmith KA, White PD. Adaptive pacing, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome: a cost-effectiveness analysis. PLoS One. 2012;7(8):e40808.

Competing interests declared: I work in a voluntary capacity for the Irish ME/CFS Association

------------

Title: RE: The statistical plan also mentioned analysing the data based on zero cost for informal care. This is not mentioned.

spjupmc replied to tkindlon on 01 Jan 2014 at 16:16 GMT

If a smaller unit cost for informal care is used, such as the minimum wage rate, then there would remain a saving in informal care costs in favour of CBT and GET but this would clearly be less than in the base case used in the paper. If a zero value for informal care is used then the costs are based entirely on health/social care (which were highest for CBT, GET and APT) and lost employment which was not much different between arms. In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial. Likewise, to assume it only has the value of the minimum wage is also very restrictive. In other studies we have costed informal care at the high rate of a home care worker. If we do this then this would show increased savings shown for CBT and GET.

Competing interests declared: I am the lead author of the paper.



--------------------

Title: It was the investigators themselves that chose the alternative assumptions (i.e. a zero cost rate or national minimum wage rate for informal care)

tkindlon replied to spjupmc on 15 Feb 2014 at 20:31 GMT

I would like to thank the lead author for replying. I imagine that, as the member of the team with technical expertise, there is a chance he may have been directed by others in how this trial was reported and what language was used.

To get back to the specifics: nothing he has said has justified what they claimed in the paper:

"However, sensitivity analyses revealed that the results were robust for alternative assumptions."

The two alternative assumptions mentioned in the statistical plan were "a zero cost and a cost based on the national minimum wage for informal care." The results are not robust for such scenarios.

Remember, it was the investigators themselves who chose the alternative assumptions. If it's "controversial" now to value informal care at zero cost, it was similarly "controversial" when they decided, before the data was looked at, to analyse the data in this way.

There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or the findings for them are misrepresented. If a drug company behaved in this way, it would be frowned on. Unfortunately, there is less scrutiny of the reporting of trials of non-pharmacological interventions, despite the fact that some of those involved in such trials can have their own competing interests. This may be particularly so in the field of ME/CFS, where standards seem to be particularly lax e.g. [1].

Medicine is a serious business: people's health and lives depend on it. Some treatments will be offered, and others not offered, due to papers such as this. This will often involve the expenditure of considerable amounts of taxpayers' money. Investigators should not mislead readers.

Again, I thank this particular author for replying but I think it would have been better if the paper itself, which I imagine will be the main focus in future reviews and analyses, had not reported the results in such a way.

References:

1. White PD, Goldsmith K, Johnson AL, Chalder T, Sharpe M. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med. 2013 Oct;43(10):2227-35. doi: 10.1017/S0033291713000020.

Competing interests declared: I work in a voluntary capacity for the Irish ME/CFS Association
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
@Tom Kindlon

I don't understand: what was informal care as a cost being used for - a replacement for the cost of a therapist for CBT and GET?

And if in the paper this cost was zero, is White saying that even if they had used the minimum wage, or an average of therapists' wages, GET and CBT would still have shown benefit?

Kind of begs the question then: why not simply show the costs of therapy - doesn't it?

Thanks.
 

Tom Kindlon

Senior Member
Messages
1,734
@Tom Kindlon

I don't understand: what was informal care as a cost being used for - a replacement for the cost of a therapist for CBT and GET?

And if in the paper this cost was zero, is White saying that even if they had used the minimum wage, or an average of therapists' wages, GET and CBT would still have shown benefit?

Kind of begs the question then: why not simply show the costs of therapy - doesn't it?

Thanks.
Firstly, the costs of the therapies were given.

Bodies such as NICE will often analyse the costs of a therapy against the savings it produces. In this case, the only savings were in terms of patients in the CBT and GET groups reporting that they required less informal care (the investigators probably imagined that there would be less sickness absence from work, but this didn't happen). Depending on how one values such care, one can then decide whether one considers the therapies worth paying for or not. The authors mentioned in the statistical plan that they would look at informal care costs in three ways:
"The main analyses will use an informal care unit cost based on the replacement method (where the cost of a homecare worker is used as a proxy for informal care). We will alternatively use a zero cost and a cost based on the national minimum wage for informal care."

However, when they published the paper, they only reported one approach, costing informal care at £14.60 per hour.

They claimed in the paper: "However, sensitivity analyses revealed that the results were robust for alternative assumptions."
However, this is simply not true for the two other scenarios mentioned in the statistical plan, i.e. using "a zero cost and a cost based on the national minimum wage for informal care".
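
To make that concrete, here's a sketch of the sensitivity at issue. Only the £14.60/hour replacement cost comes from the paper; the minimum-wage rate is approximate for the period, and the hours-saved figure is invented purely for illustration:

```python
# Hypothetical sketch: the informal-care saving attributed to CBT/GET
# scales linearly with the unit cost chosen, and vanishes at zero cost.
# Only the 14.60/hour replacement rate is from the paper; the minimum
# wage is approximate for 2011, and the hours figure is invented.
HOURS_OF_INFORMAL_CARE_SAVED = 100.0  # invented per-patient difference vs SMC

unit_costs_gbp = {
    "replacement (homecare worker)":   14.60,  # rate used in the paper
    "national minimum wage (approx.)":  6.00,  # roughly the 2011 adult rate
    "zero cost":                        0.00,  # third scenario in the plan
}

for scenario, rate in unit_costs_gbp.items():
    saving = HOURS_OF_INFORMAL_CARE_SAVED * rate
    print(f"{scenario}: informal-care saving = GBP {saving:7.2f}")
# Conclusions that rest on this saving are therefore not robust across
# the three scenarios the statistical plan committed to.
```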