
Economic evaluation of multidisciplinary rehabilitation treatment versus CBT for patients with CFS

hixxy

Senior Member
Messages: 1,229
Location: Australia
PLoS One. 2017 Jun 2;12(6):e0177260. doi: 10.1371/journal.pone.0177260. eCollection 2017.

Economic evaluation of multidisciplinary rehabilitation treatment versus cognitive behavioural therapy for patients with chronic fatigue syndrome: A randomized controlled trial.

Vos-Vromans D, Evers S, Huijnen I, Köke A, Hitters M, Rijnders N, Pont M, Knottnerus A, Smeets R.

Abstract

BACKGROUND:

A multi-centre RCT has shown that multidisciplinary rehabilitation treatment (MRT) is more effective in reducing fatigue over the long-term in comparison with cognitive behavioural therapy (CBT) for patients with chronic fatigue syndrome (CFS), but evidence on its cost-effectiveness is lacking.

AIM:
To compare the cost-effectiveness of MRT versus CBT for patients with CFS from a societal perspective.

METHODS:
A multi-centre randomized controlled trial comparing MRT with CBT was conducted among 122 patients with CFS diagnosed using the 1994 criteria of the Centers for Disease Control and Prevention and aged between 18 and 60 years. The societal costs (healthcare costs, patient and family costs, and costs for loss of productivity), fatigue severity, quality of life, quality-adjusted life-year (QALY), and cost-effectiveness ratios (ICERs) were measured over a follow-up period of one year. The main outcome of the cost-effectiveness analysis was fatigue measured by the Checklist Individual Strength (CIS). The main outcome of the cost-utility analysis was the QALY based on the EuroQol-5D-3L utilities. Sensitivity analyses were performed, and uncertainty was calculated using the cost-effectiveness acceptability curves and cost-effectiveness planes.

RESULTS:
The data of 109 patients (57 MRT and 52 CBT) were analyzed. MRT was significantly more effective in reducing fatigue at 52 weeks. The mean difference in QALY between the treatments was not significant (0.09, 95% CI: -0.02 to 0.19). The total societal costs were significantly higher for patients allocated to MRT (a difference of €5,389, 95% CI: 2,488 to 8,091). MRT has a high probability of being the most cost effective, using fatigue as the primary outcome. The ICER is €856 per unit of the CIS fatigue subscale. The results of the cost-utility analysis, using the QALY, indicate that the CBT had a higher likelihood of being more cost-effective.

CONCLUSIONS:
The probability of being more cost-effective is higher for MRT when using fatigue as primary outcome variable. Using QALY as the primary outcome, CBT has the highest probability of being more cost-effective.

https://www.ncbi.nlm.nih.gov/pubmed/28574985
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0177260
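To see how the abstract's headline figures fit together, here is a minimal sketch (my own illustration, not code from the paper; the implied effect sizes are back-calculated approximations from the reported ICER and cost difference):

```python
# Minimal sketch of how an incremental cost-effectiveness ratio (ICER) relates
# the abstract's numbers. Illustrative only; the per-patient effect differences
# are back-calculated from the reported figures, not taken from the paper.

def icer(delta_cost, delta_effect):
    """Extra cost per extra unit of effect for MRT relative to CBT."""
    if delta_effect == 0:
        raise ValueError("ICER undefined when there is no difference in effect")
    return delta_cost / delta_effect

delta_cost = 5389.0  # reported extra societal cost of MRT vs CBT, in euros

# Cost-effectiveness analysis (fatigue as outcome): an ICER of ~856 EUR per
# CIS fatigue point implies a mean between-group difference of roughly
# 5389 / 856 ≈ 6.3 points on the CIS fatigue subscale.
print(round(delta_cost / 856.0, 1))

# Cost-utility analysis (QALYs as outcome): dividing by the (non-significant)
# 0.09 QALY difference gives roughly 60,000 EUR per QALY gained.
print(round(icer(delta_cost, 0.09)))
```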
 

Snow Leopard

Hibernating
Messages: 5,902
Location: South Australia
These people need to look up Randomised Controlled Trial in the dictionary... Unblinded trials are not "Randomised Controlled Trials". There is no control for the various reporting biases.

The question arises whether our findings reflect an absence of a clinically significant treatment effect or, alternatively, a lack of sensitivity of the generic quality of life measures to detect a clinically meaningful improvement in patients with CFS.

The former. The fatigue questionnaires are oversensitive to various reporting biases in unblinded trials. There were no significant differences between groups in productivity losses (employment-related), though they suspiciously don't provide much detail here, e.g. hours worked or absenteeism rates.
 

trishrhymes

Senior Member
Messages: 2,158
Yet another study defining CFS as 6 months unexplained fatigue, unblinded, no control group, and with the odd outcome that one method is better at reducing fatigue and the other better at improving quality of life. Since the outcome measures are subjective, I suspect this difference is an artefact of each treatment's focus influencing how patients filled in the questionnaires.

All the failings of PACE all over again.
 

Snow Leopard

Hibernating
Messages: 5,902
Location: South Australia
Yet another study defining CFS as 6 months unexplained fatigue, unblinded, no control group, and with the odd outcome that one method is better at reducing fatigue and the other better at improving quality of life. Since the outcome measures are subjective, I suspect this difference is an artefact of each treatment's focus influencing how patients filled in the questionnaires.

They did use actigraphy (data provided in the original study). Activity differences between groups, or between baseline and 52 weeks, were not significant (P > 0.05).
 

trishrhymes

Senior Member
Messages: 2,158
They did use actigraphy (data provided in the original study). Activity differences between groups, or between baseline and 52 weeks, were not significant (P > 0.05).

Ah, thanks, I missed that.

So the objective measure shows no difference, which means the only difference in cost-effectiveness is the different cost of the two treatments. And since there was no control group, no one knows whether the treatments were any better than doing nothing, or indeed worse. What a waste of time, effort and money.

Why do they keep doing these useless unscientific trials?

:(:bang-head:
 

RogerBlack

Senior Member
Messages: 902
Yet another study defining CFS as 6 months unexplained fatigue,
Err.
CFS diagnosed using the 1994 criteria of the Centers for Disease Control and Prevention
https://www.cdc.gov/cfs/case-definition/1994.html
  1. The individual has severe chronic fatigue for 6 or more consecutive months that is not due to ongoing exertion or other medical conditions associated with fatigue (these other conditions need to be ruled out by a doctor after diagnostic tests have been conducted)
  2. The fatigue significantly interferes with daily activities and work
  3. The individual concurrently has four or more of the following symptoms:
    • post-exertion malaise lasting more than 24 hours
    • unrefreshing sleep
    • significant impairment of short-term memory or concentration
    • muscle pain
    • pain in the joints without swelling or redness
    • headaches of a new type, pattern, or severity
    • tender lymph nodes in the neck or armpit
    • a sore throat that is frequent or recurring
This, while perhaps not the absolute best definition, is a hell of a lot better than '6 months of fatigue'.

I am assuming for the moment that the abstract is accurate.
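Purely to illustrate how the numbered requirements above combine, a toy sketch of my own (not anything from the paper or the CDC page; real diagnosis obviously requires clinical exclusion of other causes):

```python
# Toy checker for the 1994 (Fukuda) case definition as summarised above.
# Names and structure are mine; illustrative only.

FUKUDA_SYMPTOMS = {
    "post-exertional malaise", "unrefreshing sleep", "memory/concentration problems",
    "muscle pain", "joint pain", "new headaches", "tender lymph nodes", "sore throat",
}

def meets_fukuda(months_of_fatigue, other_causes_excluded, interferes_with_activities, symptoms):
    """All three numbered requirements must hold, including 4+ of the listed symptoms."""
    return (
        months_of_fatigue >= 6
        and other_causes_excluded
        and interferes_with_activities
        and len(FUKUDA_SYMPTOMS & set(symptoms)) >= 4
    )

# Six months of fatigue with only three accompanying symptoms does not qualify.
print(meets_fukuda(6, True, True, {"unrefreshing sleep", "muscle pain", "sore throat"}))  # False
```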
 
Messages: 724
Location: Yorkshire, England
Cost effectiveness seems a bit of a euphemism. It might matter to a private provider, but a service paid for by the state also brings societal benefits, such as employment for those providing the service and the economic activity that their wages create.

Unless society is worried that there are too many doctors, nurses, etc. in employment (which is highly unlikely).

It always seems so narrowly defined in these types of papers.
 

Snow Leopard

Hibernating
Messages: 5,902
Location: South Australia
Why do they keep doing these useless unscientific trials

Simple answer: they aren't aware of how useless their methodology is and they are still able to convince others to get funding and to publish, because the standards in their field are so low.

Alternative answer: they don't care; they're only interested in doing the minimum to get articles published, rather than abiding by the general principles of high-quality science (e.g. testing your hypothesis in the strongest ways you can practically carry out). Given that other fields of psychology have measured how easy it is to bias questionnaire results (a key problem in uncontrolled, e.g. unblinded, studies), it seems inexplicable that such health psychologists appear to be unaware of this.

Additional answer: they don't bother talking to patients about which outcome measures are most reliable, nor do they build models of human needs and measure which needs are both unfulfilled and ones that patients want fulfilled.
 

RogerBlack

Senior Member
Messages: 902
Unless society is worried that there are too many doctors, nurses, etc. in employment (which is highly unlikely).

It always seems so narrowly defined in these types of papers.

The societal costs (healthcare costs, patient and family costs, and costs for loss of productivity), fatigue severity, quality of life, quality-adjusted life-year (QALY), and cost-effectiveness ratios (ICERs) were measured over a follow-up period of one year.
- again - from the abstract.

This seems a reasonable statement of an attempt at measuring whole societal impact, rather than a very narrow view.
(I am not addressing the likely failures due to the poor questionnaires.)

More cost-effectiveness papers need to do this, rather than using more simple-minded measures. (And of course it has to be done properly: QALY improvements mediated by changes in the way you answer questionnaires aren't meaningful.)
 
Messages: 58
The fundamental hypothesis is flawed. If neither method is consistently effective, then the default best treatment for a cost-effectiveness analysis is, quite literally, to do nothing. It can only become useful when comparing two proven effective treatments. Concluding that CBT is more likely to be cost-effective using QALYs is disingenuous, akin to saying that throwing $500 into a hole is better than throwing $5,000 into a hole because at the end of the day you're down less money. The only legitimate conclusion is that there is insufficient effect from either treatment for a cost-effectiveness evaluation between the two to be relevant.
 

RogerBlack

Senior Member
Messages: 902
The fundamental hypothesis is flawed. If neither method is consistently effective, then the default best treatment for a cost-effectiveness analysis is, quite literally, to do nothing. It can only become useful when comparing two proven effective treatments.

The hypothesis is not flawed, in that it is in principle possible to measure.
The fact that it's hard to measure, and that this paper seems to be measuring metrics that mostly mean nothing, is quite another thing.

Done right, these are the only sane sorts of metrics for working out whether a treatment is worth providing.

The issue is that, with proper measurements, CBT should likely come out as having infinite or negative 'cost' by these metrics.
That is, for every pound you put into CBT the patient gets worse (negative), or there is no change (infinite).

I need to dig into this paper properly; there could be interesting data if analysed in a different way.
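To make the 'infinite or negative cost' point concrete, a tiny sketch with invented numbers (not trial data):

```python
# Edge cases of cost per unit of benefit, with made-up numbers. If the effect
# difference shrinks to zero the ratio blows up; if patients get worse it goes
# negative, i.e. you are paying for harm.

def cost_per_unit_benefit(extra_cost, extra_effect):
    return float("inf") if extra_effect == 0 else extra_cost / extra_effect

print(cost_per_unit_benefit(1000.0, 0.0))   # inf     -> money spent, no change
print(cost_per_unit_benefit(1000.0, -0.5))  # -2000.0 -> money spent, patient worse
```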
 

RogerBlack

Senior Member
Messages: 902
Some concerning quotes.
Study design and participants
"Other inclusion criteria were: a CIS fatigue subscale score of 40 or more"...
For the cost-effectiveness analysis, the primary outcome is fatigue severity, which was measured by the CIS fatigue subscale (CIS fatigue; score ranging from 8–56, lower scores indicate less fatigue) [19]. In the cost-effectiveness analyses the CIS fatigue scores were recoded: a higher score indicates a more positive effect (less fatigue). In using the CIS fatigue subscale, there are different methods for defining improvement. One is to change the CIS score to a dichotomous variable of improvement (CIS improvement). A score lower than 35 on the CIS fatigue subscale was labelled as improved [25]. A higher score was labelled as not improved.

I think this counts as 'improved' people who are clearly not improved.
On a fatigue scale running from 8 to 56, a score of 35 still likely indicates someone who is very disabled.
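A minimal sketch of the scoring conventions described in that quote (the function names and the flipping formula are my own reading, not code from the paper):

```python
# CIS fatigue subscale: range 8-56, lower = less fatigue. Entry to the trial
# required a score of 40 or more; any follow-up score below 35 was labelled
# "improved". The recoding below is one straightforward way to flip the scale
# so that higher = better, as the cost-effectiveness analysis did.

CIS_MIN, CIS_MAX = 8, 56
INCLUSION_THRESHOLD = 40   # minimum CIS fatigue score to enter the trial
IMPROVEMENT_CUTOFF = 35    # follow-up scores below this were labelled "improved"

def recode(cis_score):
    """Flip the 8-56 scale so a higher value means less fatigue."""
    return CIS_MAX + CIS_MIN - cis_score

def labelled_improved(cis_score):
    return cis_score < IMPROVEMENT_CUTOFF

# Someone entering at 40 and finishing at 34 counts as "improved", even though
# 34 is still above the midpoint (32) of the 8-56 range.
print(recode(34))             # 30 on the recoded scale
print(labelled_improved(34))  # True
```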

I think this study is basically hopelessly compromised for its intended purposes, as pretty much expected (the fact that they cite PACE without any caveats is an indicator).
The data could still be interesting though.
 
Messages: 724
Location: Yorkshire, England
- again - from the abstract.

This seems a reasonable statement of an attempt at measuring whole societal impact, rather than a very narrow view.
(I am not addressing the likely failures due to the poor questionnaires.)

More cost-effectiveness papers need to do this, rather than using more simple-minded measures. (And of course it has to be done properly: QALY improvements mediated by changes in the way you answer questionnaires aren't meaningful.)

My point is that cost effectiveness analysis doesn't address the hidden political/monetary assumptions it is built upon. It addresses societal impact from a very narrow point of view, and the assumptions are fallacious, so it's all a castle made of sand.
 
Messages: 58
The hypothesis is not flawed, in that it is in principle possible to measure.
The fact that it's hard to measure, and that this paper seems to be measuring metrics that mostly mean nothing, is quite another thing.

I'll stand by my original statement. The aim states that MRT is more effective over the long term than CBT. The unspoken assumptions, then, are 1) that CBT is an effective treatment, and 2) that the measured effects are of a degree that a cost-effectiveness analysis can be meaningfully undertaken. This seems like a fundamental flaw in the base hypothesis to me. The ability to take and compare measures was never in dispute, but that's approach, not hypothesis.
 

trishrhymes

Senior Member
Messages: 2,158
@RogerBlack, re the definition of CFS used, I was quoting the first line of the introduction of the paper, not the abstract. The introduction clearly states that CFS is defined as 6 months of medically unexplained fatigue that 'often' has effects on quality of life etc. No mention of needing 4 other symptoms, and no mention of what those symptoms are.

I agree this contradicts the abstract. Draw your own conclusions!
 

alex3619

Senior Member
Messages: 13,810
Location: Logan, Queensland, Australia
They might be pulling a Crawley: claim the study fits a set of criteria for the purposes of funding and drawing broad conclusions, but ignore core requirements of those criteria.
[Sarcasm] I believe that gutting a definition so that it no longer makes sense but is easy to use is called "operationalization". It reminds me of the infamous attempt to pass a bill to make pi equal to 3, for simplicity.
 

Effi

Senior Member
Messages: 1,496
Location: Europe
I looked up the trial registration number of this study and what came up was information on a lengthy study project called FatiGo (another great acronym for the list). The funding came from a charity run by Delta Lloyd (a big insurance company). This was the study protocol: https://www.ncbi.nlm.nih.gov/pubmed/22647321

The study in the OP looks like an extension of the earlier study. Same team, same name, same funding source, etc. https://kenniscentrum.adelante-zorggroep.nl/en/research-programme/projects-2014/fatigo/
 

trishrhymes

Senior Member
Messages: 2,158
Thanks, @Effi. I guess, like PACE, the authors are milking this trial for as many papers as they can to add to their CVs, and spinning it out over several years to make themselves look busy. They could have reported everything in a single paper. It's about career building, not integrity.
 

Dolphin

Senior Member
Messages: 17,567
They did use actigraphy (data provided in the original study). Activity differences between groups, or between baseline and 52 weeks, were not significant (P > 0.05).
Yes:
There is one set of objective measurements from an actometer:

Multidisciplinary rehabilitation treatment, physical activity:
Baseline: 206233.65 (40264.16)
26 weeks: 227283.24 (45698.55)
52 weeks: 218214.41 (48564.30)
So a 5.8% increase from baseline to 52 weeks.

Cognitive behavioural therapy, physical activity:
Baseline: 202033.66 (43379.41)
26 weeks: 210019.75 (48068.09)
52 weeks: 215262.14 (57074.22)
So a 6.55% increase from baseline to 52 weeks.
http://forums.phoenixrising.me/inde...herapy-for-cfs-vos-vromans.39566/#post-635163
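For reference, a quick sketch reproducing those percentage figures from the means quoted above (variable names are mine; the parenthesised values in the quote are not used here):

```python
# Recompute the baseline-to-52-week percentage changes from the actometer means
# quoted above.

mrt = {"baseline": 206233.65, "26_weeks": 227283.24, "52_weeks": 218214.41}
cbt = {"baseline": 202033.66, "26_weeks": 210019.75, "52_weeks": 215262.14}

def pct_change(group):
    return 100.0 * (group["52_weeks"] - group["baseline"]) / group["baseline"]

print(f"MRT: {pct_change(mrt):.2f}% increase")  # ~5.81%
print(f"CBT: {pct_change(cbt):.2f}% increase")  # ~6.55%
```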


It would be interesting if they did cost-effectiveness studies based on improvements in actometer readings rather than changes in subjective outcome measures.