• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


The effects of therapies for ME and CFS should be assessed using objective measures

Tom Kindlon

Senior Member
Messages
1,734
Free full text: http://www.oatext.com/the-effects-o...ould-be-assessed-using-objective-measures.php

The effects of therapies for Myalgic Encephalomyelitis and chronic fatigue syndrome should be assessed using objective measures
Frank Twisk

ME-de-patiënten Foundation, The Netherlands

DOI: 10.15761/MRI.1000118.



Abstract
There is controversy with regard to therapies proposed to be effective for Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS), especially behavioral therapies: cognitive behavioral therapy (CBT) and graded exercise therapy (GET).

As will be exemplified by the PACE trial and other studies, the positive effects of CBT and GET on subjective measures, e.g. fatigue and physical functioning, are fully determined by the subjective criteria (measures and cut-off thresholds) employed.

Depending on the subjective criteria used, ‘recovery’ rates vary from 69% to 7%.

Looking at the objective measures, e.g. work rehabilitation, physical fitness, and activity levels, CBT and GET seem to have a negligible effect or no effect at all.

Trials into proposed therapies for ME and CFS, including CBT, GET, rituximab and rintatolimod, should use objective measures to impartially assess the effectiveness.
 

Dolphin

Senior Member
Messages
17,567
Minor points:
According to the original PACE protocol [24] people were eligible if they met the Oxford criteria for chronic fatigue (CF) [25], implicating a CFB score ≥6 (CFL >12), and the SF-36 PF score was ≤65.
Should be CFL ≥12

Moreover the ‘normal values’ defined by the PACE trial [15] (CFL ≤18, SF-36 PF ≥60) don’t come close to the recovery criteria from the protocol (CFB ≤3/CFL ≤9, SF-36 PF ≥85).
Should be CFL ≤17


This finding is reflected by the effects of CBT, GET and SMC on objective measures. CBT and GET had a very small effect on the number of meters walked in 6 minutes [15], largely insufficient to achieve normal levels [28,29], ‘return to employment’, health care usage and social welfare benefits didn’t improve [30], while fitness and perceived exertion during a step test also didn’t substantially improve [31].
This could give the impression that CBT did better than specialist medical care alone on the six-minute walking test, when in fact there was basically no numerical difference (SMC increased by 1.5 m more with the adjusted figure).
 

Dolphin

Senior Member
Messages
17,567
Nice to see this study being challenged:
According to Knoop et al. [32] 69% of the patients ‘recovered from CFS’ by CBT/GET. ‘Recovery from CFS’ was defined as a CIS F score <35 and a SIP 8 score <700. Since the inclusion criteria (CIS F ≥35 and SIP 8 ≥700) border on the recovery criteria, a minimal improvement was sufficient to be qualified as ‘recovered from CFS’. But the cut-off thresholds for recovery don’t come close to the criteria for “normal fatigue” (CIS F ≤27) and “no disabilities in all domains” (SIP 8 ≤203) as defined by the authors.

According to Knoop et al. [32] 23% of the patients recovered using the “most comprehensive definition”. However, this definition of recovery doesn’t include a criterion for SIP 8, one of the two measures used to define ‘CFS’.

Moreover, all cut-off thresholds (mean + 1 SD) assume normal distributions for CIS F, SF-36 PF and SF-36 Social Functioning. But, as the authors confirm, the scores are not normally distributed. For that reason 85th percentiles should be used. If percentiles were used and the SIP 8 score was included, recovery rates would drop substantially below 20%.

A control group was lacking, but another study by the same group [33] observed a self-rated clinical improvement in 30% of the patients in the non-intervention group.
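The mean + 1 SD versus percentile point is easy to see with a toy example. On a right-skewed sample, the SD-based cut-off is pulled up by the tail and ends up more lenient than the 85th percentile. The fatigue scores below are made up purely for illustration:

```python
# Toy illustration: "mean + 1 SD" is not the same cut-off as the
# 85th percentile when scores are right-skewed.
# These fatigue scores are invented for illustration only.

from statistics import mean, stdev

scores = [8, 9, 10, 10, 11, 12, 12, 13, 14, 15, 18, 22, 30, 41, 55]

# SD-based cut-off: inflated by the skewed tail
cutoff_sd = mean(scores) + stdev(scores)

def percentile(data, p):
    """Nearest-rank percentile (no interpolation)."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Percentile-based cut-off: robust to the tail
cutoff_p85 = percentile(scores, 85)
```

On this sample the SD-based threshold comes out higher (i.e. a more lenient 'recovery' cut-off for a fatigue scale) than the 85th percentile, which is the point being made about Knoop et al.'s thresholds.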
 

Dolphin

Senior Member
Messages
17,567
I thought I would plug this commentary, which never got much attention, possibly because it is not PubMed-listed

http://onlinelibrary.wiley.com/doi/10.1111/cpsp.12042/abstract

Clinical Psychology: Science and Practice


Commentary
Does Cognitive Behavioral Therapy or Graded Exercise Therapy Reduce Disability in Chronic Fatigue Syndrome Patients? Objective Measures Are Necessary
Authors
  • Andrew James Kewley
  • First published: 16 September 2013


Abstract
Clinical trials of cognitive behavioral therapy and graded exercise therapy have consistently demonstrated improvement in self-reported quality of life and improvement of symptoms. However, due to the nature of these therapies, it is not possible to carry out a double-blinded trial design or fully control for reporting biases. Therefore, to make strong claims about efficacy and reductions in disability, objective methods should be used such as neuropsychological testing, actigraphy, and repeat exercise testing.
 

Dolphin

Senior Member
Messages
17,567
Subjective measures are associated with risk of bias (‘researcher allegiance’) [51], the placebo effect [32], buy-in effects [52], and other effects related to the nature of these measures.
Reference 52:
J Psychosom Res. 2016 Apr;83:40-5. doi: 10.1016/j.jpsychores.2016.02.004. Epub 2016 Feb 17.
Treatment expectations influence the outcome of multidisciplinary rehabilitation treatment in patients with CFS.
Vos-Vromans DC, Huijnen IP, Rijnders LJ, Winkens B, Knottnerus JA, Smeets RJ.

Abstract
OBJECTIVE:
To improve the effectiveness of treatment in patients with chronic fatigue syndrome it is worthwhile studying factors influencing outcomes. The aims of this study were (1) to assess the association of expectancy and credibility on treatment outcomes, and (2) to identify baseline variables associated with treatment expectancy and credibility.

METHODS:
122 patients were included in a randomized controlled trial of whom 60 received cognitive behavioural therapy (CBT) and 62 multidisciplinary rehabilitation treatment (MRT). Expectancy and credibility were measured with the credibility and expectancy questionnaire. Outcomes of treatment, fatigue, and quality of life (QoL), were measured at baseline and post-treatment. Multiple linear regressions were performed to analyse associations.

RESULTS:
In explaining fatigue and the physical component of the QoL, the effect of expectancy was significant for MRT, whereas in CBT no such associations were found. The main effect of expectancy on the mental component of QoL was not significant. For credibility, the overall effect on fatigue and the physical component of QoL was not significant. In explaining the mental component of QoL, the interaction between treatment and credibility was significant. However, the effects within each group were not significant. In the regression model with expectancy as dependent variable, only treatment centre appeared significantly associated. In explaining credibility, treatment centre, treatment allocation and depression contributed significantly.

CONCLUSIONS:
For clinical practice it seems important to check the expectations of the patient, since expectations influence the outcome after MRT.

Copyright © 2016. Published by Elsevier Inc.

KEYWORDS:
CFS; Credibility; Expectancy; Fatigue; Outcome; Quality of life

PMID: 27020075
DOI: 10.1016/j.jpsychores.2016.02.004
[Indexed for MEDLINE]
I am not sure I have seen "buy-in effects" used in a paper before. I googled it but don't see many results. I think it may refer to patients finding their treatment credible/believing it will work.
 

Londinium

Senior Member
Messages
178
I came to the conclusion some time ago that any study that used subjective measures and not objective as primary outcomes was worthless. The more we can do to get this message out the better.

I can see that approach but I would probably soften it to any trial that isn't adequately supported by objective measures as a secondary outcome is worthless. For example, the RituxME trial is based on self-report as its primary outcome. If it reports a positive outcome supported by actometry data I don't think it would be a worthless result. Whereas if we have a scenario, like PACE, where we have slightly positive self-report data but where objective measures are either not presented or contradict the main finding, then I would agree it's garbage.
 
Messages
2,158
I can see that approach but I would probably soften it to any trial that isn't adequately supported by objective measures as a secondary outcome is worthless. For example, the RituxME trial is based on self-report as its primary outcome. If it reports a positive outcome supported by actometry data I don't think it would be a worthless result. Whereas if we have a scenario, like PACE, where we have slightly positive self-report data but where objective measures are either not presented or contradict the main finding, then I would agree it's garbage.

The key point @Jonathan Edwards made in his PACE article is that it's the combination of an unblinded trial with subjective measures that is the problem. The Rituximab trial is a double-blind trial, so patients' subjective reports can't be influenced by their expectations. In that case subjective measures can be useful and valid.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Yes, one has to be careful not to be too dogmatic about objective measures. Remember that all rheumatoid arthritis trials have primary outcome measures that have a subjective component and therefore can show a statistically significant difference just due to the subjective aspect. Rituximab is used for autoimmune disease on that basis. The proof of concept RA trial had a partially subjective primary outcome measure. But it was double blinded and objective measures were there to back up the primary outcome.

We have debated this before and I would personally like to see a standard measure for ME like the ACR criteria used for RA. Both subjective and objective components go in, so that even if you can get a statistically significant difference with subjective features alone, it is extremely hard to get a clinically significant difference without both sorts of measure changing substantially. There is no problem with multiple measures because you have a fixed formula for getting a single answer out.
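As a rough sketch of what an ACR20-style composite for ME might look like: the measure names and the 20% threshold below are purely illustrative assumptions, not a validated index, but they show how a fixed formula forces both subjective and objective change before anyone counts as a responder:

```python
# Hypothetical ACR20-style composite responder rule for an ME/CFS trial.
# Measure names and the 20% threshold are illustrative assumptions only.

def improvement(baseline, followup, higher_is_better):
    """Fractional improvement relative to baseline (assumes non-zero baseline)."""
    if higher_is_better:
        return (followup - baseline) / baseline
    return (baseline - followup) / baseline

def composite_responder(baseline, followup, threshold=0.20):
    """True only if BOTH the core objective measure (actigraphy steps) and
    the core subjective measure (fatigue score, lower = better) improve by
    >= threshold, AND at least 2 of 3 secondary measures do too, so a
    subjective-only shift cannot qualify on its own."""
    core_ok = (
        improvement(baseline["daily_steps"], followup["daily_steps"], True) >= threshold
        and improvement(baseline["fatigue"], followup["fatigue"], False) >= threshold
    )
    secondary = [
        improvement(baseline["six_min_walk_m"], followup["six_min_walk_m"], True),
        improvement(baseline["sf36_pf"], followup["sf36_pf"], True),
        improvement(baseline["hours_worked"], followup["hours_worked"], True),
    ]
    return core_ok and sum(s >= threshold for s in secondary) >= 2
```

The design point is the single yes/no answer at the end: with a fixed formula, multiple measures don't create a multiple-comparisons problem, and a patient who only reports feeling better without moving more can never be counted as recovered.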
 
Messages
2,391
Location
UK
Reference 52:
I am not sure I have seen "buy-in effects" used in a paper before. I googled it but don't see many results. I think it may refer to patients finding their treatment credible/believing it will work.
I would think buy-in effects would be inevitable in many such trials. For any project you become wedded to, you effectively become a member of the project team and, human nature being what it is, dearly want that ("your") project to succeed. When answering multiple-choice questionnaires, there will often be uncertainty over which answer to choose, and the desire to benefit the project as a whole may well bias which answer you choose, and maybe even which range of answers you select from.
 

user9876

Senior Member
Messages
4,556
Yes, one has to be careful not to be too dogmatic about objective measures. Remember that all rheumatoid arthritis trials have primary outcome measures that have a subjective component and therefore can show a statistically significant difference just due to the subjective aspect. Rituximab is used for autoimmune disease on that basis. The proof of concept RA trial had a partially subjective primary outcome measure. But it was double blinded and objective measures were there to back up the primary outcome.

We have debated this before and I would personally like to see standard measure for ME like the ACR criteria used for RA. Both subjective and objective components go in, so that even if you can get a statistically significant difference with subjective features alone it is extremely hard to get a clinically significant difference without both sorts of measure changing substantially. There is no problem with multiple measures because you have a fixed formula for getting a single answer out.

I think the point is less about trials being open label and more about differences between treatments and how patients perceive their effects, and hence reporting biases. Placebo-controlled trials are good because they try to present the same treatment to all patients apart from the active component. Things like PACE set different expectations for their made-up therapy (APT) and effectively had a wait-list control (SMC), again setting little expectation of recovery. Could an open-label CBT trial be suitably controlled so that both arms set the same expectations (for example, by trying to change different beliefs)? When I think about it I keep coming back to the conclusion that the two arms would have to be very close, and eventually the same, in order to set the same expectations.

I assume that there can be problems with placebo-controlled trials in that reactions can be different. If I remember correctly, some of those supporting CBT criticized Fluge and Mella's Rituximab trial for this exact reason. Which makes me think that they understand more about the measures they use than they would let on.

I'm not keen on the idea that you can pre-pick a single measure (subjective or objective) and use that; I much prefer the idea of more measures that have to be correlated. A subjective measure (or several) that isn't supported by more objective measures suggests the potential for bias. If you have a set of measures and one doesn't improve, that may say something interesting. I also don't like the idea of effect sizes on single measures; if we have multiple measures we need multivariate effect sizes. Or, like the EQ-5D scale, have a utility function to combine values in a way that has meaning (unlike the CFQ, which combines different forms of fatigue with arbitrary weightings).
 
Messages
2,391
Location
UK
I think the point is less about trials being open label and more about differences between treatments and how patients perceive their effects, and hence reporting biases. Placebo-controlled trials are good because they try to present the same treatment to all patients apart from the active component. Things like PACE set different expectations for their made-up therapy (APT) and effectively had a wait-list control (SMC), again setting little expectation of recovery. Could an open-label CBT trial be suitably controlled so that both arms set the same expectations (for example, by trying to change different beliefs)? When I think about it I keep coming back to the conclusion that the two arms would have to be very close, and eventually the same, in order to set the same expectations.

I assume that there can be problems with placebo-controlled trials in that reactions can be different. If I remember correctly, some of those supporting CBT criticized Fluge and Mella's Rituximab trial for this exact reason. Which makes me think that they understand more about the measures they use than they would let on.

I'm not keen on the idea that you can pre-pick a single measure (subjective or objective) and use that; I much prefer the idea of more measures that have to be correlated. A subjective measure (or several) that isn't supported by more objective measures suggests the potential for bias. If you have a set of measures and one doesn't improve, that may say something interesting. I also don't like the idea of effect sizes on single measures; if we have multiple measures we need multivariate effect sizes. Or, like the EQ-5D scale, have a utility function to combine values in a way that has meaning (unlike the CFQ, which combines different forms of fatigue with arbitrary weightings).
Trouble is, the treatments themselves are partly about changing expectations, which in some scenarios is a valid treatment option (I don't mean the false illness beliefs cr*p aimed at PwME, but, for example, low self-esteem etc.). The expectation effect is actually part of the active component, so blinding would effectively remove at least part of the active ingredient.

So I think it comes back to what has been emphasised before: if you cannot blind subjects to their treatments and expectations of those treatments, then you just have to have objective outcome measures.
 

user9876

Senior Member
Messages
4,556
Trouble is, the treatments themselves are partly about changing expectations, which in some scenarios is a valid treatment option. The expectation effect is actually part of the active component, so blinding would effectively remove at least part of the active ingredient.

So I think it comes back to what has been emphasised before: if you cannot blind subjects to their treatments and expectations of those treatments, then you just have to have objective outcome measures.

Yes in my thought process I tried to separate them out but I ended up thinking that you would end up with the same CBT in each arm to set the same expectations. It would be hard to find sensible beliefs to change (trying to change beliefs about the colour of the sky would set the wrong expectations).

I also think objective measures are becoming much easier. If I were to run a trial I would give all participants a Fitbit (or similar) for the duration and look at activity changes. It's not perfect and doesn't measure mental activity, but it would also help with safety by measuring compliance with any activity program. Also, a Fitbit isn't hard to wear and I think may even be something of a fashion statement.
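A rough sketch of what the analysis of such wearable data might look like. The 500-steps-per-day wear threshold below is an arbitrary assumption used only to flag days when the device was probably not worn:

```python
# Minimal sketch: using daily step counts from a wearable to measure
# activity change and wear compliance in a trial.  The 500-step
# non-wear threshold is an arbitrary assumption for illustration.

from statistics import mean

WEAR_THRESHOLD = 500  # steps/day below this => treat as non-wear day

def valid_days(daily_steps):
    """Keep only days the device was plausibly worn."""
    return [d for d in daily_steps if d >= WEAR_THRESHOLD]

def activity_change(baseline_days, treatment_days):
    """Mean daily steps in the treatment period minus baseline,
    using only plausible wear days."""
    return mean(valid_days(treatment_days)) - mean(valid_days(baseline_days))

def compliance(daily_steps):
    """Fraction of days with plausible device wear."""
    return len(valid_days(daily_steps)) / len(daily_steps)
```

The compliance figure is the safety angle mentioned above: if participants in a graded-activity arm stop wearing the device, or their step counts fall, that shows up in the data rather than only in self-report.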
 
Messages
2,391
Location
UK
Yes in my thought process I tried to separate them out but I ended up thinking that you would end up with the same CBT in each arm to set the same expectations. It would be hard to find sensible beliefs to change (trying to change beliefs about the colour of the sky would set the wrong expectations).

I also think objective measures are becoming much easier. If I were to run a trial I would give all participants a Fitbit (or similar) for the duration and look at activity changes. It's not perfect and doesn't measure mental activity, but it would also help with safety by measuring compliance with any activity program. Also, a Fitbit isn't hard to wear and I think may even be something of a fashion statement.
I suppose in a way there are different aspects of a trial that you can blind. In the conventional terminology blinding means to insulate the trial's input data from subjective influence. But you can also insulate the outputs from subjective influence - by using objective outcome measures. In a way, using objective outcome measures is a form of blinding.
 

Londinium

Senior Member
Messages
178
The key point @Jonathan Edwards made in his PACE article is that it's the combination of an unblinded trial with subjective measures that is the problem. The Rituximab trial is a double-blind trial, so patients' subjective reports can't be influenced by their expectations. In that case subjective measures can be useful and valid.

Totally agree. Subjective + no blinding = red flag. But I would add an additional red flag where subjectively and objectively measured results are inconsistent with each other (or objective measures in the trial protocol are mysteriously not presented), even in a blinded trial. PACE and FINE had both the former and the latter: some subjective 'improvement' but miserable failure on objective measures such as employment and the six-minute walking test.
 

BruceInOz

Senior Member
Messages
172
Location
Tasmania
But remember that in the double-blinded comparison of albuterol with placebo for asthma (www.ncbi.nlm.nih.gov/pubmed/21751905) the subjective measure showed 50% improvement for albuterol and 45% for placebo, but the objective measure showed 20% improvement for albuterol and 7% for placebo. So even with blinding, subjective measures are unreliable. The gold standard should be objective measures.
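Worth spelling out the arithmetic on those figures: because the placebo response inflates self-report in both arms, the between-arm treatment effect is much smaller on the subjective measure (5 points) than on the objective one (13 points):

```python
# The albuterol/placebo figures quoted above, re-expressed as the
# treatment effect (improvement over placebo) on each kind of measure.

subjective = {"albuterol": 50, "placebo": 45}  # % self-reported improvement
objective = {"albuterol": 20, "placebo": 7}    # % improvement on the objective measure

subj_effect = subjective["albuterol"] - subjective["placebo"]  # between-arm effect, subjective
obj_effect = objective["albuterol"] - objective["placebo"]     # between-arm effect, objective
```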
 

Sean

Senior Member
Messages
7,378
But remember that in the double blinded comparison of albuterol with placebo for asthma (www.ncbi.nlm.nih.gov/pubmed/21751905) the subjective measure showed 50% improvement for albuterol and 45% for placebo but the objective measure showed 20% improvement for albuterol and 7% for placebo. So even with blinding subjective measures are unreliable.
Pity they didn't also have a non-treatment, non-placebo arm to provide an absolute comparison for the two other arms.

I agree that wherever possible objective measures should be used. The relationship between objective and subjective outcomes is important info.
 
Messages
2,391
Location
UK
But remember that in the double blinded comparison of albuterol with placebo for asthma (www.ncbi.nlm.nih.gov/pubmed/21751905) the subjective measure showed 50% improvement for albuterol and 45% for placebo but the objective measure showed 20% improvement for albuterol and 7% for placebo. So even with blinding subjective measures are unreliable. The gold standard should be objective measures.
It maybe depends on what the measurement expectations/interpretations are. The above figures show that people's asthma, measured objectively, showed a tangible improvement on albuterol compared to placebo. But they also show that the difference in people's perceived improvement was negligible. That doesn't necessarily clash, because two quite different things are being measured, and both could well be right. I could easily believe there to be a very non-linear relationship between real improvement of asthma symptoms and our perception of it.

Suppose a thought experiment where you could dial varying levels of asthma severity into a test subject by turning a dial (it's obvious why I'm suggesting this only as a thought experiment). You start the experiment with full-blown asthma symptoms, ask the subject what their asthma feels like, and plot the dial setting on the x-axis and the subject's perceived asthma severity on the y-axis. You then progressively reduce the dialled-in severity, taking lots more readings along the way, effectively moving from right to left along the x-axis.

I would be very surprised if the resultant plot was a straight line down to the (0,0) origin. Much more likely, I suspect, the y values would stay high for quite a large part of the plot even though the dialled-in values were coming down; it may be that until the symptoms drop below a certain level of severity, a sufferer doesn't perceive much difference. I suspect it would be a sort of S-curve, because at the low end people probably don't notice much difference between lower severity levels either. I have to emphasise I have no way of knowing if I'm right here, but I would be amazed if it was a nice straight-line relationship - the world rarely works that way, especially between real and perceived.
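The S-curve idea can be sketched with a toy logistic mapping. The midpoint and steepness numbers below are arbitrary; this only illustrates the shape of the relationship, it isn't based on any data:

```python
# Toy version of the dial thought-experiment: map "actual" severity
# (0-10 dial setting) through a logistic S-curve to "perceived"
# severity.  Midpoint and steepness are arbitrary assumptions.

import math

def perceived(actual, midpoint=5.0, steepness=1.5, scale=10.0):
    """Logistic mapping: perception stays high over much of the upper
    range and only falls sharply once actual severity drops past the
    midpoint."""
    return scale / (1.0 + math.exp(-steepness * (actual - midpoint)))
```

On a curve like this, dialling actual severity down from 10 to 7 (a 30% real improvement) barely moves perceived severity at all, which is exactly the scenario where a subjective outcome would miss a genuine objective change.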

To me this means that both readings are very possibly perfectly valid, provided no one tries to pretend that the perceived severity readings are synonymous with the actual severity readings and then builds treatment regimes on that falsehood. PACE etc. of course try to insist, misleadingly, that they are synonymous.

I've also seen in recent times the BPS crew suggesting that observing people's perceptions is fine, and is as valid as objective observations even if the two differ significantly, because a person's perception of their condition is what really matters and can be deemed the more significant factor in their condition. Only someone high on BPS-brew could really believe that one!