Discussion of Response Bias (2011) - relevant to CFS trials

oceanblue · Guest · Messages: 1,383 · Location: UK
Response bias occurs when patients in a clinical trial - including CFS trials - artificially inflate their scores to please researchers. It is recognised as a potential problem, but there isn't much good research on it. A recent paper from placebo expert Asbjorn Hrobjartsson discusses some of the issues with response bias, since it can artificially inflate the response to placebo treatment too.

Placebo effect studies are susceptible to response bias and to other types of biases.
Hrobjartsson A, Kaptchuk TJ, Miller FG. (2011)

Abstract (highlights)

OBJECTIVE: Investigations of the effect of placebo are often challenging to conduct and interpret. The history of placebo shows that assessment of its clinical significance has a real potential to be biased. We analyze and discuss typical types of bias in studies on placebo.

...RESULTS: The inherent nonblinded comparison between placebo and no-treatment is the best research design we have in estimating effects of placebo, both in a clinical and in an experimental setting, but the difference between placebo and no-treatment remains an approximate and fairly crude reflection of the true effect of placebo interventions. A main problem is response bias in trials with outcomes that are based on patients' reports...

CONCLUSIONS: Creative experimental efforts are needed to assess rigorously the clinical significance of placebo interventions and investigate the component elements that may contribute to the therapeutic benefit.

Background on placebo studies

Hrobjartsson famously demonstrated that placebo effects in general have been greatly overestimated. The critical mistake was to assume that all improvement from baseline in the placebo group of a clinical trial was due to the placebo itself, when there are other explanations, including natural improvement and regression to the mean. By comparing placebo groups with more appropriate comparison groups, e.g. waiting-list or 'no treatment' groups, he showed that placebos do not generally produce powerful clinical effects. These findings were broadly confirmed in even larger meta-analyses, most recently in 2010.
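To make that concrete, here is a rough toy simulation (all the numbers - the entry cutoff, the natural drift and the effect sizes - are invented purely for illustration, not taken from any real trial) of how a placebo arm can show a large 'improvement from baseline' even when the genuine placebo effect is small, and how comparing against a no-treatment arm largely strips that out:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(true_effect, n=200, entry_cutoff=55):
    """Mean change from baseline for one arm, where trial entry requires a
    (noisy) baseline symptom score above a severity cutoff."""
    pool = 10 * n
    true_severity = rng.normal(50, 10, size=pool)           # underlying symptom score
    baseline = true_severity + rng.normal(0, 8, size=pool)  # noisy baseline measurement
    idx = np.flatnonzero(baseline > entry_cutoff)[:n]       # entry criterion -> regression to the mean
    natural_change = -3                                      # assumed natural drift over the trial
    followup = (true_severity[idx] + natural_change + true_effect
                + rng.normal(0, 8, size=n))
    return (followup - baseline[idx]).mean()

# Assumed (invented) true effects on the symptom score (negative = improvement):
arms = {"no treatment": 0.0, "placebo": -2.0, "drug": -10.0}
changes = {name: simulate_arm(effect) for name, effect in arms.items()}
for name, change in changes.items():
    print(f"{name:12s} mean change from baseline: {change:+.1f}")

# Each arm's change from baseline is inflated by natural improvement and
# regression to the mean; the between-arm differences are not.
print(f"placebo vs no treatment: {changes['placebo'] - changes['no treatment']:+.1f}")
print(f"drug vs placebo:         {changes['drug'] - changes['placebo']:+.1f}")
```

In this sketch the placebo arm appears to improve a lot from baseline even though the assumed genuine placebo effect is only small; the placebo-minus-no-treatment difference is what recovers something close to that small effect.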

Nonetheless, the placebo effect is not dead, as he points out in this paper:

it is unwarranted to conclude that placebo interventions are incapable of producing clinically meaningful benefit.

...the meta-analyses [13-15] identified several well-designed clinical trials with relatively large analgesic effects of placebo, and a general tendency for effects on patient-reported continuous outcomes.
Patient-reported outcomes are where response bias comes in, and this is particularly relevant to CFS studies, where almost all outcomes are self-reported. The general problem is set up in this paper as:
The challenge in rigorously assessing the clinical benefit of placebo interventions is to reliably distinguish the magnitude of any real effect of placebo from the noise embedded in the human interaction of an experiment or a clinical trial.
I've picked out key points from the discussion of response bias:
The conundrum of response bias

The assessment of the placebo effect faces a basic conundrum. Patients may desire to please the researcher, or just give a correct or expected answer that fits with the experimental situation [19,22]. When patients report that they feel better after receiving a placebo intervention how do we know to which degree this reflects genuine symptomatic improvement, such as pain relief, that can be attributed to the placebo effect or a response bias? ...Conversely, those who did not receive any study intervention might be disappointed and disposed to report negative or correct outcomes.

...blinded placebo-controlled trials are able to discriminate real effects from response bias... ...as long as the masking conditions were successful. Response bias may operate to inflate the apparent drug effect (the difference between pretrial baseline and the time of study outcome measurement); likewise, it may account for all or part of the response in the placebo arm. However, in view of randomization and blinding conditions, there is no reason to infer that the effect of response bias is greater in one arm than the other.
Of course, in non-blinded and non-placebo-controlled studies, such as the PACE Trial, this doesn't apply and response bias can indeed distort results. PACE relied on therapy with an average of 13 hours of contact between therapist and patient, and a typical patient-therapist relationship that was independently assessed as 'very strong'. Such a strong patient-provider relationship is likely to be a causal factor in response bias, as this paper spells out:
Another important aspect of response bias is that it is likely to be closely associated with the same causal factors hypothesized to cause placebo effects: a warm patient-provider interaction and the doctor's verbal and nonverbal suggestion of an important beneficial treatment effect. Thus, the more a physician signals friendliness and confident expectation of improvement, the less likely is the patient to disappoint the doctor who is making such an effort. Recent qualitative studies of patients in randomized clinical trials have demonstrated that patients can become dramatically attached to the research team and very committed to the success of a trial [23].

[23] Kaptchuk TJ, Shaw J, Kerr CE, Conboy LA, Kelley JM, Lembo AJ, et al. 'Maybe I made up the whole thing': placebos and patients' experiences in a randomized controlled trial. Cult Med Psychiatry 2009;33:382-412.
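Coming back to the blinding point quoted further up, here is a rough sketch (the effect size and the size of the reporting bias are pure assumptions, invented for illustration) of why response bias tends to cancel out of a blinded drug-vs-placebo comparison but sits entirely on one side of an unblinded therapy-vs-usual-care comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150             # patients per arm (arbitrary)
true_effect = 2.0   # assumed genuine improvement on an arbitrary symptom scale
response_bias = 3.0 # assumed extra "improvement" reported to please the team

def noise():
    return rng.normal(0, 4, size=n)

# Blinded placebo-controlled trial: both arms get the same attention and the same
# expectations, so both carry the same reporting bias, which cancels in the comparison.
drug = true_effect + response_bias + noise()
placebo = 0 + response_bias + noise()
print(f"blinded drug vs placebo:         {drug.mean() - placebo.mean():+.1f}"
      f"  (true effect {true_effect:+.1f})")

# Unblinded trial of an intensive therapy vs usual care: only the therapy arm gets
# the contact hours and the expectations, so the bias sits entirely on one side.
therapy = true_effect + response_bias + noise()
usual_care = 0 + noise()
print(f"unblinded therapy vs usual care: {therapy.mean() - usual_care.mean():+.1f}"
      f"  (true effect {true_effect:+.1f})")
```

With these made-up numbers the blinded comparison lands near the assumed true effect, while the unblinded comparison reports roughly the true effect plus the whole reporting bias.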
 

Esther12 · Senior Member · Messages: 13,774
Thanks OB.

LOL at how well this seems to fit CBT and subjective outcome measures:

a warm patient-provider interaction and the doctor's verbal and nonverbal suggestion of an important beneficial treatment effect. Thus, the more a physician signals friendliness and confident expectation of improvement, the less likely is the patient to disappoint the doctor who is making such an effort. Recent qualitative studies of patients in randomized clinical trials have demonstrated that patients can become dramatically attached to the research team and very committed to the success of a trial
 

Enid · Senior Member · Messages: 3,309 · Location: UK
Human nature really - it reflects the kind attitude of trial patients towards those experts (?) trying to aid them, I guess. (I did with my Docs until I wised up to their ignorance ... after some time).

Dodgy interviewee then. LOL.
 

Valentijn · Senior Member · Messages: 15,786
oceanblue said:
Of course, in non-blinded and non-placebo-controlled studies, such as the PACE Trial, this doesn't apply and response bias can indeed distort results. PACE relied on therapy with an average of 13 hours of contact between therapist and patient, and a typical patient-therapist relationship that was independently assessed as 'very strong'. Such a strong patient-provider relationship is likely to be a causal factor in response bias, as this paper spells out.

I'd imagine the problem is even worse in the Dutch studies, since their CBT for CFS manual is specifically aimed at getting the patient to see themselves as a non-patient (not sick), and to attribute PEM symptoms to other causes ("normal" fatigue, the flu, etc). It's not just about pleasing the therapist, it's about being deliberately brain-washed to see yourself as healthy.
 
Messages: 75
I'm currently wearing a full-time activity monitor to record my actual amount of activity prior to treatment. It will be a little difficult to inflate my results when my activity level is so closely related to "how I feel". I have to wear it at least 23 hours a day and it records time spent sleeping, inactive, walking and very active. When I plug it into my PC, the people at the OMI can see the uploaded data. This seems like a sensible way of getting real quantitative data, closely tied to how I feel, that will hopefully improve at some point during treatment.
 

Esther12 · Senior Member · Messages: 13,774
It will be a little difficult to inflate my results when my activity level is so closely related to "how I feel".

Yeah. There's still a potential problem if patients are encouraged to shift resources from intellectual challenges to physical activity, or simply to push themselves more despite feeling consistently worse for it (unlikely to be sustainable in the long run), but something like actometers would still be a really useful way of assessing the impact of treatments for CFS.