
Bias due to lack of patient blinding in clinical trials. A systematic review of...

Snow Leopard

Hibernating
Messages: 5,902
Location: South Australia
This is an open access study from 2014:
http://ije.oxfordjournals.org/content/43/4/1272.full

Bias due to lack of patient blinding in clinical trials. A systematic review of trials randomizing patients to blind and nonblind sub-studies
Asbjørn Hróbjartsson, Frida Emanuelsson, Ann Sofia Skou Thomsen, Jørgen Hilden and Stig Brorson
Accepted May 7, 2014.

Abstract
Background: Blinding patients in clinical trials is a key methodological procedure, but the expected degree of bias due to nonblinded patients on estimated treatment effects is unknown.

Methods: Systematic review of randomized clinical trials with one sub-study (i.e. experimental vs control) involving blinded patients and another, otherwise identical, sub-study involving nonblinded patients. Within each trial, we compared the difference in effect sizes (i.e. standardized mean differences) between the sub-studies. A difference <0 indicates that nonblinded patients generated a more optimistic effect estimate. We pooled the differences with random-effects inverse variance meta-analysis, and explored reasons for heterogeneity.

Results: Our main analysis included 12 trials (3869 patients). The average difference in effect size for patient-reported outcomes was –0.56 (95% confidence interval –0.71 to –0.41), (I² = 60%, P = 0.004), i.e. nonblinded patients exaggerated the effect size by an average of 0.56 standard deviation, but with considerable variation. Two of the 12 trials also used observer-reported outcomes, showing no indication of exaggerated effects due to lack of patient blinding. There was a larger effect size difference in the 10 acupuncture trials [–0.63 (–0.77 to –0.49)] than in the two non-acupuncture trials [–0.17 (–0.41 to 0.07)]. Lack of patient blinding also increased attrition and use of co-interventions: ratio of control group attrition risk 1.79 (1.18 to 2.70), and ratio of control group co-intervention risk 1.55 (0.99 to 2.43).

Conclusions: This study provides empirical evidence of pronounced bias due to lack of patient blinding in complementary/alternative randomized clinical trials with patient-reported outcomes.

Bias mechanisms
In trials with patient-reported outcomes, bias due to the lack of patient blinding is mainly caused by a combination of response bias (i.e. a tendency for patients to report symptoms in a way they think is expected), placebo effect, differential attrition and differential co-intervention. Further causes of concern in some trials are treatment switches, i.e. patients allocated to the experimental intervention received the control intervention (or vice versa), or contamination, i.e. patients received unintended experimental treatment.

Blinding patients is likely also important for trials with observer-reported outcomes, but the empirical evidence is less clear and the degree of bias may be less pronounced. Patient blinding is sometimes not possible, for example in trials of exercise, surgery or psychotherapy, or not regarded as appropriate in trials with a predominantly pragmatic aim. Our results provide a tentative empirical framework for interpretation of results from such trials.
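For anyone curious what the pooling step in the Methods actually does, here is a minimal Python sketch of random-effects inverse-variance (DerSimonian–Laird) pooling of per-trial differences in standardized mean differences, with I² for heterogeneity. The input numbers below are made up for illustration; they are not the paper's data.

```python
import numpy as np

def dersimonian_laird(diffs, ses):
    """Pool per-trial effect-size differences with a random-effects
    inverse-variance (DerSimonian-Laird) meta-analysis."""
    diffs = np.asarray(diffs, dtype=float)  # difference in SMDs per trial
    v = np.asarray(ses, dtype=float) ** 2   # within-trial variances

    # Fixed-effect (inverse-variance) pooling, needed for Cochran's Q
    w = 1.0 / v
    pooled_fixed = np.sum(w * diffs) / np.sum(w)
    q = np.sum(w * (diffs - pooled_fixed) ** 2)
    df = len(diffs) - 1

    # Between-trial variance tau^2 (DerSimonian-Laird estimator)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights, pooled estimate, 95% CI and I^2
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * diffs) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-trial SMD differences and standard errors (illustration only)
print(dersimonian_laird([-0.7, -0.5, -0.6, -0.2], [0.15, 0.12, 0.2, 0.18]))
```

A pooled value below zero, as in the review, means the nonblinded sub-studies reported the larger (more optimistic) effects on average.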
 

SOC

Senior Member
Messages: 7,849
Patient blinding is sometimes not possible, for example in trials of exercise, surgery or psychotherapy,
How convenient for them. :rolleyes:

You would think good research practice would require tighter p-value thresholds and other safeguards when blinding is not possible, since it is known that effect sizes are biased (exaggerated) without blinding, but noooooo....

In some cases I don't know why they even bother to torture the patients. They might as well just sit in a conference room and make up the data, it's about as accurate as what some of them are doing now. If you can't do good science, you might as well be honest about the fact that your data is crap.
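To put a rough number on that point: an exaggeration of 0.56 standard deviations is, by itself, enough to make a completely ineffective treatment look statistically significant in a modestly sized unblinded trial. A back-of-envelope sketch in Python, using a hypothetical sample size and the usual large-sample variance approximation for the standardized mean difference:

```python
import math

def smd_p_value(d, n_per_arm):
    """Two-sided p-value for a standardized mean difference d with two equal
    arms, using the large-sample approximation Var(d) ~ 2/n + d^2/(4n)."""
    se = math.sqrt(2.0 / n_per_arm + d ** 2 / (4.0 * n_per_arm))
    z = abs(d) / se
    return math.erfc(z / math.sqrt(2.0))

# Hypothetical scenario: the treatment does nothing, but lack of blinding
# inflates the patient-reported SMD by ~0.56 (the review's average).
print(round(smd_p_value(0.56, 50), 4))  # ~0.006 with 50 patients per arm
```

So a null treatment can clear the conventional P < 0.05 bar on unblinding bias alone, which is exactly why unblinded trials with patient-reported outcomes deserve stricter evidential standards.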