
Empirical Evidence of Study Design Biases in Randomized Trials: Systematic Review of Meta-Epidemiological Studies

Dolphin

Senior Member
Messages
17,567
Free: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0159267

Research Article

Empirical Evidence of Study Design Biases in Randomized Trials: Systematic Review of Meta-Epidemiological Studies
  • Matthew J. Page ,
  • Julian P. T. Higgins,
  • Gemma Clayton,
  • Jonathan A. C. Sterne,
  • Asbjørn Hróbjartsson,
  • Jelena Savović


Abstract
Objective
To synthesise evidence on the average bias and heterogeneity associated with reported methodological features of randomized trials.

Design
Systematic review of meta-epidemiological studies.

Methods
We retrieved eligible studies included in a recent AHRQ-EPC review on this topic (latest search September 2012), and searched Ovid MEDLINE and Ovid EMBASE for studies indexed from Jan 2012-May 2015.

Data were extracted by one author and verified by another.

We combined estimates of average bias (e.g. ratio of odds ratios (ROR) or difference in standardised mean differences (dSMD)) in meta-analyses using the random-effects model.

Analyses were stratified by type of outcome (“mortality” versus “other objective” versus “subjective”).

Direction of effect was standardised so that ROR < 1 and dSMD < 0 denotes a larger intervention effect estimate in trials with an inadequate or unclear (versus adequate) characteristic.
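To make the Methods concrete, here is a minimal sketch of how per-study bias estimates (log ratios of odds ratios, log RORs) can be pooled with a DerSimonian-Laird random-effects model, the standard approach for this kind of synthesis. The numbers below are made up for illustration and are not taken from the paper:

```python
# Hypothetical sketch: pooling log RORs with a DerSimonian-Laird
# random-effects model. Illustrative data only, not from the paper.
import math

def pool_random_effects(log_rors, variances):
    """Return (pooled log ROR, 95% CI low, 95% CI high, tau^2)."""
    k = len(log_rors)
    w = [1.0 / v for v in variances]                  # inverse-variance weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_rors)) / sum(w)
    # Cochran's Q measures between-study heterogeneity
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_rors))
    # DerSimonian-Laird estimate of between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_rors)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

# Per the standardised direction of effect: ROR < 1 (log ROR < 0) means a
# larger intervention effect in trials with the inadequate/unclear feature.
log_rors = [math.log(r) for r in (0.88, 0.95, 0.91, 1.02, 0.85)]
variances = [0.004, 0.006, 0.003, 0.008, 0.005]

pooled, lo, hi, tau2 = pool_random_effects(log_rors, variances)
print(f"pooled ROR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

With most of the hypothetical studies showing ROR < 1, the pooled ROR comes out below 1, i.e. an average exaggeration of effect estimates in trials with the inadequate characteristic.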

Results

We included 24 studies.

The available evidence suggests that intervention effect estimates may be exaggerated in trials with inadequate/unclear (versus adequate) sequence generation (ROR 0.93, 95% CI 0.86 to 0.99; 7 studies) and allocation concealment (ROR 0.90, 95% CI 0.84 to 0.97; 7 studies).

For these characteristics, the average bias appeared to be larger in trials of subjective outcomes compared with other objective outcomes.

Also, intervention effects for subjective outcomes appear to be exaggerated in trials with lack of/unclear blinding of participants (versus blinding) (dSMD -0.37, 95% CI -0.77 to 0.04; 2 studies), lack of/unclear blinding of outcome assessors (ROR 0.64, 95% CI 0.43 to 0.96; 1 study) and lack of/unclear double blinding (ROR 0.77, 95% CI 0.61 to 0.93; 1 study).

The influence of other characteristics (e.g. unblinded trial personnel, attrition) is unclear.

Conclusions

Certain characteristics of randomized trials may exaggerate intervention effect estimates.

The average bias appears to be greatest in trials of subjective outcomes.

More research on several characteristics, particularly attrition and selective reporting, is needed.
 

A.B.

Senior Member
Messages
3,780
[Sarcasm] Subjective outcomes have more bias than objective ones? Blinding is important? Isn't this news like a century old? Most of science changed to entirely objective outcomes mid 20th century; it's time the rest caught up. Subjective outcomes can help nuance results, but are not reliable in themselves.

There seems to be a broader failure to teach good scientific methods in some parts of medicine and psychology.

If you're cynical like me, you might even say that some fields of study are intentionally avoiding good methods because they cannot produce positive results with them.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
If you're cynical like me, you might even say that some fields of study are intentionally avoiding good methods because they cannot produce positive results with them.
I think that is part of it. I also think that psychiatry has been given a free pass because it's difficult to do good objective research in dealing with the brain. Until psychiatry can get over this issue it will only advance slowly, if at all. Not all psychiatric and psychological research has such issues, though ... just way too much of it.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
[Sarcasm] Subjective outcomes have more bias than objective ones? Blinding is important? Isn't this news like a century old? Most of science changed to entirely objective outcomes mid 20th century; it's time the rest caught up. Subjective outcomes can help nuance results, but are not reliable in themselves.

Indeed. In fact it seems hard to call it news when blinding itself was introduced because the problem was obvious in advance to anyone who had actually tried doing an experiment. It seems a bit like discovering that gardeners get more thorns in their hands if they do not wear gloves - and surprisingly particularly if they are handling roses.

The fact that this gets published seems, as A.B. says, to be a worrying sign that a large number of people in biomedical science these days do not have a basic education in methodology.

On the other hand it is quite interesting to see the obvious actually being measured. It would have indeed been interesting if it turned out not to be the case - i.e. that there has never been any point in wearing gloves when you prune roses whatever one might think.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
On the other hand it is quite interesting to see the obvious actually being measured.
This is an important point. Much of medicine lacks a sound evidence base, and at least some of that is because things are so darn obvious that nobody has bothered to test them. The more the obvious is nailed down and demonstrated, the better, as we can have more confidence in even more of medicine at its fundamentals. It must also be kept in mind that every now and then something that is obviously "right" in medicine, and science in general, is disproved.