
"A new generation of bias in EBM" - blog post lists some biases Evidence-based Medicine often misses

Discussion in 'Other Health News and Research' started by Dolphin, Oct 17, 2013.

  1. Dolphin (Senior Member)

  2. Esther12 (Senior Member)


    For some reason, the link seems to have changed.

    I'm actually just going to copy this short blog over, in case the link changes again and causes trouble:

    A new generation of bias in EBM

    In the EUROPA trial 12,218 patients were randomized to receive perindopril or placebo. 9.9% of the participants in the ‘placebo’ group died or had a heart attack, whereas only 8% in the experimental group died or had a heart attack: roughly a 2% absolute effect size. You would have to treat 50 patients with the drug to get one with a positive outcome.
    According to current EBM standards, the EUROPA study provided very good evidence supporting the effects of perindopril because the study was large, randomized and double-blind. On this basis the authors of the study recommended that “all patients with coronary heart disease” should use the drug.
    However, there are several problems with this and other large studies with small effects that are overlooked by standard EBM critical appraisal methods.

    Exaggerating effect sizes

    The authors of the EUROPA study reported the relative effect size of 20%, which sounds much more impressive than the 2% absolute reduction, and most readers cannot interpret the difference between relative and absolute risk. My next blog will describe a method for teaching the difference between absolute and relative risk – hopefully one you will never forget.
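The absolute/relative distinction is easy to check with a few lines of Python. This is only a sketch of the arithmetic using the EUROPA figures quoted above; the function and variable names are illustrative, not from any trial publication:

```python
def risk_summary(control_risk: float, treatment_risk: float) -> dict:
    """Compute absolute risk reduction (ARR), relative risk
    reduction (RRR), and number needed to treat (NNT)."""
    arr = control_risk - treatment_risk  # absolute difference in event rates
    rrr = arr / control_risk             # fraction of the baseline risk removed
    nnt = 1 / arr                        # patients treated per event avoided
    return {"ARR": arr, "RRR": rrr, "NNT": nnt}

# EUROPA: 9.9% events on placebo, 8.0% on perindopril
summary = risk_summary(control_risk=0.099, treatment_risk=0.080)
print(f"ARR: {summary['ARR']:.1%}")  # ~1.9% absolute
print(f"RRR: {summary['RRR']:.1%}")  # ~19% relative -- the headline number
print(f"NNT: {summary['NNT']:.0f}")  # ~53, i.e. roughly 50
```

The same 1.9 percentage-point difference yields both the unimpressive "2% absolute" figure and the impressive-sounding "20% relative" figure; only the denominator changes.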

    The paradox of large studies

    The larger the effect of the treatment, the smaller the required trial: you don’t need thousands of patients to realize that general anesthesia, the Heimlich maneuver, or external defibrillation work. So while large trials sound impressive (and for methodological reasons they are), they indicate that the effect size is small. To be sure, small effects are sometimes important, for example when they involve reducing the chances of dying. Yet small apparent effects are also more likely to arise from hidden biases.
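This "paradox" follows directly from standard sample-size arithmetic. As a rough sketch, assuming the usual pooled-variance normal approximation for comparing two proportions at alpha = 0.05 and 80% power (the function name is mine, for illustration only):

```python
from math import ceil

def n_per_arm(p_control: float, p_treatment: float,
              z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate patients needed per arm to detect the difference
    between two event rates (pooled-variance normal approximation)."""
    p_bar = (p_control + p_treatment) / 2
    delta = abs(p_control - p_treatment)
    return ceil((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2)

# EUROPA-sized effect (9.9% vs 8.0% event rates): thousands per arm
print(n_per_arm(0.099, 0.080))
# A dramatic effect (50% vs 10% event rates): a couple of dozen per arm
print(n_per_arm(0.50, 0.10))
```

The required trial size grows with the inverse square of the effect, which is why a 12,000-patient trial is itself a hint that the effect being chased is small.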

    Publication bias

    Most trials remain unpublished, especially those with negative results. For instance, Turner et al. identified 74 antidepressant trials registered with the FDA. Of the 38 with positive results, 37 were published; of the 36 with negative or questionable results, only 14 were published. Unpublished studies are notoriously difficult to obtain and are often not included in systematic reviews, which makes the results of those reviews questionable. In a real example of how this can influence treatment decisions, Carl Heneghan and colleagues conducted a detailed investigation of the evidence for Tamiflu for preventing and treating influenza in healthy adults. A 2006 review of the drug concluded it had some effectiveness, and on this basis billions of pounds of taxpayer money were spent on it. However, it turned out that the review did not contain all the trials because the sponsor did not release them.
    When all the trials were obtained, the evidence supporting Tamiflu’s benefits became questionable, and it became clear that side-effects were far more common than initially believed.
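The distortion from selective publication can be seen with simple arithmetic on the Turner et al. figures quoted above (a sketch; the data layout is mine):

```python
# Turner et al.: 74 FDA-registered antidepressant trials.
positive = {"total": 38, "published": 37}  # trials with positive results
negative = {"total": 36, "published": 14}  # negative/questionable results

# Share of positive trials among everything the FDA saw...
true_positive_rate = positive["total"] / (positive["total"] + negative["total"])

# ...versus the share a reader of the published literature would see.
published_total = positive["published"] + negative["published"]
apparent_positive_rate = positive["published"] / published_total

print(f"All registered trials: {true_positive_rate:.0%} positive")    # ~51%
print(f"Published trials only: {apparent_positive_rate:.0%} positive")  # ~73%
```

A reader of the journals alone would conclude that roughly three quarters of the trials were positive, when in fact it was about half.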

    Conflict of interest

    Biased researchers can influence study results. Lundh et al. recently found that trials sponsored by the manufacturing company are more likely to report favourable results than trials with other sponsors. In a more dramatic example, Heres et al. found that olanzapine beat risperidone, risperidone beat quetiapine, and quetiapine beat olanzapine.
    What predicted the success? You guessed it: the sponsor.

    Whoever made the drug in the trial got the result they wanted. In both of these examples the industry-sponsored research was not of lower quality according to standard EBM criteria for appraising evidence. Instead, ‘hidden biases’ had crept in.

    I am not anti-industry; indeed, I know that no study is free from all bias. Yet we have strong evidence that industry-sponsored research exaggerates the benefits of its sponsors’ treatments, and this needs to be considered when interpreting such research.

    The EUROPA study revisited

    So how might these hidden biases have influenced the EUROPA trial? James Penston notes the following:
    • All five members of the EUROPA executive committee declared a conflict of interest.
    • 10.5% of the patients in the run-in phase of the trial were excluded, mostly for reasons related to treatment with perindopril.
    • A subsequent study of a similar drug failed to replicate the effect.
    • 23% in the perindopril group dropped out of the trial, whereas 21% in the placebo group dropped out.
    These factors might reasonably lead us to question whether the effects in the EUROPA study are believable. Yet none of the biases discussed here are adequately addressed by common EBM critical appraisal methodology, and something needs to be done about it.
  3. anciendaze (Senior Member)

    There is a very general problem with most research on health that gets wide publicity. (Here's an example.) Over and over again we hear that exercise is the cure for all kinds of things. Just find me a single such study that tests the hypothesis that what they are measuring is the difference between healthy people, who naturally exercise more, and people with chronic subclinical disease.

    There are many research papers showing advance signs of Alzheimer’s, Parkinson’s, cardiovascular disease, etc. 10 or 20 years before official onset. Rheumatological disorders may also take years to reach official diagnostic criteria. This is clear evidence that chronic subclinical disease exists and is widespread in the general population, yet it has had no impact on the studies that get the publicity.

    Is it implausible that the healthcare system is missing many cases of subclinical disease? Try this well-known infectious disease as an example. Now consider what will happen to anything less clearly identified.

    This is much more widespread than those in the ME/CFS community think. It affects research on the major causes of death and disability in the general population.
