
[2005] Why Most Published Research Findings Are False, by John P. A. Ioannidis

Discussion in 'Other Health News and Research' started by Esther12, Sep 19, 2013.

  1. Esther12

    Esther12 Senior Member

    There's already been plenty of discussion of this paper, but I thought I'd pull out some bits for myself, and may as well make them public.

    It's open access here, and the paper is not that long, so it might be a more worthwhile read than my attempt at a summary (this post is probably 1/4 to 1/3 the length of the article):

    He starts out with the problem of a lack of replication, and researchers too often putting their faith in the validity of a single positive study.

    Goes on to lay out some ways of discussing the likelihood of research findings being true:
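The framework he lays out centres on the positive predictive value (PPV): the post-study probability that a claimed 'positive' finding is actually true, given the pre-study odds R that the tested relationship is real, the significance threshold α, and the Type II error rate β. A minimal sketch of the paper's basic formula, PPV = (1 − β)R / (R − βR + α), with illustrative numbers of my own:

```python
def ppv(R, alpha=0.05, beta=0.2):
    """Post-study probability that a claimed positive finding is true.

    R     -- pre-study odds that the tested relationship is true
    alpha -- Type I error rate (significance threshold)
    beta  -- Type II error rate (1 - power)
    """
    return (1 - beta) * R / (R - beta * R + alpha)

# Well-powered study of a 50:50 hypothesis (R = 1): PPV ~ 0.94
print(round(ppv(1.0), 3))
# Exploratory search where only ~1 in 100 tested hypotheses is true: PPV ~ 0.14
print(round(ppv(0.01), 3))
```

The point of the second call: with long pre-study odds, even a 'significant' result at p < 0.05 is more likely false than true.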

    Then moves on to the problem of bias:

    Testing by Independent Teams: If lots of different teams are looking at an issue, and both they and journals are interested in 'positive' results, then there is an increased chance of positive results being generated by chance and then coming to affect people's view of reality. This is particularly a problem given the general disinterest in replication: "Unfortunately, in some areas, the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation."

    Corollaries [my comments in italics]:

    Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.

    Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. [This made me think of a lot of CFS research which seems to dredge up small associations, and then tie them to some psychosocial theory. I think that this could also be related to the 'many teams' problem - I wonder how many researchers looked at predictors/associations for CFS, found nothing, and then published nothing?] : "Modern epidemiology is increasingly obliged to target smaller effect sizes [16]. Consequently, the proportion of true research findings is expected to decrease. In the same line of thinking, if the true effect sizes are very small in a scientific field, this field is likely to be plagued by almost ubiquitous false positive claims. For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors."
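Both of the first two corollaries work through statistical power: smaller samples or smaller true effects mean a higher β, which drags the PPV down. A rough stdlib-only sketch (normal approximation, two-sided α = 0.05; the effect sizes and sample size are illustrative, not taken from any study):

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d, n):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05,
    for standardized effect size d with n subjects per group (1.96 is the
    two-sided critical value; the far tail is ignored)."""
    return norm_cdf(d * sqrt(n / 2) - 1.96)

# With 50 subjects per group, power collapses as the true effect shrinks:
for d in (0.8, 0.5, 0.2):
    print(d, round(power_two_sample(d, 50), 2))   # 0.98, 0.71, 0.17
```

At d = 0.2 and n = 50 per arm, β is over 0.8, so most true small effects are missed and the published positives are disproportionately flukes.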

    Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. [Similar to above.]

    Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results, i.e., bias, u. For several research designs, e.g., randomized controlled trials [18–20] or meta-analyses [21,22], there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes) [23]. Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) [24] may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem. For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials [25]. Simply abolishing selective publication would not make this problem go away. [Remind anyone of anything?]
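The selective-reporting problem in [25] is easy to demonstrate with a toy Monte Carlo (the numbers here are purely illustrative): if a trial measures k outcomes where no real effect exists, and only the 'best' one is highlighted, the chance of a publishable p < 0.05 grows quickly with k.

```python
import random
random.seed(0)

def best_of_k_significant(k, trials=20000, alpha=0.05):
    """Monte Carlo: fraction of simulated null trials that look 'positive'
    when the best of k independent null outcomes is the one reported.
    Under the null, each p-value is uniform on [0, 1]."""
    hits = 0
    for _ in range(trials):
        best_p = min(random.random() for _ in range(k))
        if best_p < alpha:
            hits += 1
    return hits / trials

print(round(best_of_k_significant(1), 2))   # ~0.05: honest single outcome
print(round(best_of_k_significant(5), 2))   # ~0.23: report the best of five
```

Analytically the second figure is 1 − 0.95^5 ≈ 0.23: outcome switching alone nearly quintuples the false-positive rate.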

    Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28]. [An important point that CFS patients are not allowed to make without being condemned for anti-psychiatry militancy.]

    Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. [Similar to the 'independent teams' problem, but with extra competition - this seemed more plausible to me, as teams are more interested in promoting work which contradicts the claims of other groups.] The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations [29]. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics [29].
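Ioannidis gives a formula for this many-teams case too: when n teams independently probe the same question and any single positive result can surface, PPV = R(1 − β^n) / (R + 1 − (1 − α)^n − Rβ^n). A quick sketch with illustrative numbers of my own:

```python
def ppv_n_teams(R, n, alpha=0.05, beta=0.2):
    """PPV when any of n independent teams' positive results may be the
    one that gets reported (same R, alpha, beta for every team)."""
    return R * (1 - beta**n) / (R + 1 - (1 - alpha)**n - R * beta**n)

# Pre-study odds R = 0.25: the more teams chasing the same question,
# the lower the probability that any given positive claim is true.
for n in (1, 5, 10):
    print(n, round(ppv_n_teams(0.25, n), 2))   # 0.8, 0.52, 0.38
```

With ten competing teams, the first team to a 'positive' result is more likely than not reporting a fluke, even at decent pre-study odds.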

    Most Research Findings Are False for Most Research Designs and for Most Fields

    He uses his models to argue that most research findings are false. I may be missing something, but this seemed pretty speculative to me, and rather bold for a paper that's pointing out the undue confidence researchers often have in their work.

    Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias

    I thought this was an interesting point:

    How Can We Improve the Situation?

    Appropriately targeted large scale studies:

    Competing teams, changing culture of science, registering and adhering to trial protocols as ways of combating bias:


    A few of the references there look potentially interesting.

    18-20: Attempts to improve reporting/reduce bias in trials.

    I'm not sure if this one, or a similar one, has been discussed here before:

    25. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA 291: 2457–2465.

    26, 27 on COI. There were no specific references for the ideological COIs, but again, I think I remember papers on this already being discussed here.

    35. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical trial registration: A statement from the International Committee of Medical Journal Editors. N Engl J Med 351: 1250–1251.

    Over and out.
    Bob, Simon, Valentijn and 1 other person like this.
  2. Snow Leopard

    Snow Leopard Senior Member

    If we had to pin this all down to one problem it would be:

    The research community for ME and CFS is too small.
  3. Simon


    Monmouth, UK

    Just to recap some of the points from Ioannidis that you were highlighting as applicable to CFS research:
    1. lots of findings of small effects supporting BPS theories of CFS
    2. Questionnaires/scales are themselves 'fuzzy' measures when compared to objective outcomes such as death
    3. which gives us small effects on fuzzy measures for CFS
    4. data-dredging amongst countless questionnaire questions vs the desired outcome (fit with the BPS view) could generate false positives. Even more so when the authors fail to correct for multiple comparisons, as in this recent case highlighted by Dolphin
    5. so even some of these modest findings may be false positives that appear only because of data mining and/or a failure to correct for multiple comparisons
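Point 4 can be put in rough numbers. If k independent questionnaire items are each tested against the outcome at α = 0.05 when no true effects exist, the chance of at least one 'significant' hit is 1 − (1 − α)^k; a Bonferroni correction (testing each item at α/k instead) pulls the family-wise rate back to roughly α. A sketch with hypothetical values of k:

```python
def fwer(k, alpha=0.05):
    """Family-wise false-positive rate when k independent null items are
    each tested at threshold alpha."""
    return 1 - (1 - alpha)**k

for k in (1, 10, 20, 50):
    print(k, round(fwer(k), 2))   # 0.05, 0.4, 0.64, 0.92
```

So screening 20 uncorrected items against a null outcome yields at least one 'finding' nearly two times out of three.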
    I am genuinely surprised that when years of searching have found no strong evidence to support the BPS view of CFS, the researchers haven't re-evaluated their theories. Their models assume that whatever triggers CFS, it is perpetuated by faulty patient beliefs and behaviours, resulting in major disability and a dramatic loss of quality of life. Surely such a situation would throw up many detectable strong effects - if it were true.
    peggy-sue, Sean, Esther12 and 2 others like this.
  4. Esther12

    Esther12 Senior Member

    Thanks for the summary of my summary Simon.

    They can just keep shrinking the claimed effect size, while continuing to claim that these are important factors, and that only militant anti-psychiatry patients would complain about having the psychosocial aspects of their lives medicalised anyway. From their point of view, I don't see any advantage in pulling out. Do you really think Chalder would benefit from admitting how ineffective her treatments, or how flawed her theories, were?

    I don't know. If more money is spent on CFS research without there also being a real cultural change (a commitment to releasing more data, measuring more objective outcomes and outcomes that patients care about, less of a focus on finding positive results, etc.), then I think that the problems we've already seen could just be expanded and worsened.

    There's a real lack of solid starting points, and that means that the researchers who will be most 'successful' are those working in the areas most susceptible to the sort of problems that allow meaningless results to be registered as positive findings. More researchers would increase the chance of a genuine breakthrough, which could actually lead to real progress, but it would also be likely to lead to a massive increase in dross.

    This bit interested me, not least because there's a constant thought in the back of my mind of 'this biopsychosocial stuff can't be pure quackery... surely someone would have noticed - what am I missing?':

    Sean likes this.
  5. Simon


    Monmouth, UK
    I was hoping you might summarise my summary of your summary: 120 character tweet?

    I'm not sure about that. Their model of perpetuation makes strong claims, implying strong effects; if they want to argue that the effect sizes are genuinely modest (as opposed to merely having failed to find big effects) then they have to radically modify their model, i.e. to the point that BPS factors alone (including any secondary biological factors) cannot explain CFS. They have yet to do so. I think the idea that BPS factors could contribute to, but not cause, perpetuation would be a great deal less controversial, and would be more in line with the views about other chronic illnesses. The issue then would be illness management, not cure.

    You may have a point.

    I wonder if it is different for clinician researchers. The optimist in me likes to think that most researchers would eventually get tired of heading down a blind alley (none of us would let go of a pet theory easily).
    Esther12 likes this.
  6. user9876

    user9876 Senior Member

    I thought this blog was quite a good analysis of how papers mislead.

    It's talking about psychological treatments for breast cancer and the claims made in papers that they prolong life. It turns out that the data didn't really support this: the results were cherry-picked after the end of the trial, and there were bad statistics too, with the quoted mean overstating the results because of a single data point.
    Valentijn, Bob and Esther12 like this.
  7. Valentijn

    Valentijn Activity Level: 3

    Amersfoort, Netherlands
    The problem is that they are not researchers who are looking for answers. They are CBT/GET practitioners designing trials and presenting data in the manner which is most persuasive in convincing others (who don't look closely) that CBT/GET is the only effective treatment.
  8. Esther12

    Esther12 Senior Member

    Look at how they're redefining 'recovery' outside of CFS too. There's now lots of talk about the most important health problem being people's 'resilience' to health problems, and willingness to not see themselves as ill. (Most important to whom, I wonder?)

    The PACE recovery definition may be particularly absurd, but it also fits into a pattern of redefining the 'sickness role', and what good health means.

    When words have no real meaning, people can get away with saying all sorts!
    Valentijn likes this.
