There's already been plenty of discussion of this paper, but I thought I'd pull out some bits for myself, and may as well make them public. It's open access, and the paper is not that long, so it might be a more worthwhile read than my attempt at a summary (this post is probably 1/4 to 1/3 the length of the article): http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

He starts out with the problem of a lack of replication, and researchers too often putting their faith in the validity of a single positive study. He goes on to lay out some ways of discussing the likelihood of research findings being true, then moves on to the problem of bias.

Testing by Independent Teams: If lots of different teams are looking at an issue, and they and the journals are interested in 'positive' results, then there is an increased chance of positive results being generated by chance, and then coming to affect people's view of reality. This is particularly a problem given a general disinterest in replication: "Unfortunately, in some areas, the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation."

Corollaries [my comments in italics]:

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. [This made me think of a lot of CFS research, which seems to dredge up small associations and then tie them to some psychosocial theory. I think this could also be related to the 'many teams' problem - I wonder how many researchers looked at predictors/associations for CFS, found nothing, and then published nothing?]: "Modern epidemiology is increasingly obliged to target smaller effect sizes. Consequently, the proportion of true research findings is expected to decrease.
In the same line of thinking, if the true effect sizes are very small in a scientific field, this field is likely to be plagued by almost ubiquitous false positive claims. For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors."

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. [Similar to above.]

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results, i.e., bias, u. For several research designs, e.g., randomized controlled trials [18–20] or meta-analyses [21,22], there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes). Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem. For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials.
Simply abolishing selective publication would not make this problem go away. [Remind anyone of anything?]

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research, and typically they are inadequately and sparsely reported [26,27]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable. [An important point that CFS patients are not allowed to make without being condemned for anti-psychiatry militancy.]

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. [Similar to the 'independent teams' problem, but with the extra ingredient of competition - this explanation sounded more plausible to me, as teams are also interested in promoting work which contradicts the claims of other groups.] The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics.
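The corollaries above all fall out of the paper's simple model: R is the pre-study odds that a tested relationship is true, alpha and beta are the usual type I and type II error rates, and u is the bias term. A minimal sketch of the paper's positive predictive value (PPV) formulas, with the specific values of R, alpha, beta, and u below being my own illustrative choices rather than numbers from the paper:

```python
# PPV formulas from Ioannidis (2005): probability a claimed positive
# finding is actually true, given the field's characteristics.
# R     = ratio of true to null relationships among those tested
# alpha = type I error rate; beta = type II error rate (power = 1 - beta)
# u     = bias: proportion of analyses that would not have been positive
#         but end up reported as positive anyway

def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Post-study probability that a positive claim is true, with bias u."""
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

def ppv_n_teams(R, n, alpha=0.05, beta=0.2):
    """PPV when n independent teams test the same question and any
    single positive result gets reported (the 'hot field' corollary)."""
    num = R * (1 - beta ** n)
    den = R + 1 - (1 - alpha) ** n - R * beta ** n
    return num / den

# Well-powered trial of a plausible hypothesis (1:1 prior odds)
print(round(ppv(R=1.0), 2))            # 0.94
# Exploratory field (1 in 1000 relationships true) plus modest bias
print(round(ppv(R=0.001, u=0.1), 3))   # 0.006 - almost all claims false
# Ten teams chasing the same "hot" question
print(round(ppv_n_teams(R=0.1, n=10), 2))
```

This is just the paper's arithmetic restated; the point of corollaries 5 and 6 is that u and n can quietly push the PPV far below the 95% confidence that a p < 0.05 result seems to advertise.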
Most Research Findings Are False for Most Research Designs and for Most Fields

He uses his models to argue that most research findings are false. I may be missing something, but this seemed pretty speculative to me, and rather bold for a paper that's pointing out the undue confidence researchers often have in their work.

Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias

I thought this was an interesting point.

How Can We Improve the Situation?

His suggestions: appropriately targeted large-scale studies; and competing teams, changing the culture of science, and registering and adhering to trial protocols as ways of combating bias.

Conclusion: A few of the references there look potentially interesting.

18-20: Attempts to improve reporting/reduce bias in trials. I'm not sure if this one, or a similar one, has been discussed here before:

25. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA 291: 2457–2465.

26, 27 on COI. There were no specific references for the ideological COIs, but again, I think I remember papers on this already being discussed here.

35. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical trial registration: A statement from the International Committee of Medical Journal Editors. N Engl J Med 351: 1250–1251.

Over and out.