
It Ain't Necessarily So: Why Much of the Medical Literature Is Wrong

Ema

Senior Member
Visit Medscape to see the entire article.

It Ain't Necessarily So: Why Much of the Medical Literature Is Wrong

Christopher Labos, MD CM, MSc, FRCPC

September 09, 2014
In 1897, eight-year-old Virginia O'Hanlon wrote to the New York Sun to ask, "Is there a Santa Claus?"[1] Virginia's father, Dr. Phillip O'Hanlon, suggested that course of action because "if you see it in the Sun, it's so." Today many clinicians and health professionals may share the same faith in the printed word and assume that if it says it in the New England Journal of Medicine(NEJM) or JAMA or The Lancet, then it's so.

Putting the existence of Santa Claus aside, John Ioannidis[2] and others have argued that much of the medical literature is prone to bias and is, in fact, wrong.

Given a statistical association between X and Y, most people assume that X caused Y. However, we can easily come up with 5 other scenarios that explain the same association.

1. Reverse Causality
Given only an association between X and Y, it may be just as plausible that Y caused X as that X caused Y. In most cases, it is obvious which variable is the cause and which is the effect. If a study showed a statistical association between smoking and coronary heart disease (CHD), it would be clear that smoking causes CHD and not that CHD makes people smoke. Because the smoking preceded the CHD, reverse causality in this case is impossible. But the situation is not always that clear-cut. Consider a study published in the NEJM that showed an association between diabetes and pancreatic cancer.[3] The casual reader might conclude that diabetes causes pancreatic cancer. However, further analysis showed that much of the diabetes was of recent onset. The pancreatic cancer preceded the diabetes, and the cancer subsequently destroyed the insulin-producing islet cells of the pancreas. Therefore, this was not a case of diabetes causing pancreatic cancer but of pancreatic cancer causing the diabetes.

Mistaking what came first in the order of causation is a form of protopathic bias.[4] There are numerous examples in the literature. For example, an assumed association between breastfeeding and stunted growth[5] actually reflected the fact that sicker infants were preferentially breastfed for longer periods. Thus, stunted growth led to more breastfeeding, not the other way around. Similarly, an apparent association between oral estrogens and endometrial cancer was not quite what it seemed.[6] Oral estrogens may be prescribed for uterine bleeding, and the bleeding may be caused by an undiagnosed cancer. Therefore, when the cancer is ultimately diagnosed down the road, it will seem as if the estrogens came before the cancer, when in fact it was the cancer (and the bleeding) that led to the prescription of estrogens. Clearly, it can be difficult to disentangle which factor is the cause and which is the effect.
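
To make the point concrete, here is a minimal Python sketch (my own construction, with invented numbers rather than the article's data): in the code, the cancer causes the diabetes, yet a cross-sectional snapshot shows exactly the association a naive reader would attribute to the opposite direction.

```python
import random

# Minimal sketch (invented numbers, not the article's data): pancreatic
# cancer CAUSES diabetes here, by destroying insulin-producing islet cells.
random.seed(11)

def patient():
    cancer = random.random() < 0.01
    # Cancer sharply raises the chance of (recent-onset) diabetes.
    diabetes = random.random() < (0.5 if cancer else 0.08)
    return cancer, diabetes

patients = [patient() for _ in range(200_000)]

def cancer_rate(diabetic):
    group = [c for c, d in patients if d == diabetic]
    return sum(group) / len(group)

print(f"Cancer prevalence in diabetics:     {cancer_rate(True):.4f}")
print(f"Cancer prevalence in non-diabetics: {cancer_rate(False):.4f}")
# The association is real, but a snapshot cannot show its direction;
# only the timing (diabetes appearing after the tumour) reveals it.
```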

2. The Play of Chance and the DICE Miracle
Whenever a study finds an association between 2 variables, X and Y, there is always the possibility that the association was simply the result of random chance.

Most people assess whether a finding is due to chance by checking if the P value is less than .05. There are many reasons why this is the wrong way to approach the problem, and an excellent review by Steven Goodman[7] about the popular misconceptions surrounding the P value is a must-read for any consumer of medical literature.
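
As a quick illustration of what P < .05 does and does not protect against, here is a minimal Python sketch (mine, not from the article or Goodman's review): even when no real effect exists, roughly 1 test in 20 crosses the .05 threshold by chance alone.

```python
import random

# Minimal sketch (mine, not from the article): compare two groups drawn
# from the SAME distribution, so there is no true effect to find.
random.seed(42)

def fake_trial(n=100):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    se = (2 / n) ** 0.5               # both groups have variance 1
    z = (mean_a - mean_b) / se        # crude two-sample z statistic
    return abs(z) > 1.96              # roughly P < .05, two-sided

trials = 10_000
false_positives = sum(fake_trial() for _ in range(trials))
print(f"'Significant' findings with no real effect: {false_positives / trials:.1%}")
# Prints roughly 5% -- each one would look like a publishable result.
```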

To illustrate the point, consider the ISIS-2 trial,[8] which showed reduced mortality in patients given aspirin after myocardial infarction. However, subgroup analyses identified some patients who did not benefit: those born under the astrological signs of Gemini and Libra. Patients born under other zodiac signs derived a clear benefit, with a P value < .00001. Unless we are prepared to re-examine the validity of astrology, we would have to admit that this was a spurious finding due solely to chance. Similarly, Counsell et al.[9] performed an elegant experiment using 3 different colored dice to simulate the outcomes of theoretical clinical trials and a subsequent meta-analysis. Students were asked to roll pairs of dice, with a 6 counting as a patient death and any other number counting as survival. The students were told that one die might be more or less "effective" (ie, generate more sixes, or study deaths). Sure enough, no effect was seen for red dice, but a subgroup of white and green dice showed a 39% risk reduction (P = .02). Some students even reported that their dice were "loaded." This finding was very surprising because Counsell had played a trick on his students and used only ordinary dice. Any difference seen for white and green dice was a completely random result.
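
For the curious, here is a rough Python re-creation of the dice experiment (my sketch; the trial counts are invented and this is not Counsell's exact protocol), showing how easily fair dice produce "significant" subgroups:

```python
import random

# Rough re-creation of the dice experiment (my sketch, not Counsell's
# exact protocol): every die is fair, so any "treatment effect" is chance.
random.seed(7)

def run_trial(rolls=20):
    """One small 'trial': a 6 is a patient death, anything else survival."""
    control = sum(random.randint(1, 6) == 6 for _ in range(rolls))
    treated = sum(random.randint(1, 6) == 6 for _ in range(rolls))
    return control, treated

for color in ("red", "white", "green"):
    control_deaths = treated_deaths = 0
    for _ in range(15):               # pool 15 trials, as in a meta-analysis
        c, t = run_trial()
        control_deaths += c
        treated_deaths += t
    rr = treated_deaths / control_deaths if control_deaths else float("inf")
    print(f"{color:5s} dice: control={control_deaths:3d} deaths, "
          f"treated={treated_deaths:3d} deaths, risk ratio={rr:.2f}")
# Run this a few times: some colors drift well away from a risk ratio
# of 1.0 even though nothing but chance is at work.
```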

Cont...
 

Simon

Senior Member
Thanks, Ema

Here's a link to the article:
It Ain't Necessarily So: Why Is So Much of the Medical Literature Wrong?

Here are the remaining three points from the article, for interest:

3. Bias: Coffee, Cellphones, and Chocolate
Bias occurs when there is no real association between X and Y, but one is manufactured because of the way we conducted our study. Delgado-Rodriguez and Llorca[4] identified 74 types of bias in their glossary of the most common biases, which can be broadly categorized into 2 main types: selection bias and information bias....
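
A minimal sketch of selection bias, with hypothetical numbers of my own choosing: coffee drinking and disease are unrelated in the population, but if having either trait makes a person more likely to end up in the study sample, the sample shows an association that the population does not.

```python
import random

# Hypothetical numbers, mine: coffee and disease are independent in the
# population, but either trait raises the chance of entering the study.
random.seed(1)

population = [(random.random() < 0.4, random.random() < 0.1)   # (coffee, disease)
              for _ in range(200_000)]

def selected(coffee, disease):
    p = 0.05 + (0.30 if coffee else 0) + (0.30 if disease else 0)
    return random.random() < p

sample = [(c, d) for c, d in population if selected(c, d)]

def disease_rate(people, coffee_status):
    group = [d for c, d in people if c == coffee_status]
    return sum(group) / len(group)

print(f"Population: {disease_rate(population, True):.3f} (drinkers) "
      f"vs {disease_rate(population, False):.3f} (non-drinkers)")
print(f"Sample:     {disease_rate(sample, True):.3f} (drinkers) "
      f"vs {disease_rate(sample, False):.3f} (non-drinkers)")
# The population shows no association; the selected sample shows a large
# (here spuriously 'protective') one, manufactured purely by selection.
```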

4. Confounding
Confounding, unlike bias, occurs when there really is an association between X and Y, but the magnitude of that association is influenced by a third variable... For example, diabetes confounds the relationship between renal failure and heart disease because it can lead to both conditions...
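
Here is a small simulation of that diabetes example (my invented rates, purely illustrative): renal failure has no effect on heart disease in the code, yet a crude comparison shows a strong association that disappears once we stratify by diabetes.

```python
import random

# Invented rates, purely illustrative: diabetes raises the risk of BOTH
# renal failure and heart disease; renal failure itself does nothing.
random.seed(3)

def person():
    diabetes = random.random() < 0.2
    renal = random.random() < (0.30 if diabetes else 0.05)
    heart = random.random() < (0.40 if diabetes else 0.10)  # independent of renal
    return diabetes, renal, heart

people = [person() for _ in range(500_000)]

def heart_rate(group):
    return sum(h for _, _, h in group) / len(group)

with_rf = [p for p in people if p[1]]
without_rf = [p for p in people if not p[1]]
print(f"Crude: heart disease {heart_rate(with_rf):.3f} with renal failure "
      f"vs {heart_rate(without_rf):.3f} without")

for dm in (True, False):
    stratum = [p for p in people if p[0] == dm]
    rf = [p for p in stratum if p[1]]
    no_rf = [p for p in stratum if not p[1]]
    print(f"Diabetes={dm}: {heart_rate(rf):.3f} vs {heart_rate(no_rf):.3f}")
# The crude comparison suggests renal failure roughly doubles heart disease
# risk; within each diabetes stratum the difference vanishes -- confounding.
```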

5. Exaggerated Risk
Finally, let us make the unlikely assumption that we have a trial where nothing went wrong, and we are free of all of the problems discussed above. The greatest danger lies in our misinterpretation of the findings...
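
One classic form of this misinterpretation (my example, since the excerpt is truncated here) is quoting a relative risk reduction without the absolute numbers behind it:

```python
# Invented numbers, mine: a '50% risk reduction' that moves almost nobody.
control_risk = 0.002   # 2 in 1,000 untreated patients have the outcome
treated_risk = 0.001   # 1 in 1,000 treated patients have the outcome

rrr = 1 - treated_risk / control_risk      # relative risk reduction
arr = control_risk - treated_risk          # absolute risk reduction
nnt = 1 / arr                              # number needed to treat

print(f"Relative risk reduction: {rrr:.0%}")   # 50% -- sounds dramatic
print(f"Absolute risk reduction: {arr:.1%}")   # 0.1% -- far less so
print(f"Number needed to treat:  {nnt:.0f}")   # 1000 patients per event avoided
```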