xchocoholic
Senior Member
Messages: 2,947
Location: Florida
I thought we could all relate to this article ... it's long but well worth it. I had to break this up so be sure to read the next post too ... x
http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-and-medical-science/8269
Lies, Damned Lies, and Medical Science
Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors, to a striking extent, still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.
By David H. Freedman
In 2001, rumors were circulating in Greek hospitals that surgery residents, eager to rack up scalpel time, were falsely diagnosing hapless Albanian immigrants with appendicitis. At the University of Ioannina medical school's teaching hospital, a newly minted doctor named Athina Tatsioni was discussing the rumors with colleagues when a professor who had overheard asked her if she'd like to try to prove whether they were true; he seemed to be almost daring her. She accepted the challenge and, with the professor's and other colleagues' help, eventually produced a formal study showing that, for whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names. "It was hard to find a journal willing to publish it, but we did," recalls Tatsioni. "I also discovered that I really liked research." Good thing, because the study had actually been a sort of audition. The professor, it turned out, had been putting together a team of exceptionally brash and curious young clinicians and Ph.D.s to join him in tackling an unusual and controversial agenda.
Last spring, I sat in on one of the team's weekly meetings on the medical school's campus, which is plunked crazily across a series of sharp hills. The building in which we met, like most at the school, had the look of a barracks and was festooned with political graffiti. But the group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama. The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.
One of the researchers, a biostatistician named Georgia Salanti, fired up a laptop and projector and started to take the group through a study she and a few colleagues were completing that asked this question: were drug companies manipulating published research to make their drugs look good? Salanti ticked off data that seemed to indicate they were, but the other team members almost immediately started interrupting. One noted that Salanti's study didn't address the fact that drug-company research wasn't measuring critically important "hard" outcomes for patients, such as survival versus death, and instead tended to measure "softer" outcomes, such as self-reported symptoms ("my chest doesn't hurt as much today"). Another pointed out that Salanti's study ignored the fact that when drug-company data seemed to show patients' health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal.
Salanti remained poised, as if the grilling were par for the course, and gamely acknowledged that the suggestions were all good, but a single study can't prove everything, she said. Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn't it possible, he asked, that drug companies were carefully selecting the topics of their studies (for example, comparing their new drugs against those already known to be inferior to others on the market) so that they were ahead of the game even before the data juggling began? "Maybe sometimes it's the questions that are biased, not the answers," he said, flashing a friendly smile. Everyone nodded. Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?
That question has been central to Ioannidis's career. He's what's known as a meta-researcher, and he's become one of the world's foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies (conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain) is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field's top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else's work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change, or even to publicly admitting that there's a problem.
The city of Ioannina is a big college town a short drive from the ruins of a 20,000-seat amphitheater and a Zeusian sanctuary built at the site of the Dodona oracle. The oracle was said to have issued pronouncements to priests through the rustling of a sacred oak tree. Today, a different oak tree at the site provides visitors with a chance to try their own hands at extracting a prophecy. "I take all the researchers who visit me here, and almost every single one of them asks the tree the same question," Ioannidis tells me, as we contemplate the tree the day after the team's meeting. "Will my research grant be approved?" He chuckles, but Ioannidis (pronounced yo-NEE-dees) tends to laugh not so much in mirth as to soften the sting of his attack. And sure enough, he goes on to suggest that an obsession with winning funding has gone a long way toward weakening the reliability of medical research.
He first stumbled on the sorts of problems plaguing the field, he explains, as a young physician-researcher in the early 1990s at Harvard. At the time, he was interested in diagnosing rare diseases, for which a lack of case data can leave doctors with little to go on other than intuition and rules of thumb. But he noticed that doctors seemed to proceed in much the same manner even when it came to cancer, heart disease, and other common ailments. Where were the hard data that would back up their treatment decisions? There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases. A new "evidence-based medicine" movement was just starting to gather force, and Ioannidis decided to throw himself into it, working first with prominent researchers at Tufts University and then taking positions at Johns Hopkins University and the National Institutes of Health. He was unusually well armed: he had been a math prodigy of near-celebrity status in high school in Greece, and had followed his parents, who were both physician-researchers, into medicine. Now he'd have a chance to combine math and medicine by applying rigorous statistical analysis to what seemed a surprisingly sloppy field. "I assumed that everything we physicians did was basically right, but now I was going to help verify it," he says. "All we'd have to do was systematically review the evidence, trust what it told us, and then everything would be perfect."
It didn't turn out that way. In poring over medical journals, he was struck by how many findings of all types were refuted by later findings. Of course, medical-science "never minds" are hardly secret. And they sometimes make headlines, as when in recent years large studies or growing consensuses of researchers concluded that mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; or when widely prescribed antidepressants such as Prozac, Zoloft, and Paxil were revealed to be no more effective than a placebo for most cases of depression; or when we learned that staying out of the sun entirely can actually increase cancer risks; or when we were told that the advice to drink lots of water during intense exercise was potentially fatal; or when, last April, we were informed that taking fish oil, exercising, and doing puzzles doesn't really help fend off Alzheimer's disease, as long claimed. Peer-reviewed studies have come to opposite conclusions on whether using cell phones can cause brain cancer, whether sleeping more than eight hours a night is healthful or dangerous, whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.
But beyond the headlines, Ioannidis was shocked at the range and reach of the reversals he was seeing in everyday medical research. Randomized controlled trials, which compare how one group responds to a treatment against how an identical group fares without the treatment, had long been considered nearly unshakable evidence, but they, too, ended up being wrong some of the time. "I realized even our gold-standard research had a lot of problems," he says. Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.
This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. "The studies were biased," he says. "Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there." Researchers headed into their studies wanting certain results, and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it's easy to manipulate results, even unintentionally or unconsciously. "At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded," says Ioannidis. "There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded."
Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that's making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly proves it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises; after all, simply re-proving someone else's results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.
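(Editor's aside, not from the article: the "five teams" scenario above is easy to check with a toy simulation. Every number below is an illustrative assumption I've chosen, not a figure from the article or from Ioannidis's work.)

```python
# Minimal sketch: a toy Monte Carlo model of the "five teams" selection effect.
# All parameter values are assumptions for illustration only.
import random

random.seed(0)

N_THEORIES = 10_000   # hypothetical eye-catching theories being tested
P_TRUE = 0.10         # assume only 10% of such theories are actually true
ALPHA = 0.05          # chance a single study "confirms" a false theory
POWER = 0.80          # chance a single study confirms a true theory
TEAMS = 5             # independent teams testing each theory

published_true = published_false = 0
for _ in range(N_THEORIES):
    is_true = random.random() < P_TRUE
    hit_prob = POWER if is_true else ALPHA
    # The theory makes the journals if at least one team reports a positive result.
    if any(random.random() < hit_prob for _ in range(TEAMS)):
        if is_true:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"positive findings that get published: {total}")
print(f"...of which false: {published_false} ({published_false / total:.0%})")
```

Under these made-up numbers, roughly two-thirds of the positive findings that reach publication are false, even though each individual study followed conventional statistical standards; that is the asymmetry the paragraph above describes.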
In the late 1990s, Ioannidis set up a base at the University of Ioannina. He pulled together his team, which remains largely intact today, and started chipping away at the problem in a series of papers that pointed out specific ways certain studies were getting misleading results. Other meta-researchers were also starting to spotlight disturbingly high rates of error in the medical literature. But Ioannidis wanted to get the big picture across, and to do so with solid data, clear reasoning, and good statistical analysis. The project dragged on, until finally he retreated to the tiny island of Sikinos in the Aegean Sea, where he drew inspiration from the relatively primitive surroundings and the intellectual traditions they recalled. "A pervasive theme of ancient Greek literature is that you need to pursue the truth, no matter what the truth might be," he says. In 2005, he unleashed two papers that challenged the foundations of medical research.
He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how interesting the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you're attracted to ideas that have a good chance of being wrong, and if you're motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you'll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process (in which journals ask researchers to help decide which studies to publish) to suppress opposing views. "You can question some of the details of John's calculations, but it's hard to argue that the essential ideas aren't absolutely correct," says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.
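(Editor's aside: the paper in question is "Why Most Published Research Findings Are False," and the core of its argument can be stated in one line. In simplified form, leaving out the paper's extra terms for bias and for multiple competing teams, the probability that a statistically significant finding reflects a true relationship is roughly

$$ \mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha} $$

where R is the pre-study odds that the hypothesis is true, 1 − β is the study's power, and α is its false-positive rate. With conventional values (α = 0.05, power 0.8) and a long-shot hypothesis (R = 1/10), that works out to about 0.08/0.13, or roughly 62 percent; drop the prior odds to 1 in 100, as in exploratory research, and it falls to around 14 percent, and adding bias pushes it lower still. The example values are mine, chosen only to show how quickly the number degrades.)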
Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what's the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community's two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.