
"The academic system encourages the publication of a lot of junk research."

Discussion in 'Other Health News and Research' started by Waverunner, Apr 9, 2012.

  1. Waverunner

    Waverunner Senior Member

    A recent commentary in Nature claimed that a lot of cancer research is junk. The results often cannot be replicated but still get published.

    http://reason.com/archives/2012/04/03/can-most-cancer-research-be-trusted#commentcontainer

    Addressing the problem of "academic risk" in biomedical research

    Ronald Bailey | April 3, 2012

    When a cancer study is published in a prestigious peer-reviewed journal, the implication is that the findings are robust, replicable, and point the way toward eventual treatments. Consequently, researchers scour their colleagues' work for clues about promising avenues to explore. Doctors pore over the pages, dreaming of new therapies coming down the pike. Which makes a new finding that nine out of 10 preclinical peer-reviewed cancer research studies cannot be replicated all the more shocking and discouraging.

    Last week, the scientific journal Nature published a disturbing commentary claiming that in the area of preclinical research (which involves experiments done on rodents or cells in petri dishes with the goal of identifying possible targets for new treatments in people), independent researchers doing the same experiment cannot get the same result as reported in the scientific literature.

    The commentary was written by Glenn Begley, former vice president for oncology research at the pharmaceutical company Amgen, and Lee Ellis, a researcher at the M.D. Anderson Cancer Center. They explain that researchers at Amgen tried to confirm academic research findings from published scientific studies in search of new targets for cancer therapeutics. Over 10 years, Amgen researchers could reproduce the results from only six out of 53 landmark papers. Begley and Ellis call this "a shocking result." It is.

    The two note that they are not alone in finding academic biomedical research to be sketchy. Three researchers at Bayer Healthcare published an article [PDF] in the September 2011 Nature Reviews: Drug Discovery in which they assert that "validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced." How bad was the Bayer researchers' disillusionment with academic lab results? They report that of 67 projects analyzed, "only in 20 to 25 percent were the relevant published data completely in line with our in-house findings."

    Perhaps results from high-end journals have a better record? Not so, say the Bayer scientists: "Surprisingly, even publications in prestigious journals or from several independent groups did not ensure reproducibility. Indeed, our analysis revealed that the reproducibility of published data did not significantly correlate with journal impact factors, the number of publications on the respective target or the number of independent groups that authored the publications."

    So what is going wrong? Neither study suggests that the main problem is fraud. Instead they conclude that the scramble for funding and fame (which are inextricably linked) has resulted in increasingly lax standards for reporting research results. For example, Begley met with the lead scientist of one promising study to discuss the problems Amgen was having in reproducing the study's results.

    "We went through the paper line by line, figure by figure," said Begley to Reuters. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

    Sadly, Begley explains in an email that they cannot reveal which studies are flawed, due to the insistence by many researchers on confidentiality agreements before they would work with the Amgen scientists. So much for transparency.

    In 2005, epidemiologist John Ioannidis explained "Why Most Published Research Findings Are False" in the online journal PLoS Medicine. In that study Ioannidis noted that reported studies are less likely to be true when they are small, the postulated effect is weak, research designs and endpoints are flexible, financial and nonfinancial conflicts of interest are present, and competition in the field is fierce.
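Ioannidis's point can be made concrete with a little arithmetic. The sketch below (illustrative numbers only, not taken from his paper) computes the positive predictive value of a "significant" finding from the prior plausibility of the hypothesis, the study's power, and the significance threshold:

```python
# Illustration of the Ioannidis argument: the chance that a "significant"
# result reflects a true effect (positive predictive value, PPV) depends
# on the prior probability that tested hypotheses are true, statistical
# power, and the significance threshold alpha. All numbers are hypothetical.

def ppv(prior, power, alpha):
    """Fraction of significant results that reflect true effects."""
    true_pos = prior * power          # true hypotheses correctly detected
    false_pos = (1 - prior) * alpha   # false hypotheses passing by chance
    return true_pos / (true_pos + false_pos)

# A well-powered test of a plausible hypothesis:
print(round(ppv(prior=0.5, power=0.8, alpha=0.05), 2))  # 0.94

# A small, underpowered study in a speculative area:
print(round(ppv(prior=0.1, power=0.2, alpha=0.05), 2))  # 0.31
```

With low power and speculative hypotheses, most "positive" findings are false, exactly the conditions Ioannidis lists.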

    "The academic system encourages the publication of a lot of junk research," Begley and Ellis agree. "To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication," they note. And journal editors and grant reviewers make it worse by pushing researchers to produce a scientific finding that is simple, clear and complete: "a perfect story." This pressure induces some researchers to massage data to fit an underlying hypothesis, or even to suppress negative data that contradict the favored hypothesis. In addition, peer review is broken. If an article is rejected by one journal, very often researchers will ignore the comments of reviewers, slap on another cover letter and submit to another journal. The publication process becomes a lottery, not a way to filter out misinformation.

    Given all the brouhaha [PDF] over how financial interests are allegedly distorting pharmaceutical company research, it's more than a bit ironic that it is pharmaceutical company scientists who are calling academic researchers to account. Back in 2004, an American Medical Association report [PDF] on conflicts of interest noted that reviews comparing academic and industry research found that "most authors have concluded that industry-funded studies published in peer-reviewed journals are of equivalent or higher quality than non-industry funded clinical trials." In an email, Begley, who was an academic researcher for 25 years before joining Amgen, agrees: "My impression, I don't have hard data, is that studies from large companies [are] of higher quality. Those companies are going to lay down millions of dollars if a study is positive. And they don't want to terminate a program prematurely, so a negative study is more likely to be real."

    These results strongly suggest that the current biomedical research and publication system is wasting scads of money and talent. What can be done to improve the situation? Perhaps, as some Nature online commenters have bitterly suggested, researchers should submit their work directly to Bayer and Amgen for peer review? In fact, some venture companies are hedging against academic risk when it comes to investing in biomedical startups by hiring contract research organizations to vet academic science.

    Barring the advent of drug company peer review, more transparency will help. Begley and Ellis recommend that preclinical researchers be required to present all findings regardless of the outcome; no more picking the best story. Funders and reviewers must recognize that negative data can be just as informative as positive. Universities and grant makers should recognize and reward great teaching and mentoring and rely less on publication as the chief promotion benchmark. In addition, funders should focus more attention on developing standardized tools and protocols for use in research rather than just hankering after the next big breakthrough.

    Researchers, funders, and editors should also consider the more radical proposals offered by Ioannidis and colleagues, including upfront registries of studies in which hypotheses and protocols are outlined in public. That way, if researchers decide later to fiddle with their protocols and results, at least others in the field can find out about it. Another option would be to make peer-review comments publicly available even for rejected studies. This would encourage researchers who want to resubmit to other journals to answer and fix the problems identified by reviewers. The most intriguing idea is to have drafts of papers deposited into a common public website where journal editors can scan through them, invite peer reviews, and make offers of publication.

    The chief argument for government funding of academic biomedical research is that it will produce the basic science upon which new therapies can be developed and commercialized by pharmaceutical companies. This ambition is reflected in the slogan on the website of the National Institutes of Health (NIH), which describes the agency as "the nation's medical research agency, supporting scientific studies that turn discovery into health." These new studies give the public and policymakers cause to wonder: just how much of the NIH's $30 billion annual budget ends up producing the moral equivalent of junk science?
  2. Esther12

    Esther12 Senior Member

    I was talking to a Prof about this recently, and they were explaining how stupid the current system for determining university funding is. A desire for 'impact' can mean that dishonest, attention-grabbing press releases are a good thing.

    It's tragic how tame and obviously beneficial most of their 'radical proposals' are.
  3. SilverbladeTE

    SilverbladeTE Senior Member

    Somewhere near Glasgow, Scotland
    Medical research should be completely separated from financial gain, because the mix causes terrible corruption and poses a massive threat to personal, national and even species safety.

    If you have to keep big business in medicine (which I am totally against), you could set up a funding system whereby business donates to a fund (see below), and the fund is distributed by an independent body to academic researchers (heck, you may even need to do blind random allocation to ensure complete separation and thus no corruption).
    When research is finalized, reviewed and checked, the companies are offered the rights to use the research and must bid for them; the money goes back to the fund.
    However, even then, you have to design the system so the scumbags won't secretly agree to modify and fix bids to reduce the money paid out (as happens in a lot of "low bid contract" systems in the public sector, blech).

    That's the only way I can see to an honest separation. As long as big business funds or owns the research directly, corruption and stark criminality are inevitable; see the Vioxx scandal.
  4. alex3619

    alex3619 Senior Member

    Logan, Queensland, Australia
    Hi SilverbladeTE, unfortunately it's not just money biasing the outcome. Prestige and promotion are just as bad. In addition, there are things fundamentally wrong with a lot of research; the way studies are conducted is not best science.

    I predicted the Vioxx issue about a year before it was announced, based on basic biochemistry. It was obvious that the stronger the anti-inflammatory targeting the COX molecule, the more it would shut down essential hormone synthesis. It's called essential for a reason. I was not sure what the impact would be, but I knew it was potentially lethal. This was obvious from the beginning. It had nothing to do directly with the target molecule, cyclo-oxygenase; it was about the secondary impacts of targeting a key hormone-producing pathway. For some reason that was considered someone else's problem. Systems biology, and an insistence on a systems-impact review in biomedical research, is required to begin to address this problem.

    This problem of presenting the most favourable views is one we know well in one non-drug intervention: psychobabble. It's not just drug companies and academic researchers doing this. At least drug companies have to go through a specified series of studies before they can release a drug. Psychobabblers don't.

    Bye, Alex
  5. Waverunner

    Waverunner Senior Member

    Great post, thank you, Alex. In your opinion, what could a solution look like? I'm never in favor of government intervention, but in this case I thought about randomly assigning unpublished study results to universities so that they replicate the study. The contracting party (the company/scientists who conducted the first study) would have to pay for the replication. This would make sure that positive as well as negative results get published, because financial incentives would no longer have any influence.
  6. alex3619

    alex3619 Senior Member

    Logan, Queensland, Australia
    Hi Waverunner, I don't think the improvement will come from within psychiatry or psychology, not unless non-vested professionals stand up and demand it ... and let's face it, they don't have a track record of doing that; it's sometimes considered an offence - bringing the profession into disrepute. I am presuming here that you mean: what would a good CBT/GET study require? If you meant what a good pharmaceutical study would require, that is a separate problem entirely and I am still thinking about it. Many of the issues for pharmaceutical intervention for CFS and ME are addressed in my general response here.

    What I think is required is not a consensus definition of ME or CFS, but an international consensus definition of the minimum requirements for a valid scientific study into ME or CFS. This would require, at the very least, consensus agreement from experts in fields with objective evidence - neurology, immunology, endocrinology, exercise physiology etc. It would include, at the very least, the best potential subgrouping protocols currently available. One I consider mandatory would be the test-retest exercise physiology study (e.g. as at Pacific Labs) for mild and moderate patients, but probably not severe patients. Very severe patients would not even be able to stand up for testing. A broad range of patient severity should be mandatory for any gold standard study - no more studies of super-mild patients who can easily turn up at remote locations.

    A second issue would be to investigate Komaroff's spectral coherence testing to rule out depression.

    As biochemical markers become more understood a specific set of those should also be chosen.

    Any study, even a psychiatric one, would then be required to subgroup patients into every known subgroup, and either analyze and publish the data themselves or provide the raw data for reworking by others. Every marker known or suspected to change over time in response to any intervention would have to be reported on, and the relevant markers should be pre-specified. Dropping actometers, or any other technique, part way through a study is not a valid methodology.

    As for outcomes, only objective functional capacity measures before and after the study are relevant as indicators of performance. At the moment the simplest way of doing that is actometers, I think. Something better might come along in time though, especially if costs and facilities for exercise capacity studies can be improved. Surveys that produce statistically significant results due to large numbers, while the actual change is quite tiny, do not cut it. Nor do improvements in exercise capacity that fail to demonstrate that the gains are not just about pushing for one test but extend to the entire range of activities in a patient's life.
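The "statistically significant but tiny" problem Alex describes is easy to illustrate. The sketch below uses hypothetical numbers (not from any actual trial) to show that a trivial 1-point improvement on a 0-100 fatigue scale clears the conventional |z| > 1.96 threshold once each arm has 2,000 patients:

```python
import math

# With enough participants, a clinically meaningless difference still
# reaches p < 0.05. All numbers below are hypothetical and chosen only
# to illustrate the arithmetic.

def z_statistic(mean_diff, sd, n):
    """Two-sample z statistic for equal-sized groups with a common SD."""
    return mean_diff / (sd * math.sqrt(2.0 / n))

# A 1-point change on a 0-100 scale (SD 15) is trivial, but with
# 2000 patients per arm it is "significant":
print(round(z_statistic(mean_diff=1.0, sd=15.0, n=2000), 2))  # 2.11
```

The same 1-point difference with 50 patients per arm gives z of about 0.33, nowhere near significance, which is why headline p-values from very large survey-based trials say little about whether the change matters to patients.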

    Another requirement would be mandatory monitoring for known side effects after a patient drops out of the main study (allowing only that patients can opt out of this; the researchers can't) - patients don't always improve just because exercise scores go up, and definitely not because they tick a box on a form after being coached to do so.

    These changes will not come from psychiatry or government, at least not initially. It's up to our researchers to initiate this, then bring governments on board and make it an international standard. Any study then produced, including psychiatric studies, that does not meet the agreed international standard can be severely criticized.

    There have been serious suggestions that experimental design should involve people from outside the field who have a capacity for objective and independent assessment. For an exercise-based study, for example (and this includes GET), exercise physiologists who know the ME and CFS research on exercise physiology would be required on the study. Psychiatrists who think they are exercise experts are part of the problem.

    These are just some initial thoughts. I am actually trying to analyze this problem from some other angles as well. I might blog some of this over time, but much of it will be for a book I am writing on this subject. This problem goes beyond CBT/GET (which I am now calling something quite different, I may blog about this later) and includes the Biopsychosocial movement and the push toward Evidence Based Medicine. You don't fix a broken or underperforming system by pushing another broken or underperforming system as a replacement.

    My thought on how to fix EBM is that a radical overhaul is required. This will be resoundingly rejected by most EBM adherents, but there are fundamental things wrong with it. I intend to write a blog about this eventually. EBM is fixable, but the current main players need to lose their power and share it with rival international organizations. Too heavily centralized and biased power is not a good thing.

    Bye, Alex
  7. Waverunner

    Waverunner Senior Member

    Yes, Alex; moreover, I have been thinking about how to improve the whole publishing process. It's not objective enough in my eyes.
    Btw, I'm looking forward to your book.
  8. Esther12

    Esther12 Senior Member

    One little point.

    So both studies suggest that the main problem is fraud then?

    Some researchers seem to act as if the immoral and fraudulent acts of researchers aren't REAL fraud, not like what the evil business-people do. It's just misrepresenting their own work so that they get more funding and fame - you know: good, happy fraud.
