
Publication bias and the canonization of false facts

Discussion in 'Other Health News and Research' started by Kyla, Sep 21, 2016.

  1. Kyla

    Kyla ᴀɴɴɪᴇ ɢꜱᴀᴍᴩᴇʟ

    Messages:
    721
    Likes:
    4,221
    Canada
    https://arxiv.org/abs/1609.00494

    or .pdf file here:
    http://arxiv.org/pdf/1609.00494.pdf

     
    JohnCB, Valentijn, Jennifer J and 4 others like this.
  2. alex3619

    alex3619 Senior Member

    Messages:
    12,249
    Likes:
    33,556
    Logan, Queensland, Australia
Anyone who looks at medical science (especially psychiatry) or economics over time sees the accumulation of false facts in progress. Another word for this accumulation of facts might be dogma. Dogma is not necessarily unsound, but it's hard to challenge, even with objective facts.
     
  3. In other words..."conventional wisdom"

    I regard blindly following "accepted wisdom" as safe and sound as using a buzzsaw to give yer nuts a Mohawk haircut because that's what "everyone says you should do"...!
    :wide-eyed: :D
     
    Luther Blissett likes this.
  4. trishrhymes

    trishrhymes Senior Member

    Messages:
    2,024
    Likes:
    16,209
    UK
    Totally agree that publication bias is a real problem, especially in 'sciences' like economics, psychology and sociology where human behaviour is involved, and even more so where the outcome measures are questionnaire-based and can be influenced by the way the experiment is conducted.

    This is compounded in the PACE case by many other factors beyond publication bias, as we've witnessed: selective reporting by researchers who highlight only the outcomes that suit their agenda (e.g. hiding the inconvenient step-test results), fraudulent weakening of outcome measures, exaggeration of effect at each step from abstract to press release to journalist to headline writer, production of many papers over a period of years from the same trial, each with a fanfare of publicity, etc.

    But you know all this already, there's no need to write it down yet again. Guess I'm just letting off steam! Better than exploding.
     
    Luther Blissett and Woolie like this.
  5. Woolie

    Woolie Senior Member

    Messages:
    1,749
    Likes:
    13,011
    Thanks for posting, @Kyla. I didn't understand a lot of the modelling, but I read the discussion and found it really interesting.

    Their main point was pretty well summarised in the abstract you posted: if science wants to avoid making false claims, it needs to stop selectively publishing positive findings. More coverage needs to be given to negative results.

    But there were a couple of other interesting points too. One was that the way we set up a research question matters a lot. Questions that ask whether there was or wasn't a difference between two groups or conditions are the most likely to promote false claims because differences will get published but studies finding no differences will not. But if a study is set up so that there are more than just these two possibilities, it might be a little less vulnerable (for example, which of two different treatments works best or whether there is no difference - here a positive result in either direction is equally publishable).
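    The mechanism described above can be sketched in a few lines of Python. To be clear, this is a toy version in the spirit of the paper's model, not the authors' actual code: the error rates, publication probabilities and belief thresholds below are illustrative choices, not values taken from the paper.

    ```python
    import random

    def canonized(claim_true, alpha=0.05, power=0.8, p_pub_neg=0.2,
                  prior=0.5, upper=0.99, lower=0.01, rng=random):
        """Simulate one claim's fate under publication bias.

        Experiments on a true claim come out positive with probability
        `power`; on a false claim, with probability `alpha` (the false-
        positive rate). Positive results are always published; negative
        results are published only with probability `p_pub_neg`. The
        community updates its belief with Bayes' rule on the *published*
        record, naively assuming no bias, and canonizes the claim as fact
        once belief exceeds `upper` (or rejects it below `lower`).
        """
        belief = prior
        while lower < belief < upper:
            positive = rng.random() < (power if claim_true else alpha)
            if not positive and rng.random() >= p_pub_neg:
                continue  # negative result stays in the file drawer
            if positive:
                like_true, like_false = power, alpha
            else:
                like_true, like_false = 1 - power, 1 - alpha
            belief = (like_true * belief) / (
                like_true * belief + like_false * (1 - belief))
        return belief >= upper

    def canonization_rate(p_pub_neg, trials=2000, seed=42):
        """Fraction of *false* claims that end up canonized as fact."""
        rng = random.Random(seed)
        return sum(canonized(False, p_pub_neg=p_pub_neg, rng=rng)
                   for _ in range(trials)) / trials

    rate_biased = canonization_rate(p_pub_neg=0.05)  # negatives rarely published
    rate_fair = canonization_rate(p_pub_neg=0.5)     # negatives often published
    print(f"false claims canonized: {rate_biased:.0%} biased "
          f"vs {rate_fair:.0%} fairer")
    ```

    With these illustrative numbers, most false claims get canonized when negative results are almost never published, and almost none do when negatives appear half the time, which is the qualitative point of the paper.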

    Of course, the article doesn't model things like researcher allegiance effects, confirmation bias and other biases that can also affect study outcomes, though the authors do acknowledge several of these limitations.
     
  6. Woolie

    Woolie Senior Member

    Messages:
    1,749
    Likes:
    13,011
    Oh, crossed with you @trishrhymes! We've made some of the same points.
     
    Luther Blissett and trishrhymes like this.
