Publication bias and the canonization of false facts

Kyla

https://arxiv.org/abs/1609.00494

or .pdf file here:
http://arxiv.org/pdf/1609.00494.pdf

Publication bias and the canonization of false facts

Silas B. Nissen, Tali Magidson, Kevin Gross, Carl T. Bergstrom

(Submitted on 2 Sep 2016)

In the process of scientific inquiry, certain claims accumulate enough support to be established as facts. Unfortunately, not every claim accorded the status of fact turns out to be true. In this paper, we model the dynamic process by which claims are canonized as fact through repeated experimental confirmation. The community's confidence in a claim constitutes a Markov process: each successive published result shifts the degree of belief, until sufficient evidence accumulates to accept the claim as fact or to reject it as false. In our model, publication bias --- in which positive results are published preferentially over negative ones --- influences the distribution of published results. We find that when readers do not know the degree of publication bias and thus cannot condition on it, false claims often can be canonized as facts. Unless a sufficient fraction of negative results are published, the scientific process will do a poor job at discriminating false from true claims. This problem is exacerbated when scientists engage in p-hacking, data dredging, and other behaviors that increase the rate at which false positives are published. If negative results become easier to publish as a claim approaches acceptance as a fact, however, true and false claims can be more readily distinguished. To the degree that the model accurately represents current scholarly practice, there will be serious concern about the validity of purported facts in some areas of scientific research.
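For anyone who wants a feel for the dynamic the abstract describes, here is a rough toy simulation of the idea. This is just my own sketch - the parameter values, thresholds and function names are illustrative choices of mine, not anything taken from the paper or its code:

```python
import random

# Illustrative parameters - my own choices, not the paper's
ALPHA = 0.05       # false-positive rate of a single experiment
BETA = 0.20        # false-negative rate (i.e. power = 0.80)
P_PUB_NEG = 0.10   # probability that a negative result gets published
PRIOR = 0.50       # community's initial belief that the claim is true
ACCEPT, REJECT = 0.99, 0.01   # belief thresholds: canonize / discard

def simulate_claim(is_true: bool) -> str:
    """Follow one claim until the community accepts or rejects it.

    Readers update belief by Bayes' rule as if every result were
    published, i.e. they cannot condition on the publication bias.
    """
    belief = PRIOR
    while REJECT < belief < ACCEPT:
        # Run an experiment; the outcome depends on whether the claim is true
        positive = random.random() < ((1 - BETA) if is_true else ALPHA)
        # Publication bias: negatives only appear with probability P_PUB_NEG
        if not positive and random.random() > P_PUB_NEG:
            continue  # result stays in the file drawer; belief unchanged
        # Naive Bayesian update on the published result
        if positive:
            belief = belief * (1 - BETA) / (belief * (1 - BETA) + (1 - belief) * ALPHA)
        else:
            belief = belief * BETA / (belief * BETA + (1 - belief) * (1 - ALPHA))
    return "canonized" if belief >= ACCEPT else "rejected"

trials = 10_000
false_canonized = sum(simulate_claim(is_true=False) == "canonized" for _ in range(trials))
print(f"false claims canonized: {false_canonized / trials:.1%}")
```

In this toy version, once most negative results go unpublished, even a modest false-positive rate lets a worrying share of false claims drift up to the acceptance threshold; raising P_PUB_NEG cuts that share sharply, which is essentially the paper's point.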

 

trishrhymes

Senior Member
Totally agree that publication bias is a real problem, especially in the 'sciences' like economics, psychology and sociology where human behaviour is involved, and even more so where the outcome measures are questionnaire-based and can be influenced by the way the experiment is conducted.

This is compounded in the PACE case by many other factors beyond publication bias, as we've witnessed: selective reporting, with the researchers only highlighting outcomes that suit their agenda (e.g. hiding the inconvenient step-test results); fraudulent weakening of the outcome measures; exaggeration of the effect at each successive step from abstract to press release to journalist to headline writer; production of many papers over a period of years from the same trial, each with a fanfare of publicity; and so on.

But you know all this already, there's no need to write it down yet again. Guess I'm just letting off steam! Better than exploding.
 

Woolie

Senior Member
Thanks for posting, @Kyla. I didn't understand a lot of the modelling, but I read the discussion and found it really interesting.

Their main point was pretty well summarised in the abstract you posted: that if science wants to avoid making false claims, it needs to stop selectively publishing positive findings. More coverage needs to be given to negative results.

But there were a couple of other interesting points too. One was that the way we set up a research question matters a lot. Questions that simply ask whether there was or wasn't a difference between two groups or conditions are the most likely to promote false claims, because the studies finding a difference get published while the studies finding none do not. But if a study is set up so that there are more than just these two possibilities, it may be a little less vulnerable - for example, asking which of two different treatments works best, or whether there is no difference between them, so that a positive result in either direction is equally publishable.
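To make that second point concrete, here is a small variation on the same toy sketch as above (again with made-up numbers of my own, not the authors'): treat the broader framing as simply raising the chance that a null or opposite-direction result still gets published, and compare how often a false claim ends up accepted.

```python
import random

# Same toy update rule as the earlier sketch; only the publication
# probability for "negative" results changes between the two framings.
ALPHA, BETA = 0.05, 0.20
PRIOR, ACCEPT, REJECT = 0.50, 0.99, 0.01

def false_claim_canonization_rate(p_pub_neg: float, trials: int = 5_000) -> float:
    """Fraction of false claims the community ends up accepting,
    when negative results are published with probability p_pub_neg."""
    canonized = 0
    for _ in range(trials):
        belief = PRIOR
        while REJECT < belief < ACCEPT:
            positive = random.random() < ALPHA           # the claim is false
            if not positive and random.random() > p_pub_neg:
                continue                                  # unpublished
            if positive:
                belief = belief * (1 - BETA) / (belief * (1 - BETA) + (1 - belief) * ALPHA)
            else:
                belief = belief * BETA / (belief * BETA + (1 - belief) * (1 - ALPHA))
        canonized += belief >= ACCEPT
    return canonized / trials

# "Difference vs no difference" framing: null results rarely published
print("two-outcome framing  :", false_claim_canonization_rate(p_pub_neg=0.1))
# "Which treatment is better, or neither": most outcomes publishable
print("multi-outcome framing:", false_claim_canonization_rate(p_pub_neg=0.9))
```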

Of course, the article doesn't consider things like researcher allegiance effects, confirmation bias and other biases that can also affect study outcomes, though the authors do acknowledge a number of these limitations.
 