Discussion in 'Other Health News and Research' started by Kyla, Sep 21, 2016.
Anyone who looks at medical science, especially psychiatry, or economics, and does so over time, sees the accumulation of false facts in progress. Another word for this accumulation of facts might be dogma. Dogma is not necessarily unsound, but it's hard to challenge, even with objective facts.
In other words..."conventional wisdom"
I regard blindly following "accepted wisdom" as safe and sound as using a buzzsaw to give yer nuts a Mohawk haircut because that's what "everyone says you should do"...!
Totally agree that publication bias is a real problem, especially in the 'sciences' like economics, psychology and sociology where human behaviour is involved, and even more so where the outcome measures are questionnaire based and can be influenced by the way the experiment is conducted.
This is compounded in the PACE case by many other factors beyond publication bias, as we've witnessed: selective reporting by researchers who highlight only the outcomes that suit their agenda (e.g. hiding the inconvenient step test results), fraudulent weakening of outcome measures by researchers, exaggeration of the effect at each successive step from abstract to press release to journalist to headline writer, production of many papers over a period of years from the same trial, each with a fanfare of publicity, etc.
But you know all this already, there's no need to write it down yet again. Guess I'm just letting off steam! Better than exploding.
Thanks for posting, @Kyla. I didn't understand a lot of the modelling, but I read the discussion and found it really interesting.
Their main point was pretty well summarised in the abstract you posted: that if science wants to avoid making false claims, it needs to stop selectively publishing positive findings. More coverage needs to be given to negative results.
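You can see why this matters with a toy simulation. The sketch below is not the paper's actual model; all the parameter values (prior probability of a true effect, statistical power, significance threshold) are illustrative assumptions. It just shows that when only "positive" (significant) results get published, a large share of the published literature consists of false claims, and that also publishing negative results shrinks that share dramatically.

```python
import random

random.seed(1)

def run_literature(n_studies, prior_true=0.1, power=0.8, alpha=0.05,
                   publish_negatives=False):
    """Toy model of a literature. Each study tests one hypothesis.
    Only significant ('positive') results are published, unless
    publish_negatives is True. All parameters are illustrative
    assumptions, not values from the paper being discussed."""
    published = []
    for _ in range(n_studies):
        is_true = random.random() < prior_true
        # A true effect is detected with probability `power`;
        # a null effect comes out 'significant' with probability `alpha`.
        significant = random.random() < (power if is_true else alpha)
        if significant or publish_negatives:
            published.append((is_true, significant))
    false_positives = sum(1 for t, s in published if s and not t)
    # Share of published results that are false positive claims.
    return false_positives / len(published)

# Positives-only publishing: roughly a third of published claims are false.
print(round(run_literature(200_000), 2))
# Publishing negatives too: false claims become a small fraction of the record.
print(round(run_literature(200_000, publish_negatives=True), 2))
```

Under these assumptions, the positives-only literature is wrong about a third of the time even though the test's nominal false-positive rate is 5%, because the many null effects each get a 5% chance of slipping through while the rarer true effects supply only a limited number of genuine hits.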
But there were a couple of other interesting points too. One was that the way we set up a research question matters a lot. Questions that ask whether there was or wasn't a difference between two groups or conditions are the most likely to promote false claims because differences will get published but studies finding no differences will not. But if a study is set up so that there are more than just these two possibilities, it might be a little less vulnerable (for example, which of two different treatments works best or whether there is no difference - here a positive result in either direction is equally publishable).
Of course, the article doesn't consider things like researcher allegiance effects, confirmation biases and other biases that can also affect study outcomes. But they acknowledge a lot of these points.
Oh, crossed with you @trishrhymes! We've made some of the same points.