Indeed, very interesting, Sean.
I agree with Rich that the "pushing and shoving" we see is something the general public normally isn't aware of, and that it's quite a shock to get closer to the scientific process and discover how messy and political it can be. I think a lot of people fall into the trap of believing this is somehow unique to ME and doesn't apply in other areas of science as well - which leads to all kinds of paranoia and confusion.
Some quotes in this article that I really liked:
The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier.
Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
(Regardless of any corporate or financial incentives in relation to psychological research into ME, the point stands: these sorts of distortions are widespread and natural, even in the absence of malice or conflict of interest.)
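To see how purely statistical this distortion is, here's a quick simulation (my own hypothetical sketch, not from the article): ten thousand experiments testing a perfectly fair coin, so the true effect is always zero. Roughly five per cent still come out "significant" at p < 0.05 - and if journals accepted only those, the published literature would consist entirely of false positives, which is exactly the selection mechanism behind Sterling's 97% figure.

```python
import math
import random

random.seed(1)

def coin_experiment(n=100):
    """Flip a fair coin n times; return a two-sided p-value for the
    null hypothesis 'the coin is fair' (normal approximation)."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads - 0.5 * n) / math.sqrt(0.25 * n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p_values = [coin_experiment() for _ in range(10_000)]
# A journal that publishes only 'significant' results keeps these:
published = [p for p in p_values if p < 0.05]

print(f"experiments run:       {len(p_values)}")
print(f"'significant' results: {len(published)}  (roughly 5%, all false positives)")
# Every *published* study 'found' an effect, even though none exists -
# selection alone produces a literature of apparent successes.
```

The point of the sketch: no malice, no fraud, no corporate incentive is needed; filtering on p < 0.05 before publication is enough to distort the record.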
Even better...
“I’ve learned the hard way to be exceedingly careful,” Schooler says. “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
Absolutely right, spot on as to how it should work - it's always so exciting to read somebody expounding this sort of idea in print. It should be a requirement for publication that the details of an experiment be published publicly before the experiment takes place, and that the results must then be published in full when the experiment is completed or aborted. Studies that don't satisfy these criteria should be inadmissible as scientific evidence. That way, experiments can't be performed 'speculatively' and their results published only if they suit the interests of the funders - as is common practice today.
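To make the rule concrete, here's a minimal sketch of how an open registry could encode those two admissibility criteria (all names here are hypothetical - this isn't any existing registry's schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class StudyRecord:
    """One entry in a hypothetical open pre-registration database."""
    protocol: str                  # what's tested, sample size, level of proof
    registered: date               # when the plan was made public
    started: date                  # when data collection began
    completed: Optional[date] = None
    results: Optional[str] = None  # full results, filed even if negative

def admissible(study: StudyRecord) -> bool:
    """A study counts as scientific evidence only if its protocol was
    public before it started, and - once finished or aborted - its
    results were filed in full."""
    pre_registered = study.registered <= study.started
    fully_reported = study.completed is None or study.results is not None
    return pre_registered and fully_reported
```

With a check like this, a study registered after data collection began, or one that finished without filing results, simply doesn't qualify - which is the whole point of the scheme.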
Modern science is in crisis. The findings of scientists no longer command the widespread respect and trust they used to, because people are increasingly aware that the scientific process has been corrupted by corporate interests undermining academic freedom - a phenomenon which is probably only 20-30 years old, at least in its modern form. Huge numbers of rationalists and firm believers in the scientific method have lost faith in the practical reality of modern scientific practice, and many good researchers have left the world of research in disgust.
At the same time, this corruption of academia is approaching crisis point just as the internet reaches a stage of maturity that offers unprecedented opportunities to revolutionise the practical application of the scientific method. The publication and peer-review processes are capable of radical reform, and new paradigms can now be imagined - all that is required is some recognition of the scale and seriousness of the problem, and the vision to imagine that science could take place in a better way.
Openness, transparency, and the democratisation of access to research findings are the only ways I can see to resolve this crisis. The nature of my belief in science itself has never changed, but these simple principles are certainly the only things that can restore my own faith in the scientific process as it exists in practice today.