https://aeon.co/essays/it-s-time-for-science-to-abandon-the-term-statistically-significant
Related thread:
http://forums.phoenixrising.me/index.php?threads/scientific-method-statistical-errors-nature-2014-on-problems-with-p-values-and-ways-forward.39613/
The problem with p-values
Academic psychology and medical testing are both dogged by unreliability. The reason is clear: we got probability wrong
The aim of science is to establish facts, as accurately as possible. It is therefore crucially important to determine whether an observed phenomenon is real, or whether it’s the result of pure chance. If you declare that you’ve discovered something when in fact it’s just random, that’s called a false discovery or a false positive. And false positives are alarmingly common in some areas of medical science.
In 2005, the epidemiologist John Ioannidis at Stanford caused a storm when he wrote the paper ‘Why Most Published Research Findings Are False’, focusing on results in certain areas of biomedicine. He’s been vindicated by subsequent investigations. For example, a recent article found that repeating 100 different results in experimental psychology confirmed the original conclusions in only 38 per cent of cases. It’s probably at least as bad for brain-imaging studies and cognitive neuroscience. How can this happen?
[See rest of article at above link.]
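As a rough illustration of the excerpt's point (not taken from the essay itself), the false-positive machinery can be simulated directly: run many "experiments" in which nothing at all is going on, test each at the conventional p < 0.05 threshold, and see how often chance alone produces a "discovery". This sketch uses a fair coin as the null world and an exact two-sided binomial p-value; all names and parameters are my own assumptions.

```python
import random
from math import comb

N = 100          # coin flips per "experiment"
NULL_P = 0.5     # the null hypothesis is true: the coin really is fair

# Exact two-sided p-value for each possible head count under the null:
# the probability of a result at least as far from N/2 as the one observed.
pval = {
    h: sum(comb(N, i) for i in range(N + 1)
           if abs(i - N / 2) >= abs(h - N / 2)) / 2 ** N
    for h in range(N + 1)
}

random.seed(1)
trials = 10_000
# Count how many null experiments nevertheless reach "significance".
hits = sum(
    pval[sum(random.random() < NULL_P for _ in range(N))] < 0.05
    for _ in range(trials)
)
print(f"false-positive rate: {hits / trials:.3f}")
```

The printed rate sits a little below 0.05 (the binomial's discreteness makes the achievable threshold slightly conservative): even with nothing to find, roughly one experiment in twenty or thirty clears the significance bar by chance, which is the raw material for the false discoveries the essay describes.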