
Misleading P Values Increasing, Not Decreasing, in Medical Journals

barbc56

Senior Member
Messages
3,657
Nothing showed up for this particular study, but if I missed it, let me know.

This is out of Stanford University. The lead author is John Ioannidis. I was surprised that the use of the p value has increased rather than decreased. Hopefully, scientists such as Ioannidis will reverse this trend.
A review of p-values in the biomedical literature from 1990 to 2015 shows that these widely misunderstood statistics are being used increasingly, instead of better metrics of effect size or uncertainty.
The widespread misuse of p-values — often creating the illusion of credible research — has become an embarrassment to several academic fields, including psychology and biomedicine, especially since Ioannidis began publishing critiques of the way modern research is conducted.
My bold.
“The p-value does not tell you whether something is true. If you get a p-value of 0.01, it doesn’t mean you have a 1 percent chance of something not being true,” Ioannidis added. “A p-value of 0.01 could mean the result is 20 percent likely to be true, 80 percent likely to be true or 0.1 percent likely to be true — all with the same p-value. The p-value alone doesn’t tell you how true your result is.”

For an actual estimate of how likely a result is to be true or false, said Ioannidis, researchers should instead use false-discovery rates or Bayes factor calculations.

https://med.stanford.edu/news/all-n...values-showing-up-more-often-in-journals.html

My knowledge of Bayesian statistics is very minimal. I did find the following but I'm not sure how helpful it is. If anyone has a better source, feel free to post it. Thanks.

https://en.m.wikipedia.org/wiki/Bayesian_probability
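
For anyone curious about the arithmetic behind Ioannidis's point that the same p-value can go with very different chances of a result being true, here is a minimal sketch in Python. The prior and power figures are illustrative assumptions of mine, not numbers from the paper:

```python
# Sketch: how likely a "significant" finding is to be true depends
# heavily on the prior plausibility of the hypothesis, even at a
# fixed significance threshold. All numbers are illustrative.

def ppv(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | p < alpha): the positive predictive value."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior in (0.50, 0.10, 0.01):
    print(f"prior = {prior:.2f} -> P(true | significant) = {ppv(prior):.2f}")

# prior = 0.50 -> P(true | significant) = 0.94
# prior = 0.10 -> P(true | significant) = 0.64
# prior = 0.01 -> P(true | significant) = 0.14
```

This is the same logic behind the false-discovery rates the article recommends: the p-value alone cannot distinguish these cases.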
 

Mel9

Senior Member
Messages
995
Location
NSW Australia


The p value does not tell you 'something is true'.
Fair enough.
But it is usually reliable when used correctly in an analysis of variance (ANOVA) on results from correctly replicated experiments. The lower the p value, the better.
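
For concreteness, here is a minimal sketch of the kind of one-way ANOVA being described, using scipy and made-up replicate measurements:

```python
# One-way ANOVA on made-up replicate data (illustrative only).
from scipy import stats

control   = [5.1, 4.9, 5.3, 5.0, 5.2]
treatment = [5.9, 6.1, 5.8, 6.2, 6.0]

f_stat, p_value = stats.f_oneway(control, treatment)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p says the groups are unlikely to share one underlying mean,
# given proper replication and randomisation. It still says nothing
# about how large or biologically important the difference is.
```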
 

barbc56

Senior Member
Messages
3,657
@Mel9
Definitely, it does tell you something, and yes, the lower the p value the better. What Ioannidis is saying is that using the p value has shortcomings. When I was in graduate school the p value was sacrosanct; recently, this has changed. Edit: I see you are saying what is written below. As you point out, taking previous and replicated studies into consideration is important, and that is the crux of the matter. Somehow I misread that. It's still helpful information.

The article below explains the limitations of relying only on the p value more clearly, and I realized I was already familiar with some of the information. I don't know the actual statistical process used, but that's fine, as I just wanted to understand the rationale.
Pandolfi and Carreras correctly point out that this (reversing the logic of the p value) commits a formal logical fallacy, the fallacy of the transposed conditional. To illustrate this they give an excellent example: the probability of having red spots in a patient with measles is not the same as the probability of measles in someone who has red spots.

In other words, the p-value tells us the probability of the data given the null hypothesis, but what we really want to know is the probability of the hypothesis given the data. We can’t reverse the logic of p-values simply because we want to.
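
To put rough numbers on the measles example (all figures below are invented for illustration), here is the Bayes' theorem arithmetic in Python:

```python
# The transposed conditional in numbers. P(spots | measles) is high,
# but P(measles | spots) also depends on how common measles is.
# All figures are invented for illustration.

p_spots_given_measles = 0.95   # most measles patients have red spots
p_spots_given_other   = 0.05   # spots also occur without measles
p_measles             = 0.001  # measles is rare in this population

# Bayes' theorem: P(measles | spots) = P(spots | measles) P(measles) / P(spots)
p_spots = (p_spots_given_measles * p_measles
           + p_spots_given_other * (1 - p_measles))
p_measles_given_spots = p_spots_given_measles * p_measles / p_spots

print(f"P(spots | measles) = {p_spots_given_measles:.2f}")   # 0.95
print(f"P(measles | spots) = {p_measles_given_spots:.3f}")   # ~0.019
```

The two conditional probabilities differ by a factor of roughly fifty here, which is exactly the reversal the p value tempts us into making.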

The issue is the a priori question: how strong is the hypothesis in the first place? If it isn't strong to begin with, the study is not really saying anything, even with a low p value. The example I've always liked, which is certainly extreme, is a study comparing deaths from jumping out of an airplane with and without a parachute. The hypothesis is not a plausible open question, as we already know how parachutes work, so the conclusion adds nothing to the knowledge base.

This is my understanding so any errors are mine. Any feedback is welcome.
 

barbc56

Senior Member
Messages
3,657
@barbc56 Could you unpack that example a bit more? I am not connecting the dots.

Someone else may need to jump in here as I'm not a statistician.

It's similar to the GIGO (garbage in, garbage out) principle.

This says it better.
GIGO (garbage in, garbage out) is a concept common to computer science and mathematics: the quality of output is determined by the quality of the input. So, for example, if a mathematical equation is improperly stated, the answer is unlikely to be correct. Similarly, if incorrect data is input to a program, the output is unlikely to be informative.

In other words if you start with shit and then try to dress it up and call it something else by spraying it gold, it's still shit. :D

A good p value may be necessary, but it isn't always sufficient to say you've proven your theory.
 

Jonathan Edwards

"Gibberish"
Messages
5,256

People need to keep saying this but I doubt there will be any change until there is a total overhaul of the politics of science. At present nobody really cares about whether scientific papers are of value - just that they generate grants and citations.

The biggest problem I see is that the p value tells you nothing about whether or not the size of the difference makes it of any interest, or whether the consistency of the difference fits the hypothesis. If the hypothesis is that A is the cause of B then showing that 10% of patients with B have A in their tests refutes that because 90% don't. The fact that only 2% of controls had A and the difference was significant is of no interest.

What most of these analyses are supposed to do is find the likelihood that test and control results come from 'the same population' - which is usually the null hypothesis. A p of <0.05 means less than a one in 20 chance that the test samples are just part of the same population as controls. But in real life we know they are not - by definition they are a different population. And whatever way they were identified is more than likely going to bring along with it some telltale hint of a systematic difference from controls. So most of the time a p of <0.02 is likely to be showing up a difference due to some association with something irrelevant. It might be whether you did the test on Monday or Thursday and the buffer had gone off a bit.
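
To make the first point concrete, a quick sketch in Python with invented counts: 10% of patients versus 2% of controls having marker A is comfortably 'significant', even though the 90% of patients lacking A is what actually matters for the causal hypothesis:

```python
# Invented counts: 10/100 patients have marker A vs 2/100 controls.
from scipy.stats import fisher_exact

table = [[10, 90],   # patients: with A, without A
         [ 2, 98]]   # controls: with A, without A

odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p:.3f}")   # ~5.4, p ~ 0.03

# "Significant" at the 0.05 level -- yet only 10% of patients have A,
# so A can hardly be the cause of B. The p value never sees that.
```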

But if you look at the actual individual results you usually have a much more powerful way of telling whether or not the difference means something, because biological abnormalities lead to shifts in the way results are spread across a group. The human brain is very good at picking up shifts in shape - and quite good at ignoring the sort of shift of level without shift of shape you might get with the buffer going off. Which is why the much bigger problem these days is that almost everyone uses histograms or box plots instead of scattergrams that show the raw results. The sneaky way of hiding the irrelevance of your result has become fashionable.
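
A quick sketch of what that looks like in practice (made-up data; matplotlib assumed): the box plot shows only a level shift, while the scattergram reveals that the shift comes from a distinct patient subgroup:

```python
# Same invented data, two plots: a box plot flattens the shape of the
# spread, while a scattergram shows the raw results.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
controls = rng.normal(10, 1, 50)
# Most patients look like controls; a subgroup of 10 is shifted.
patients = np.concatenate([rng.normal(10, 1, 40), rng.normal(15, 1, 10)])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

ax1.boxplot([controls, patients])
ax1.set_xticks([1, 2])
ax1.set_xticklabels(["controls", "patients"])
ax1.set_title("box plot: subgroup hidden")

for x, data in ((1, controls), (2, patients)):
    # Small horizontal jitter so overlapping points stay visible.
    ax2.plot(x + rng.uniform(-0.08, 0.08, len(data)), data, "o", alpha=0.5)
ax2.set_xticks([1, 2])
ax2.set_xticklabels(["controls", "patients"])
ax2.set_title("scattergram: subgroup visible")

plt.tight_layout()
plt.show()
```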

Sigh.

But I suspect muddled thinking has been with us for ever - it is just manifest in a different way.