• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


"The P-Value Is a Hoax, But Here's How to Fix It" (introduces Bayesian statistics)

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
He rightly points out the influence of bias and fraud, which can be huge. A statistically significant study can be due solely to bias, and I think this happens a lot.

The problem with ascertaining prior probability is that it's highly subjective, even with math to back you up.

What it amounts to is that, no matter how you dress it up, any high p value is only a vague guide at best. However, very low p values, say 0.00001, might still be overestimates, but in that range even a big error may not matter much. Some biomedical findings have p values even lower than this. Biomedical research is not all about p values between 0.01 and 0.05.

The 0.05 debate was already going on when I was first learning university-level science. Back then "real" science relied on 0.01 or lower. One view was that 0.05 was needed for messy theories in the real world, where much could not be quantified or tested. It was for waffly pretend science, not the real sciences. In physics extremely low p values are required, and even then they are not always trusted. p values are only one indicator.
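The thresholds in that debate have a concrete meaning: under a true null hypothesis, p values are uniformly distributed, so the alpha you choose is exactly the false-positive rate you are signing up for (before any bias enters). A minimal stdlib-only simulation can show this; the toy two-sample z test with known unit variance and all names below are illustrative, not from the article:

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p value for a two-sample z test with known unit variance."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

random.seed(42)
pvals = []
for _ in range(10_000):
    # Both groups come from the SAME distribution: any "effect" is pure noise.
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    pvals.append(z_test_p(a, b))

print(sum(p < 0.05 for p in pvals) / len(pvals))  # ≈ 0.05
print(sum(p < 0.01 for p in pvals) / len(pvals))  # ≈ 0.01
```

With both groups drawn from the same distribution, about 5% of experiments come out "significant" at 0.05 and about 1% at 0.01, which is the whole difference the two thresholds buy you.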
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
The use of Bayesian statistics doesn't really solve the problem (which is a lack of complete evidence), it just shifts the problem somewhere else. Attempts like this can of course be used to help guide our intuition but aren't the solution in themselves.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
The article also talks about the importance of replication.

Replication can just skew the evidence base if the replication is just as biased as the original experiment...

Bayesian inference is important, but even more important is a strong focus on questioning everything in an attempt to reduce bias.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
CBT/GET trials have been replicated many times, and they typically have significant p values, but I would not want to claim these studies are reliable. They are massively overinterpreted, and I think in part this is because many docs only read the abstracts or published short commentaries.
 

Dolphin

Senior Member
Messages
17,567
I agree that replication doesn't solve all problems for the reasons given.
But I'd like to see more replication studies in the ME/CFS field. I don't have full confidence in many of the one-off studies (which often will have used some sort of post hoc analyses).
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Replication is a piece in the mix. Our big problem though is not so much with replication but with funding. Poor replication follows poor funding. Poor cohort sizes, for underpowered studies, follows poor funding. Inadequate study design is often due to lack of funding. Sigh.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
The discussion in the original article is interesting, with the example assuming a priori numbers. The problem is that this is a catch-22: how do we really know what the true rate of positives is? Perhaps, given publication bias and successful human intuition (based on prior evidence and unpublished pilot studies), among the studies that are published, true positives could very well be the norm, not the exception.

One of the problems with studies in ME/CFS might not be that the findings are due to chance, but that the observations themselves are trivial, or do not imply what the authors suggest they imply in the discussion.
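The article-style argument about the true rate of positives can be made concrete with the standard positive-predictive-value calculation: by Bayes' rule, the chance that a significant finding is real depends on the prior rate of true hypotheses, the study power, and alpha. The numbers below are illustrative assumptions, not estimates for ME/CFS or any other field:

```python
def ppv(prior, power=0.8, alpha=0.05):
    """P(hypothesis is true | result is significant), by Bayes' rule."""
    true_positives = prior * power          # true hypotheses that reach p < alpha
    false_positives = (1 - prior) * alpha   # false hypotheses that reach p < alpha
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true, a "significant" result
# is correct about 64% of the time; at a 50/50 prior, about 94%.
print(round(ppv(0.10), 2))  # 0.64
print(round(ppv(0.50), 2))  # 0.94
```

This is exactly why the assumed prior matters so much: the same p < 0.05 result means very different things depending on how often tested hypotheses are true in the first place.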
 

Dolphin

Senior Member
Messages
17,567
alex3619 said:
Replication is a piece in the mix. Our big problem though is not so much with replication but with funding. Poor replication follows poor funding. Poor cohort sizes, for underpowered studies, follows poor funding. Inadequate study design is often due to lack of funding. Sigh.
Yes, I agree poor funding. But some people sometimes use "poor funding" solely to mean research bodies are to blame. To my mind, a lot of the funding problem has been that the ME/CFS community has not raised enough too.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
This has been said before, but it's not clear that there really is an ME/CFS community. There are small communities like here, which are only a drop in the global bucket. The problem here parallels the issues in ME/CFS advocacy, or why more patients don't get involved. There are good reasons for this, which have been discussed at length, but so far we have no solutions that reliably and repeatedly work.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Dolphin said:
Yes, I agree poor funding. But some people sometimes use "poor funding" solely to mean research bodies are to blame. To my mind, a lot of the funding problem has been that the ME/CFS community has not raised enough too.

I'd say this has definitely changed recently, with all the crowd funding campaigns...
 

Dolphin

Senior Member
Messages
17,567
Snow Leopard said:
Dolphin said:
Yes, I agree poor funding. But some people sometimes use "poor funding" solely to mean research bodies are to blame. To my mind, a lot of the funding problem has been that the ME/CFS community has not raised enough too.

I'd say this has definitely changed recently, with all the crowd funding campaigns...
Yes, agree. Though there are still many countries in which nothing, or virtually nothing, is raised (apart from the odd person who donates to groups in other countries). And there is more potential from on-the-ground fundraisers for research (like one sees quite a lot of in the UK) and, for example, from people leaving money to research in their wills, which rarely seems to happen now.
 

barbc56

Senior Member
Messages
3,657
Here are some interesting discussions on PR. I don't think any of these threads need to be merged, as reading them adds to this discussion; trying to merge them in a coherent manner would be a daunting task. Some of the threads have related links.

This is a very important issue when analysing a study.

There are probably more but these are a start and tbh, I have no energy to look further.

http://forums.phoenixrising.me/inde...ng-gold-standards-of-research-validity.28187/

http://forums.phoenixrising.me/inde...ces-neuroskeptic-nov-2013-but-timeless.36646/

Barb
 
Messages
3,263
Yes, I also feel the whole prior probability thing is open to abuse.

But Bayesian stats also offers something really simple and useful that might help us get around some of the problems of null hypothesis testing:

Introducing: The Bayes Factor:

http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1167&context=jps

Here's how it works. You might be deciding which of two hypotheses is better supported by your data - maybe a null hypothesis and an alternative one (but it could be other pairings too). The Bayes Factor (BF) expresses how likely your data are under one hypothesis relative to the other. It's a simple ratio. So, for example, you might calculate how likely your data are under the alternative hypothesis relative to the null hypothesis. The bigger the Bayes Factor, the stronger the support for the alternative hypothesis; the smaller it is, the weaker that support. A Bayes Factor of 10,000 is pretty persuasive. One of 2.00 is not. The article linked above provides ways to calculate and interpret the values.

Doesn't seem much different to what we do now, right? But you can do more - you're not limited to testing against the null hypothesis. You can compare two alternative hypotheses. Based on Bayes Factors, you can even argue that the null hypothesis is actually correct (whereas with conventional hypothesis testing, you can never accept the null hypothesis - you can only reject it or fail to reject it).

This kind of approach might help get us out of the quagmire that null hypothesis testing has got us into: the pressure to find a difference - that is, to reject H0 - or else your study is meaningless (and unpublishable).
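As a concrete sketch (not from the linked article), here is a Bayes Factor for the simplest possible case: coin flips, comparing H1 "the heads probability is unknown, uniform on [0,1]" against H0 "the coin is fair". Both marginal likelihoods have closed forms, so only the standard library is needed:

```python
import math

def log_binom(n, k):
    """Log of the binomial coefficient C(n, k)."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def bayes_factor_10(k, n):
    """BF for k heads in n flips: H1 (p ~ Uniform(0,1)) vs H0 (p = 0.5)."""
    log_m0 = log_binom(n, k) + n * math.log(0.5)  # P(data | H0)
    log_m1 = -math.log(n + 1)                     # P(data | H1) integrates to 1/(n+1)
    return math.exp(log_m1 - log_m0)

print(bayes_factor_10(65, 100))  # ≈ 11: the data favour a biased coin
print(bayes_factor_10(50, 100))  # ≈ 0.12: the data favour the fair coin
```

Note the second call: with 50 heads in 100 flips the Bayes Factor is about 0.12, i.e. roughly 8-to-1 in favour of the null - exactly the kind of "evidence for H0" statement that conventional hypothesis testing cannot make.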