"The P-Value Is a Hoax, But Here's How to Fix It" (introduces Bayesian statistics)

Discussion in 'Other Health News and Research' started by Dolphin, Jul 6, 2015.

  1. Dolphin

    Dolphin Senior Member

    Messages:
    10,671
    Likes:
    28,172
    ebethc, JaimeS, barbc56 and 5 others like this.
  2. alex3619

    alex3619 Senior Member

    Messages:
    12,482
    Likes:
    35,013
    Logan, Queensland, Australia
    He rightly points out the influence of bias and fraud. These can have a huge effect: a statistically significant result can be due solely to bias, and I think this happens a lot.

    The problem with ascertaining prior probability is that it is highly subjective, even with math to back you up.

    What it amounts to is that, no matter how you dress it up, a relatively high p value is only a vague guide at best. However, very, very low p values, say 0.00001, might still overstate the evidence, but in that range even a big error might not matter much. Some biomedical findings have results even lower than this. Biomedical research is not all about p values between 0.01 and 0.05.

    The 0.05 debate was going on when I was first learning university-level science. Back then "real" science relied on 0.01 or lower. One view was that 0.05 was needed for messy theories in the real world, where much could not be quantified or tested. It was for waffly pretend science, not the real sciences. In physics extremely low p values are required, and even then they are not always trusted. P values are only one indicator.
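
    As a rough illustration of the point above that a significant result can arise from bias alone, here is a minimal simulation sketch (assuming Python with numpy and scipy; the 0.5 SD measurement offset, group size and run count are hypothetical):

    import numpy as np
    from scipy import stats

    # No true effect at all: both groups are drawn from the same distribution,
    # but the "treatment" measurements carry a systematic 0.5 SD offset (bias).
    rng = np.random.default_rng(0)
    n, bias, runs = 30, 0.5, 2000
    significant = 0
    for _ in range(runs):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(0.0, 1.0, n) + bias
        if stats.ttest_ind(control, treatment).pvalue < 0.05:
            significant += 1
    print(f"p < 0.05 in {significant / runs:.0%} of runs despite zero true effect")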
     
    JaimeS, barbc56 and SOC like this.
  3. Snow Leopard

    Snow Leopard Hibernating

    Messages:
    4,613
    Likes:
    12,435
    South Australia
    The use of Bayesian statistics doesn't really solve the problem (which is a lack of complete evidence); it just shifts the problem somewhere else. Attempts like this can of course be used to help guide our intuition, but they aren't the solution in themselves.
     
    natasa778 likes this.
  4. Dolphin

    Dolphin Senior Member

    Messages:
    10,671
    Likes:
    28,172
    The article also talks about the importance of replication.
     
    barbc56 likes this.
  5. Snow Leopard

    Snow Leopard Hibernating

    Messages:
    4,613
    Likes:
    12,435
    South Australia
    Replication can just skew the evidence base if the replication is just as biased as the original experiment...

    Bayesian inference is important, but even more important is a strong focus on questioning everything in an attempt to reduce bias.
     
  6. alex3619

    alex3619 Senior Member

    Messages:
    12,482
    Likes:
    35,013
    Logan, Queensland, Australia
    CBT/GET trials have been replicated many times. They typically have significant p values. I would not want to claim these studies are reliable. They are massively overinterpreted, and I think in part this is because many docs only read the abstracts or published short commentaries.
     
    ebethc and SOC like this.
  7. Dolphin

    Dolphin Senior Member

    Messages:
    10,671
    Likes:
    28,172
    I agree that replication doesn't solve all problems for the reasons given.
    But I'd like to see more replication studies in the ME/CFS field. I don't have full confidence in many of the one-off studies (which often will have used some sort of post hoc analyses).
     
    Valentijn, barbc56, SOC and 1 other person like this.
  8. alex3619

    alex3619 Senior Member

    Messages:
    12,482
    Likes:
    35,013
    Logan, Queensland, Australia
    Replication is a piece in the mix. Our big problem, though, is not so much with replication but with funding. Poor replication follows poor funding. Small cohort sizes, and hence underpowered studies, follow poor funding. Inadequate study design is often due to lack of funding. Sigh.
     
  9. Snow Leopard

    Snow Leopard Hibernating

    Messages:
    4,613
    Likes:
    12,435
    South Australia
    The discussion in the original article is interesting, with the example assuming a-priori numbers. The problem is that this is a catch-22: how do we really know what the true rate of positives is? Perhaps, given publication bias and successful human intuition (based on prior evidence and unpublished pilot studies), true positive studies could very well be the norm among published studies, not the exception.

    One of the problems with studies in ME/CFS might be not that the findings are due to chance, but that the observations themselves are trivial, or do not imply what the authors suggest they imply in the discussion.
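
    To make the sensitivity to that assumed prior rate concrete, here is a minimal sketch (the prior rates, 80% power and 0.05 alpha below are hypothetical, not figures from the article):

    # P(hypothesis is true | significant result) under assumed a-priori numbers.
    def ppv(prior, power=0.8, alpha=0.05):
        true_pos = prior * power          # true hypotheses that come out significant
        false_pos = (1 - prior) * alpha   # false hypotheses that come out significant
        return true_pos / (true_pos + false_pos)

    for prior in (0.01, 0.1, 0.5):
        print(f"prior rate {prior:.0%}: P(true | p < 0.05) = {ppv(prior):.2f}")
    # prior 1% -> 0.14, prior 10% -> 0.64, prior 50% -> 0.94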
     
  10. Dolphin

    Dolphin Senior Member

    Messages:
    10,671
    Likes:
    28,172
    Yes, I agree poor funding is a problem. But some people sometimes use "poor funding" solely to mean research bodies are to blame. To my mind, a lot of the funding problem has been that the ME/CFS community has not raised enough either.
     
    alex3619 likes this.
  11. alex3619

    alex3619 Senior Member

    Messages:
    12,482
    Likes:
    35,013
    Logan, Queensland, Australia
    This has been said before, but it's not clear that there really is an ME/CFS community. There are small communities like here, which is only a drop in the global bucket. The problem here parallels the issues in ME/CFS advocacy, or why more patients don't get involved. There are good reasons for this, which have been discussed at length, but so far we have no solutions that reliably and repeatedly work.
     
  12. Snow Leopard

    Snow Leopard Hibernating

    Messages:
    4,613
    Likes:
    12,435
    South Australia
    I'd say this has definitely changed recently, with all the crowd funding campaigns...
     
  13. Dolphin

    Dolphin Senior Member

    Messages:
    10,671
    Likes:
    28,172
    Yes, agreed. Though there are still many countries in which nothing, or virtually nothing, is raised (apart from the odd person who donates to groups in other countries). And there is more potential from on-the-ground fundraisers for research (like one sees quite a lot of in the UK) and, for example, from people leaving money in their wills, which rarely seems to happen now.
     
  14. barbc56

    barbc56 Senior Member

    Messages:
    3,652
    Likes:
    5,006
    Here are some interesting discussions on PR. I don't think any of these threads need to be merged, as reading them adds to this discussion. Trying to merge them in a coherent manner would be a daunting task. Some of the threads have related links.

    This is a very important issue when analysing a study.

    There are probably more but these are a start and tbh, I have no energy to look further.

    http://forums.phoenixrising.me/inde...ng-gold-standards-of-research-validity.28187/

    http://forums.phoenixrising.me/inde...ces-neuroskeptic-nov-2013-but-timeless.36646/

    Barb
     
  15. Woolie

    Woolie Senior Member

    Messages:
    1,930
    Likes:
    14,556
    Yes, I also feel the whole prior probability thing is open to abuse.

    But Bayesian stats also offers something really simple and useful that might help us get around some of the problems of null hypothesis testing:

    Introducing: The Bayes Factor:

    http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1167&context=jps

    Here's how it works. You might be deciding which of two hypotheses is best supported by your data - maybe a null hypothesis and an alternative one (but it could be other possibilities too). The Bayes Factor (BF) expresses how strongly your data support one hypothesis relative to the other. It's a simple ratio. So, for example, you might calculate how likely your data are under the alternative hypothesis relative to the null hypothesis. The bigger the Bayes Factor, the stronger the evidence for the alternative hypothesis; the smaller it is, the weaker that evidence. A Bayes Factor of 10,000 is pretty persuasive. One of 2.00 is not. The article linked above provides ways to calculate and interpret the values.

    Doesn't seem much different to what we do now, right? But you can do more - you're not limited to testing the null hypothesis. You can compare two alternative hypotheses. Based on Bayes Factors, you can even argue that the null hypothesis is actually correct (whereas with conventional hypothesis testing, you can never accept the null hypothesis; you can only reject it or fail to reject it).

    This kind of approach might help get us out of the quagmire that null hypothesis testing has got us into: the pressure to find a difference, that is, to reject H0, or else your study is deemed meaningless (and unpublishable).
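
    As a minimal sketch of the idea (assuming Python with scipy; the data and the choice of a uniform prior under H1 are hypothetical, and the article linked above gives fuller guidance on calculation and interpretation): compare H0: theta = 0.5 against H1: theta uniform on (0, 1), for 65 successes in 100 trials, via the ratio of marginal likelihoods.

    from scipy import stats, integrate

    n, k = 100, 65   # hypothetical data: 65 successes in 100 trials

    # Marginal likelihood of the data under H0 (theta fixed at 0.5).
    m0 = stats.binom.pmf(k, n, 0.5)

    # Marginal likelihood under H1: average the likelihood over a uniform prior on theta.
    m1, _ = integrate.quad(lambda theta: stats.binom.pmf(k, n, theta), 0, 1)

    bf10 = m1 / m0
    print(f"BF10 = {bf10:.1f}")   # roughly 11: the data favour H1 over H0 by about 11 to 1

    A BF10 near 1 would mean the data barely discriminate between the two hypotheses; a BF10 well below 1 would count as evidence for the null.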
     
    Dolphin and Snow Leopard like this.
  16. alex3619

    alex3619 Senior Member

    Messages:
    12,482
    Likes:
    35,013
    Logan, Queensland, Australia
    My will has one provision for funding for research for ME.
     
    Dolphin likes this.
