
Scientific method: rethinking 'gold standards' of research validity

Discussion in 'Other Health News and Research' started by natasa778, Feb 12, 2014.

  1. natasa778

    natasa778 Senior Member

    Messages:
    1,494
    Likes:
    1,363
    London UK
    Scientific method: Statistical errors
    P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume.


    http://www.nature.com/news/scientific-method-statistical-errors-1.14700


     
    Last edited: Feb 12, 2014
    Bob, biophile, SOC and 2 others like this.
  2. alex3619

    alex3619 Senior Member

    Messages:
    7,723
    Likes:
    12,640
    Logan, Queensland, Australia
    Yes, P values are NOT reliable. In the hard sciences researchers usually want P values to be very, very low, and distrust the analysis unless they are. In the softer sciences, merely somewhat low P values seem to be considered good enough.

    These kinds of arguments are things I have been looking into. P values only reflect how likely a result would be if chance alone were at work, and probability is a tricky thing. It turns out that rare events are very common, because the system is stacked to favour all sorts of biases. So it's possible that over 50% of results in psych are false, and the rest of medicine is not much better. Yet the medical profession treats these results as gold.
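    As a rough illustration of that last point (a sketch of my own, not from the article; it just assumes the conventional 5% threshold and studies where nothing real is going on), chance alone reliably produces "significant" findings:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        alpha = 0.05
        n_experiments = 1000   # independent studies where the null is true
        n_per_group = 30

        false_positives = 0
        for _ in range(n_experiments):
            # both groups drawn from the SAME distribution, so any
            # "effect" that shows up is pure chance
            a = rng.normal(0, 1, n_per_group)
            b = rng.normal(0, 1, n_per_group)
            if stats.ttest_ind(a, b).pvalue < alpha:
                false_positives += 1

        print(f"{false_positives} of {n_experiments} null studies were 'significant'")
        # expect roughly 50, i.e. about 1 in 20 -- before any bias is even added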
     
    Bob and SOC like this.
  3. CBS

    CBS Senior Member

    Messages:
    1,379
    Likes:
    313
    Western US
    The term "statistically significant results" is often employed. A statistic is more or less likely to be stable or reproducible (that is what a p value tells you), but the results alone are never significant. Only clinical value is significant or insignificant. The best studies address clinically significant outcomes BEFORE setting a budget, and use them as an essential guide to sample size and to the types of measures collected. It is disheartening to see how few researchers, in all disciplines, appreciate this critical distinction.
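    To make the distinction concrete (a sketch with invented numbers, not drawn from any real study): with a large enough sample, even a difference far too small to matter to any patient comes out "statistically significant":

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # assume a true difference of 0.5 points on a 0-33 fatigue scale
        # (SD = 5): statistically detectable, clinically meaningless
        treated = rng.normal(20.0, 5.0, 5000)
        control = rng.normal(20.5, 5.0, 5000)

        result = stats.ttest_ind(treated, control)
        print(f"p = {result.pvalue:.2g}")   # tiny p value: "significant"
        print(f"difference = {control.mean() - treated.mean():.2f} points")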
     
  4. barbc56

    barbc56 Senior Member

    Messages:
    1,578
    Likes:
    973
    Here's an interesting article. It also includes some very informative links.

    Basically, it comes down to prior plausibility, and to not just looking at one study in isolation, for a theory to be valid. That's the short version. :)

    http://www.sciencebasedmedicine.org/5-out-of-4-americans-do-not-understand-statistics/

    Love this quote.

    :eek:
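    A back-of-the-envelope version of the prior-plausibility point (my own sketch with assumed alpha, power and priors, not taken from the linked article): the chance that a "significant" result is actually true depends heavily on how plausible the hypothesis was to begin with:

        # positive predictive value of a "significant" finding:
        #   PPV = (power * prior) / (power * prior + alpha * (1 - prior))
        def ppv(prior, alpha=0.05, power=0.8):
            true_pos = power * prior
            false_pos = alpha * (1 - prior)
            return true_pos / (true_pos + false_pos)

        for prior in (0.5, 0.1, 0.01):   # assumed prior plausibility
            print(f"prior {prior:>4}: P(true | p < 0.05) = {ppv(prior):.2f}")
        # prior  0.5: 0.94 -- plausible hypotheses: significance means a lot
        # prior  0.1: 0.64
        # prior 0.01: 0.14 -- implausible ones: most "significant" results are false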
     
    Firestormm and biophile like this.
  5. alex3619

    alex3619 Senior Member

    Messages:
    7,723
    Likes:
    12,640
    Logan, Queensland, Australia
    Yes @barbc56, that is just one of the articles I have used to come to my own conclusions.

    P values cannot ensure something is correct. The result still has to make sense, be obtained with good methodology and analysis, and so on. A biased study, a fraudulent study, poor methodology, even pure chance can all give statistically significant results. A P value is only a heuristic, only suggestive.
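    One way to put a number on the bias problem (my own sketch, assuming a researcher who tests pure noise but peeks at the data and stops the moment p < 0.05):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def peeking_study(start_n=10, max_n=100, alpha=0.05):
            # add one subject per group at a time, testing after each;
            # both groups are noise, so any "finding" is a false positive
            a = list(rng.normal(0, 1, start_n))
            b = list(rng.normal(0, 1, start_n))
            while len(a) < max_n:
                if stats.ttest_ind(a, b).pvalue < alpha:
                    return True     # stopped early and declared a "finding"
                a.append(rng.normal(0, 1))
                b.append(rng.normal(0, 1))
            return stats.ttest_ind(a, b).pvalue < alpha

        runs = 1000
        hits = sum(peeking_study() for _ in range(runs))
        print(f"{hits / runs:.0%} false positives, versus the nominal 5%")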

    PS Some might find this amusing. I have read that article before, but my fluency in ME Typoese and my not paying much attention to titles meant I auto-translated the title as "4 out of 5". This time I read it correctly: "5 out of 4".
     
    Last edited: Feb 12, 2014
    Valentijn and barbc56 like this.
  6. barbc56

    barbc56 Senior Member

    Messages:
    1,578
    Likes:
    973
    @alex3619

    LOL, I did the same thing. I had also read this article when it was first published and try to read these blogs as much as possible.:)
     
    alex3619 likes this.
  7. biophile

    biophile Places I'd rather be.

    Messages:
    1,410
    Likes:
    4,944
    Clinical significance is another can of worms. Take the PACE Trial, for example, where a mere 2 points on a scale of 0-33 was regarded as a "moderate" clinically significant improvement in fatigue. How did that happen?

    PACE based their definitions of clinically significant improvement on the standard deviation of the baseline scores: 0.3 SD for a minimally clinically important difference and 0.5 SD for a clinically useful difference. The SDs of the group scores were restricted by the exclusion of the severely and mildly affected, so the thresholds for clinically significant improvement were correspondingly low.

    A worst-case scenario exposes the problem with this method: if PACE had recruited patients who all scored the same at baseline, the SD would be zero, and improving by a single increment would represent an unlimited effect size, which is absurd.

    PACE abandoned their generally more stringent original definitions of clinically significant improvement and replaced almost all of them with post-hoc definitions. The original definitions were just as arbitrary, but they were at least generally based on previous trials.

    Who here would regard a mere 2-point improvement on the Chalder fatigue scale (Likert scoring) as "moderate"? Just another reason why more attention needs to be given to what actually counts as a clinically useful improvement to patients.
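    To see how SD-derived thresholds can shrink like that (a sketch; the baseline SD below is illustrative, not taken from the PACE paper):

        # PACE-style thresholds are fractions of the baseline SD; a narrow
        # entry range keeps that SD small, so the thresholds shrink too
        baseline_sd = 4.0            # assumed SD on the 0-33 Likert scale

        mcid = 0.3 * baseline_sd     # "minimally clinically important"
        useful = 0.5 * baseline_sd   # "clinically useful"
        print(f"thresholds: {mcid:.1f} and {useful:.1f} points out of 33")
        # -> 1.2 and 2.0 points, which is how 2 points became "moderate"

        # degenerate case: if every patient scored the same at baseline,
        # baseline_sd would be 0 and any 1-point change would be an
        # infinite number of SDs -- the absurdity described above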
     
    Last edited: Feb 12, 2014
    Valentijn, Bob, Firestormm and 2 others like this.
  8. Firestormm

    Firestormm Guest

    Messages:
    5,824
    Likes:
    5,982
    Cornwall England

    Nice to have heard that at least Columbia and Lipkin's team have biostatisticians standing by from the start of the proposed Microbiome and Cytokine Study:
     
    Last edited: Feb 12, 2014
    alex3619 and Bob like this.
  9. Sean

    Sean Senior Member

    Messages:
    1,315
    Likes:
    2,383
    Come on down, 6 Sigma!

    (I would accept 4.5 ;) )

    And add me to the list of those who regard clinical significance as a far more important and relevant standard to reach. Reaching statistical significance is one of those obviously-must-be-ticked boxes, but nothing more than that – it just gets the basic results into the main discussion; it doesn't make them 'real'.
     
    Valentijn likes this.
