
Misleading P Values Increasing, not Decreasing in Medical Journals.

Discussion in 'Other Health News and Research' started by barbc56, May 17, 2016.

  1. barbc56

    barbc56 Senior Member

    Nothing showed up for this particular study but if I missed it, let me know.

This is out of Stanford University. The lead author is John Ioannidis. I was surprised that the use of the p value has increased rather than decreased. Hopefully, scientists such as Ioannidis will reverse this trend.
    My bold.

    My knowledge of Bayesian statistics is very minimal. I did find the following but I'm not sure how helpful it is. If anyone has a better source, feel free to post it. Thanks.
    Webdog, aaron_c, Snow Leopard and 2 others like this.
  2. Mel9

    Mel9 Senior Member

    NSW Australia

    The p value does not tell you 'something is true'
    Fair enough
    But it is usually reliable when used correctly in Analysis of Variance for results from correctly replicated experiments. The lower the p value the better.
  3. barbc56

    barbc56 Senior Member

Definitely, it does tell you something, and yes, the lower the p value the better. What Ioannidis is saying is that the p value has shortcomings. When I was in graduate school the p value was sacrosanct; recently, this has changed. Edit: I see you are saying what is written below. As you point out, taking previous and replicated studies into consideration is important, and that is the crux of the matter. Somehow I misread that. It's still helpful information.

The article below explains the limitations of relying only on the p value more clearly, and I realized I was familiar with some of the information. I don't know the actual statistical process used, but that's fine as I just wanted to know the rationale.
The issue is a priori plausibility: how strong is the hypothesis in the first place? If it isn't strong to begin with, the study is not really saying anything, even with a low p value. The example I've always liked, which is certainly extreme, is a study comparing deaths from jumping out of an airplane with and without a parachute. The hypothesis is not a plausible question, as we already know how parachutes work. Therefore the conclusion, which we already know, does not add to any knowledge base.
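That a priori point can be made concrete with a back-of-the-envelope calculation (a sketch with made-up numbers, not figures from the Ioannidis study): when few of the hypotheses being tested are true to begin with, most results that clear p < 0.05 are false positives.

```python
# Illustrative only: how prior plausibility changes what a "significant"
# result is worth. The power and prior values are invented for the sketch.

def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a p < alpha finding reflects a true effect,
    given the prior probability that the hypothesis is true."""
    true_hits = power * prior          # true effects that get detected
    false_hits = alpha * (1 - prior)   # null effects that cross p < alpha
    return true_hits / (true_hits + false_hits)

# Strong a priori hypothesis: most significant results are real.
print(round(ppv(prior=0.5), 2))   # 0.94
# Long-shot hypothesis: most "significant" results are false alarms.
print(round(ppv(prior=0.01), 2))  # 0.14
```

The arithmetic is just conditional probability; the fragile input is the prior, which is exactly the a priori plausibility question.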

    This is my understanding so any errors are mine. Any feedback is welcome.
    Last edited: May 17, 2016
    Mel9, Valentijn and TiredSam like this.
  4. aaron_c

    aaron_c Senior Member

    @barbc56 Could you unpack that example a bit more? I am not connecting the dots.
  5. barbc56

    barbc56 Senior Member

    Someone else may need to jump in here as I'm not a statistician.

It's similar to the GIGO (garbage in, garbage out) principle.

    This says it better.
    In other words if you start with shit and then try to dress it up and call it something else by spraying it gold, it's still shit. :D

A good p value may be necessary, but it isn't always sufficient to say whether you've proven your theory.
    Last edited: May 19, 2016
    *GG* likes this.
  6. Jonathan Edwards

    Jonathan Edwards "Gibberish"

    People need to keep saying this but I doubt there will be any change until there is a total overhaul of the politics of science. At present nobody really cares about whether scientific papers are of value - just that they generate grants and citations.

    The biggest problem I see is that the p value tells you nothing about whether or not the size of the difference makes it of any interest, or whether the consistency of the difference fits the hypothesis. If the hypothesis is that A is the cause of B then showing that 10% of patients with B have A in their tests refutes that because 90% don't. The fact that only 2% of controls had A and the difference was significant is of no interest.

What most of these analyses are supposed to do is find the likelihood that test and control results come from 'the same population' - which is usually the null hypothesis. A p of <0.05 means less than a one in 20 chance that the test samples are just part of the same population as controls. But in real life we know they are not - by definition they are a different population. And whatever way they were identified is more than likely going to bring along with it some telltale hint of a systematic difference from controls. So most of the time a p of <0.02 is likely to be showing up a difference due to some association with something irrelevant. It might be whether you did the test on Monday or Thursday and the buffer had gone off a bit.
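A minimal simulation of that point (illustrative numbers only; the "assay" and its 2-unit drift are invented for the sketch): with enough samples, a tiny systematic offset - the statistical equivalent of the buffer going off - yields a very small p value even though the difference is a fraction of the normal spread of individual results.

```python
import math
import random

# Hypothetical assay values: controls vs. the "same" population measured
# with a small systematic offset (e.g. a drifting buffer). Numbers invented.

def two_sample_p(a, b):
    """Two-sample test p-value using a normal approximation
    (adequate at these sample sizes)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
controls = [random.gauss(100, 15) for _ in range(10000)]
patients = [random.gauss(102, 15) for _ in range(10000)]  # offset by 2 units

p = two_sample_p(patients, controls)
print(p < 0.001)  # True: the 2-unit offset is "highly significant"...
# ...yet it is tiny against the 15-unit spread of individual results.
```

The p value shrinks with sample size, so by itself it says nothing about whether a difference of this size could matter.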

But if you look at the actual individual results you usually have a much more powerful way of telling whether or not the difference means something, because biological abnormalities lead to shifts in the way results are spread across a group. The human brain is very good at picking up shifts in shapes - and quite good at ignoring the sort of shift of level without shift of shape you might get with the buffer going off. Which is why the much bigger problem these days is that almost everyone uses histograms or box plots instead of scattergrams that show the raw results. The sneaky way of hiding the irrelevance of your result has become fashionable.


    But I suspect muddled thinking has been with us for ever - it is just manifest in a different way.
  7. *GG*

    *GG* Senior Member

    Concord, NH
    LOL, or as a friend of mine says, you cannot polish a turd!

