(just general) "What effect size would you expect?" (blogpost)

Discussion in 'Other Health News and Research' started by Dolphin, Jan 16, 2014.

  1. Dolphin

    Dolphin Senior Member

    Messages:
    10,734
    Likes:
    28,338
    (This may (?) be a bit complicated for somebody who has never taken a statistics course).

    I came across the following blog post (it was linked to by another blog):

    It highlights how, perhaps surprisingly, effect sizes bigger than d = 0.1 will occur very frequently by chance, especially in smaller studies.
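    (Not from the blog post itself, but a minimal sketch of the claim: draw two groups from the same normal population, so there is no true difference, and count how often the observed Cohen's d still exceeds 0.1 at a few sample sizes.)

    import numpy as np

    rng = np.random.default_rng(0)

    def cohens_d(a, b):
        """Cohen's d using the pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    for n in (20, 50, 200, 1000):  # per-group sample size
        ds = [cohens_d(rng.normal(size=n), rng.normal(size=n)) for _ in range(5000)]
        frac = np.mean(np.abs(ds) > 0.1)
        print(f"n = {n:4d}: |d| > 0.1 in {frac:.0%} of null simulations")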
     
    Last edited: Jan 16, 2014
  2. alex3619

    alex3619 Senior Member

    Messages:
    12,836
    Likes:
    36,448
    Logan, Queensland, Australia
    This is made worse by the issue that negative studies often don't get published, or get published in low impact journals.
     
    SickOfSickness likes this.
  3. Simon

    Simon

    Messages:
    1,932
    Likes:
    14,612
    Monmouth, UK
    I was a bit disappointed by the piece - Simons fails to mention confidence limits at all, and most of those effect sizes > 0.1 would not be significant, i.e. still a null result. Makes me wonder about psychology professors and stats...

    The kind of simulation he talks about - drawing random samples from two populations with the same mean - is very common and does show bigger effect sizes in smaller studies, but the confidence intervals for most of those effect sizes include 0, i.e. they are non-significant (sorry, there are blogs etc. on just this but I can't remember any right now).
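    A rough sketch of that kind of simulation (my own illustration, assuming the usual large-sample standard error for Cohen's d, not code from the blog), showing that most chance effect sizes above 0.1 come with a 95% confidence interval that straddles 0:

    import numpy as np

    rng = np.random.default_rng(1)

    def d_with_ci(a, b):
        """Cohen's d plus an approximate 95% CI (large-sample SE formula)."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        d = (a.mean() - b.mean()) / np.sqrt(pooled_var)
        se = np.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
        return d, d - 1.96 * se, d + 1.96 * se

    n = 30  # per-group sample size; both groups share the same population mean
    results = [d_with_ci(rng.normal(size=n), rng.normal(size=n)) for _ in range(5000)]
    big = [(lo, hi) for d, lo, hi in results if abs(d) > 0.1]
    covers_zero = np.mean([lo < 0 < hi for lo, hi in big])
    print(f"{len(big)} of 5000 null effect sizes had |d| > 0.1; "
          f"{covers_zero:.0%} of those CIs include 0")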

    He also warns about replications being judged on just having the same direction of effect (e.g. group A bigger than group B in both the original and the replication), and that this could be common - even with no true difference - for a replication effect size of 0.1. But even if a replicated effect size of 0.1, or 0.2, were significant (it wouldn't be, unless the study was huge), it would impress no one, precisely because it is a trivial effect size. So he seems to be tackling a straw man argument here.
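    A quick illustration of that point (again my own sketch, not the blog's code): with no true difference, a direction-only criterion is passed about half the time, while a d of 0.1 would need a very large sample to be significant at all.

    import numpy as np

    rng = np.random.default_rng(2)

    def mean_diff(n):
        """Difference in sample means when both groups come from the same population."""
        return rng.normal(size=n).mean() - rng.normal(size=n).mean()

    n, trials, same_sign = 30, 10_000, 0
    for _ in range(trials):
        original = mean_diff(n)     # "original study": null is true
        replication = mean_diff(n)  # "replication": null is still true
        same_sign += np.sign(original) == np.sign(replication)
    print(f"Same direction of effect in {same_sign / trials:.0%} of null study pairs")

    # Flip side: for d = 0.1 to reach p < 0.05 at all, each group needs roughly
    # 2 * (1.96 / 0.1)**2, i.e. about 770 participants, so only a very large
    # study could 'confirm' such a trivial effect in the first place.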
     
    Snow Leopard, Dolphin and biophile like this.
  4. Dolphin

    Dolphin Senior Member

    Messages:
    10,734
    Likes:
    28,338
    Thanks. I forgot to look out for the significance/confidence intervals point.

    I came to it via this analysis of a paper:
    http://asehelene.wordpress.com/2014...-glass-into-an-oddly-analyzed-clinical-paper/
    where the authors of the paper mentioned effect sizes without highlighting the importance of confidence intervals:
     
    Last edited: Jan 17, 2014
    Simon likes this.
