Publication Bias in Psychology: A Diagnosis Based on Correlation between Effect Size and Sample Size

Dolphin (Senior Member, 17,567 messages)
Free full text: http://www.plosone.org/article/info:doi/10.1371/journal.pone.0105825

RESEARCH ARTICLE

Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size


Anton Kühberger, Astrid Fritz, Thomas Scherndl


Published: September 05, 2014 | DOI: 10.1371/journal.pone.0105825


Abstract


Background


The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon.

Therefore, additional reporting of effect size is often recommended.

Effect sizes are theoretically independent of sample size.

Yet this may not hold true empirically: non-independence could indicate publication bias.


Methods


We investigate whether effect size is independent of sample size in psychological research.

We randomly sampled 1,000 psychological articles from all areas of psychological research.

We extracted p values, effect sizes, and sample sizes from all empirical papers, calculated the correlation between effect size and sample size, and examined the distribution of p values.


Results


We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size.

In addition, we found an inordinately high number of p values just passing the boundary of significance.

Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings.


Conclusion


The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
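A minimal simulation sketch (not from the paper; numpy and scipy assumed) of the mechanism the authors describe: if only significant results get published, small studies can only appear in print with inflated effects, so effect size and sample size end up negatively correlated even when the true effect is constant.

```python
# Hypothetical illustration of a publication filter: every simulated study
# has the SAME true effect, but only significant results are "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2          # fixed true standardized effect (Cohen's d)
n_studies = 20_000         # number of simulated studies

ns, observed_d = [], []
for _ in range(n_studies):
    n = int(rng.integers(10, 201))            # sample size varies by study
    sample = rng.normal(true_effect, 1.0, n)  # simulated measurements
    t, p = stats.ttest_1samp(sample, 0.0)
    if p < 0.05 and t > 0:                    # the "publication filter"
        ns.append(n)
        observed_d.append(sample.mean() / sample.std(ddof=1))

# Correlation between published effect size and sample size: negative,
# because small-n studies only pass the filter with large observed effects.
r = stats.pearsonr(ns, observed_d)[0]
print(f"published studies: {len(ns)}, r(effect size, n) = {r:.2f}")
```

Removing the `if p < 0.05` filter makes the correlation vanish, which is the sense in which effect size is "theoretically independent" of sample size.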
 

A.B. (Senior Member, 3,780 messages)
Do I understand this right? In plain English, psychology studies with few participants report great effects, but the more participants there are, the smaller the effect. This suggests that many studies select participants in a biased manner, or that small studies are being done and redone until they show the desired outcome. The barely statistically significant findings also suggest that data is tortured until it confesses, or that studies are being done and redone until by sheer chance the desired result is obtained.
 

Bob (Senior Member, 16,455 messages, England, south coast)
I haven't read the full paper but my interpretation is the same as yours.
(But I couldn't have summarised it so well.)
 

A.B. (Senior Member, 3,780 messages)
In psychological and psychiatric research, the odds of reporting a positive result were around five times higher than in astronomy, and over 90% of papers in psychology and psychiatry reported positive results.

Oh wow, this is funny. Come on, I think you can close that last 10% gap. Then you no longer need to do research because you're always right anyways.
 