A topic for discussion if anyone is interested. I have just been laying down the law on a particularly bad case of p-hacking, plus an unjustified assumption that correlation implies causation, in yet another psych study: post #8 on this thread: http://forums.phoenixrising.me/inde...sment-predicts-me-severity.53509/#post-889232

In summary, here's what I said:

1. p-hacking. Carry out lots of statistical tests on a pile of data, look for any that happen to fall just below the magic p = 0.05 threshold, and attribute meaning to what is probably a chance variation. If you run enough tests on a completely random set of data, some are sure to come out below 0.05 purely by chance: at that threshold, roughly 1 in 20 tests on pure noise will look 'significant'. That's why psychologists love p-hacking: they can hand out lots of questionnaires, run them through stats packages they probably barely understand, search for magic numbers below 0.05, and hey presto, a published paper. They have 'discovered' something. WRONG. (There's a quick simulation at the end of this post showing just how easy this is.)

2. Assume correlation implies causation. They find that most of the factors they studied were not statistically significant, but luckily one was, so they build a theory around it. Hey presto, a published paper that has 'discovered' something clinically significant. WRONG.

...........................................

We are all aware that this goes on a lot in psychological research. Esther Crawley is a master of the art (in the worst possible sense). That's partly why I am so against her running MEGA: it would give her access to thousands of questionnaires full of lovely data to hack her way through until retirement, digging out all sorts of meaningless correlations and finding more opportunities to blame patients and parents.

So far, so uncontroversial.

..................................................

But what about all the lovely biomedical studies? We hear, for example, that some researchers somewhere have found a biomarker, using quite a small sample of patients and a lot of data. But did they just p-hack through the data until a p value fell below 0.05 and shout bingo? I don't know. And other small studies have come up with all sorts of interesting-sounding 'significant' findings that other teams don't seem able to replicate.

Publication and future research funding can depend on having a track record of finding interesting and significant results. Jobs depend on publication, and publication depends on 'significant' results.

I think the good scientists are well aware of this, and make it clear that their findings are preliminary and need to be validated in further large independent cohorts. But are we (and I include myself in this) so eager to hear about biological evidence that we jump on every 'significant' p value, and want to know the implications for treatment before it has even been validated?

Discuss.
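
PS: For anyone who wants to see the arithmetic for themselves, here is a minimal sketch of the point in item 1, in Python (assuming numpy and scipy are installed; the item counts and sample sizes are made up purely for illustration). It runs 100 t-tests comparing two groups drawn from exactly the same distribution, so every 'significant' result it finds is noise by construction:

[code]
# Minimal sketch: why lots of tests on pure noise yield 'significant' p values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_items = 100      # pretend we measured 100 unrelated questionnaire items
n_patients = 50    # per group

false_positives = 0
for _ in range(n_items):
    # Both groups come from the SAME distribution: there is no real effect.
    group_a = rng.normal(loc=0.0, scale=1.0, size=n_patients)
    group_b = rng.normal(loc=0.0, scale=1.0, size=n_patients)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1   # a 'discovery' that is pure chance

print(f"{false_positives} of {n_items} tests 'significant' at p < 0.05")
# Expect about 5 'significant' results on average - every one of them spurious.
[/code]

Run it a few times with different seeds: the count bounces around, but it is almost never zero. Pick out whichever items happen to cross the line, write them up as findings, and you have a p-hacked paper.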