Good ways to slant the data in your direction--quite innocently of course!
* Throw all your data into a computer and report as significant any relation where P<0.05
* If baseline differences between the groups favour the intervention group, remember not to adjust for them
* Do not test your data to see if they are normally distributed. If you do, you might get stuck with non-parametric tests, which aren't as much fun - this is a common one in psych studies
* Ignore all withdrawals (drop outs) and non-responders, so the analysis only concerns subjects who fully complied with treatment
* Always assume that you can plot one set of data against another and calculate an "r value" (Pearson correlation coefficient), and assume that a "significant" r value proves causation - another common one
* If outliers (points which lie a long way from the others on your graph) are messing up your calculations, just rub them out. But if outliers are helping your case, even if they seem to be spurious results, leave them in - a psych favourite
* If the confidence intervals of your result overlap zero difference between the groups, leave them out of your report. Better still, mention them briefly in the text but don't draw them in on the graph—and ignore them when drawing your conclusions
* If the difference between two groups becomes significant four and a half months into a six month trial, stop the trial and start writing up. Alternatively, if at six months the results are "nearly significant," extend the trial for another three weeks
* If your results prove uninteresting, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your intervention worked after all in Chinese women aged 52-61
* If analysing your data the way you plan to does not give the result you wanted, run the figures through a selection of other tests
  - Practically every psych study has a variation of this
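The first tip - throwing everything into a computer and reporting any relation with P<0.05 - can be seen in action with a quick simulation. The sketch below (hypothetical, not from the original text; it uses a normal approximation to the two-sample t-test, which is adequate at n = 50 per group) compares two groups drawn from the *same* distribution a thousand times. Roughly 5% of these null comparisons come out "significant" by chance alone, which is exactly what that tip exploits.

```python
# Simulate the multiple-comparisons problem: run many tests where the
# null hypothesis is TRUE and count how often P < 0.05 anyway.
import random
import statistics

random.seed(42)  # arbitrary seed, for reproducibility

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    se = (statistics.variance(a) / na + statistics.variance(b) / nb) ** 0.5
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - statistics.NormalDist().cdf(z))

n_tests = 1000
false_positives = 0
for _ in range(n_tests):
    # Both "groups" come from the SAME distribution: any difference is noise.
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(group_a, group_b) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons were 'significant'")
```

The count lands near 50 out of 1000 (about 5%), so a study that quietly tests dozens of relations is almost guaranteed a few "findings" even when nothing is there. The same mechanism underlies the subgroup-fishing and try-other-tests tips further down the list.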