It talks about alternative medicine, but I think it would apply equally to most psychological research.
http://edzardernst.com/2013/09/can-...l-which-inevitably-produces-a-positive-result
Can one design a clinical study in such a way that it looks highly scientific but, at the same time, has zero chances of generating a finding that the investigators do not want? In other words, can one create false positive findings at will and get away with it? I think it is possible; what is more, I believe that, in alternative medicine, this sort of thing happens all the time. Let me show you how it is done; four main points usually suffice:
- The first rule is that it ought to be an RCT; if not, critics will say the result was due to selection bias. Only RCTs have the reputation of being ‘top notch’.
- Once we are clear about this design feature, we need to define the patient population. Here the trick is to select individuals with an illness that cannot be quantified objectively. Depression, stress, fatigue… the choice is vast. The aim must be to employ an outcome measure that is well-accepted, validated etc. but which nevertheless is entirely subjective.
- Now we need to consider the treatment to be “tested” in our study. Obviously we take the one we are fond of and want to “prove”. It helps tremendously if this intervention has an exotic name and involves some exotic activity; this raises our patients’ expectations, which will affect the result. And it is important that the treatment is a pleasant experience; patients must like it. Finally, it should involve not just one but several sessions in which the patient can be persuaded that our treatment is the best thing since sliced bread - even if, in fact, it is entirely bogus.
- We also need to make sure that, for our particular therapy, no universally accepted placebo exists which would allow patient-blinding. That would be fairly disastrous. And we certainly do not want to be innovative and create such a placebo either; we just pretend that controlling for placebo effects is impossible or undesirable. By far the best solution would be to give the control group no treatment at all. Like this, they are bound to be disappointed for missing out on a pleasant experience, which, in turn, will contribute to unfavourable outcomes in the control group. This little trick will, of course, make the results in the experimental group look even better.

That’s about it! No matter how ineffective our treatment is, there is no conceivable way our study can generate a negative result; we are in the pink!
Now we only need to run the trial and publish the positive results. It might be advisable to recruit several co-authors for the publication – that looks more serious and is not too difficult: people are only too keen to prolong their publication list. And we might want to publish our study in one of the many CAM journals that are not too critical, as long as the result is positive.
Once our article is in print, we can legitimately claim that our bogus treatment is evidence-based. With a bit of luck, other research groups will proceed in the same way and soon we will have not just one but several positive studies. If not, we need to do two or three more trials along the same lines. The aim is to eventually do a meta-analysis that yields a convincingly positive verdict on our phony intervention.
You might think that I am exaggerating beyond measure. Perhaps a bit, I admit, but I am not all that far from the truth, believe me. You want proof? What about this one?
Researchers from the Charité in Berlin just published an RCT to investigate the effectiveness of a mindful walking program in patients with high levels of perceived psychological distress.
To prevent allegations of exaggeration, selective reporting, spin etc. I take the liberty of reproducing the abstract of this study unaltered:
Participants aged between 18 and 65 years with moderate to high levels of perceived psychological distress were randomized to 8 sessions of mindful walking in 4 weeks (each 40 minutes walking, 10 minutes mindful walking, 10 minutes discussion) or to no study intervention (waiting group). Primary outcome parameter was the difference to baseline on Cohen’s Perceived Stress Scale (CPSS) after 4 weeks between intervention and control.
Seventy-four participants were randomized in the study; 36 (32 female, 52.3 ± 8.6 years) were allocated to the intervention and 38 (35 female, 49.5 ± 8.8 years) to the control group. Adjusted CPSS differences after 4 weeks were -8.8 [95% CI: -10.8; -6.8] (mean 24.2 [22.2; 26.2]) in the intervention group and -1.0 [-2.9; 0.9] (mean 32.0 [30.1; 33.9]) in the control group, resulting in a highly significant group difference (P < 0.001).
Conclusion. Patients participating in a mindful walking program showed reduced psychological stress symptoms and improved quality of life compared to no study intervention. Further studies should include an active treatment group and a long-term follow-up
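The critique of the waiting-list design can be illustrated with a toy simulation. This is a minimal sketch, not a reanalysis of the study: the treatment's true effect is set to zero, and the reporting shifts (-6 for expectation in the pampered group, +1 for disappointment in the waiting group) and the noise level are invented numbers chosen purely for illustration.

```python
# Hypothetical simulation: a treatment with ZERO real effect, measured on a
# subjective scale, against a no-treatment waiting control. The entire group
# difference is produced by assumed expectation/disappointment reporting
# shifts, yet the t statistic comes out "highly significant".
import math
import random
import statistics

random.seed(1)

def simulate_change_scores(n, true_effect, reporting_bias):
    # change from baseline on a subjective stress scale:
    # real change + self-report shift + individual noise (all values invented)
    return [true_effect + reporting_bias + random.gauss(0, 6) for _ in range(n)]

# group sizes mirror the trial (36 vs 38); effects are made up
treated = simulate_change_scores(36, true_effect=0.0, reporting_bias=-6.0)
control = simulate_change_scores(38, true_effect=0.0, reporting_bias=+1.0)

def welch_t(a, b):
    # Welch's two-sample t statistic (unequal variances)
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(treated, control)
print(f"mean change, treated: {statistics.mean(treated):.1f}")
print(f"mean change, control: {statistics.mean(control):.1f}")
print(f"Welch t = {t:.2f}")  # a large negative t despite zero real effect
```

The point of the sketch is only that a subjective outcome plus an unblinded no-treatment control lets expectation effects masquerade as a treatment effect; no blinding, no protection.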
This whole thing could just be a bit of innocent fun, but I am afraid it is neither innocent nor fun; it is, in fact, quite serious. If we accept manipulated trials as evidence, we do a disservice to science, medicine and, most importantly, to patients. If the result of a trial is knowable before the study has even started, it is unethical to run the study. If the trial is not a true test but a simple promotional exercise, research degenerates into a farcical pseudo-science. If we abuse our patients’ willingness to participate in research, we jeopardise more serious investigations for the benefit of us all. If we misuse the scarce funds available for research, we will not have the money to conduct much needed investigations. If we tarnish the reputation of clinical research, we hinder progress.