
Dr David Tuller: More CBT Research from Sir Simon and Professor Chalder, Part 2

Countrygirl
Senior Member
Messages: 5,476
Location: UK
https://www.virology.ws/2020/09/20/...QejsumZrqL81UjFWCbHD0QxZu3kDvCNUwKuIyzMQw8LeQ

Trial By Error: More CBT Research from Sir Simon and Professor Chalder, Part 2
20 SEPTEMBER 2020

By David Tuller, DrPH

And now Professor Sir Simon Wessely has popped up again to present more misinformation in a paper produced along with his longtime King’s College London colleague, Professor Trudie Chalder. I wrote about this paper last month, when it appeared in a pre-print version after having been accepted for publication by the Journal of the Royal Society of Medicine.

The journal has now officially published the paper, and it remains the same unacceptable nonsense that it was in its pre-print days. As I noted previously, Professor Chalder, Sir Simon and their colleagues explicitly acknowledge that the study design does not allow for causal inferences–and yet they also argue the intervention “led to” beneficial outcomes. Apparently Sir Simon and Professor Chalder need lessons in basic scientific reasoning. Someone should have informed them that such conclusions cannot be drawn from the data collected and presented here.

It is disturbing that experienced investigators would make such an elementary mistake, and that none of the reviewers or editors who read the paper noticed or cared about these unjustified statements. This kind of mistake often reveals what the investigators believe, whatever the evidence actually indicates. Presumably their cronies–uh, colleagues–or whoever peer-reviewed the paper believe the same thing.

Moreover, the study suffered from significant rates of drop-out, which are not mentioned in the abstract. More than 30% of the participants did not complete any questionnaires at discharge and follow-up, but that figure seriously understates the rates of non-response on specific instruments. In some instances, more than half the respondents failed to provide data for a questionnaire at some assessment point.
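To make the arithmetic concrete, here is an illustrative sketch with invented numbers; these are not the study's figures, just a toy calculation showing why the overall non-response figure and the per-instrument figure can diverge.

    # Illustrative only: invented numbers, not the study's figures.
    # An overall "returned nothing" rate can understate how much data is
    # missing on any single questionnaire.
    n_patients = 100
    completed_nothing = 31        # returned no questionnaires at all
    completed_fatigue_scale = 45  # returned this particular instrument

    overall_nonresponse = completed_nothing / n_patients
    instrument_missing = 1 - completed_fatigue_scale / n_patients

    print(f"Did not return anything:    {overall_nonresponse:.0%}")  # 31%
    print(f"Missing on this instrument: {instrument_missing:.0%}")   # 55%

A reader who only sees the overall figure would assume far more complete data than the instrument-level numbers actually contain.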
 

Wishful
Senior Member
Messages: 5,751
Location: Alberta
I'm presently reading "Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics" by Gary Smith. It has plenty of examples similar to what Tuller wrote about. Cherry-picking data is quite common. Sometimes the researchers rely on unspoken, blatantly false assumptions, such as labelling the patients who dropped out of the study as 'cured'. The book has certainly raised my level of skepticism toward even peer-reviewed reports in prestigious journals.
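To see how much damage that one assumption can do, here's a toy simulation in Python (invented numbers, not from the book or any actual trial): patients who aren't improving drop out more often, the dropouts get coded as 'cured', and a mostly ineffective treatment ends up looking impressive.

    # Toy simulation, not from the book or any real trial: an ineffective
    # treatment looks good when non-improvers drop out and are coded as 'cured'.
    import random

    random.seed(0)
    n = 200
    truly_improved = 0
    reported_cured = 0

    for _ in range(n):
        improved = random.random() < 0.20                        # 20% truly improve
        dropped_out = (not improved) and random.random() < 0.50  # non-improvers often leave
        truly_improved += improved
        reported_cured += improved or dropped_out                # flawed bookkeeping

    print(f"True improvement rate: {truly_improved / n:.0%}")    # ~20%
    print(f"Reported 'cure' rate:  {reported_cured / n:.0%}")    # ~60%

On a typical run the true improvement rate stays near 20% while the reported 'cure' rate lands around 60%, purely because of how the missing patients were counted.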

I guess the present system offers too much reward for publishing something that looks good, and not enough reward for actual good work. :grumpy: