Ok, so I had to wade through the 477-page evidence report on which this report is based, because during one of the presentations I was watching live I noticed the meta-analysis forest plots looked suspect, but I couldn't take a screenshot in time.
In this meta-analysis, which covers one of the main outcomes underpinning the report's overall conclusions, they've compared CBT vs control on the SF-36 physical function subscale. Three studies - O'Dowd, Wearden (the FINE trial) and White (the PACE trial) - are included, each with a CBT arm and two control groups (see the footnotes in the graph). What they've done is enter the same CBT group from those studies twice, comparing it to each of the two control groups. You can't do that: they are the same participants featuring twice in one analysis. It's like taking two bites at the cherry.
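To make the double-counting concrete, here's a minimal sketch with made-up numbers (not the report's data) of what duplicating an arm does to a standard fixed-effect, inverse-variance pool: the twice-entered trial's rows together swallow most of the weight, because the shared participants are treated as if they were two independent samples.

```python
# Minimal sketch with hypothetical numbers (not the report's data): how entering
# the same treatment arm twice inflates one trial's weight in a fixed-effect,
# inverse-variance pool of mean differences. Each entry is (MD, SE).

def fixed_effect_pool(entries):
    """Return the pooled mean difference and each entry's share of the weight."""
    weights = [1.0 / se ** 2 for _, se in entries]
    total = sum(weights)
    pooled = sum(w * md for w, (md, _) in zip(weights, entries)) / total
    return pooled, [w / total for w in weights]

small_trial   = (8.0, 4.0)  # hypothetical small trial: MD = 8, SE = 4
big_vs_ctrl_a = (6.0, 2.0)  # big trial's CBT arm vs. control group A
big_vs_ctrl_b = (5.0, 2.0)  # the SAME CBT arm vs. control group B

# Entered once: the big trial carries 80% of the weight.
# Entered twice: its two rows together carry ~89%, because the shared CBT
# participants are being counted as if they were two independent samples.
for label, entries in [("entered once ", [small_trial, big_vs_ctrl_a]),
                       ("entered twice", [small_trial, big_vs_ctrl_a, big_vs_ctrl_b])]:
    pooled, shares = fixed_effect_pool(entries)
    print(f"{label}: pooled MD = {pooled:.2f}, weight shares = "
          + ", ".join(f"{s:.0%}" for s in shares))
```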
For a complex data structure like this you need more advanced statistical techniques to pool the data, because you have one treatment arm (in this case CBT) being compared against two control groups, and you can't just duplicate the CBT group: that massively inflates the contribution of that particular study to the overall effect size. A sensible option here would be to combine the two control groups into one and compare the merged control group to CBT. Or you could compare the CBT arm with the single control group that makes the most theoretical sense, preferably chosen a priori, before you've seen the data and had the chance to cherry-pick the control group that makes your treatment look better.
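Here's a rough sketch of the combine-the-controls option, again with hypothetical numbers: merge the two control arms into a single group using the standard formulas for combining Ns, means and SDs (the ones given in the Cochrane Handbook for multi-arm trials), then enter a single CBT-vs-control row for that trial.

```python
# Sketch of combining two control arms into one group before the comparison.
# All numbers are hypothetical; the arm labels are placeholders, not the trials' names.
from math import sqrt

def combine_groups(n1, m1, sd1, n2, m2, sd2):
    """Merge two arms into one group: pooled N, mean and SD
    (standard formulas for combining groups, as in the Cochrane Handbook)."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    # Combined variance accounts for within-group spread plus the gap between the means.
    var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2
           + (n1 * n2 / n) * (m1 - m2) ** 2) / (n - 1)
    return n, m, sqrt(var)

def mean_difference(n_t, m_t, sd_t, n_c, m_c, sd_c):
    """Unadjusted mean difference and its standard error."""
    md = m_t - m_c
    se = sqrt(sd_t ** 2 / n_t + sd_c ** 2 / n_c)
    return md, se

# Hypothetical SF-36 physical function results (n, mean, SD) for one trial.
cbt       = (150, 58.0, 24.0)
control_a = (150, 50.0, 25.0)
control_b = (150, 52.0, 23.0)

# Wrong: two rows sharing the same CBT arm (participants counted twice).
# Better: one row, CBT vs. the merged control group.
merged = combine_groups(*control_a, *control_b)
md, se = mean_difference(*cbt, *merged)
print(f"CBT vs merged control: MD = {md:.1f}, SE = {se:.2f}")
```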
The same problem appears in the meta-analysis of GET vs control, the other crucial outcome: the PACE GET arm is entered twice, compared to each of the two control groups in that trial.
Secondly, what's suspicious is that they don't report heterogeneity statistics: the methods section says they computed Q and I-squared, but I can't find the numbers anywhere in the report. Why not? That wouldn't be accepted in a peer-reviewed publication. Is it because there is significant heterogeneity, which would force them to do sensitivity analyses to figure out where it comes from? It looks to me as though the studies using the Oxford criteria show an effect while the others don't.
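For reference, Q and I-squared are trivial to compute once you have the per-study effect sizes and standard errors, which is what makes their absence so odd. A quick sketch with illustrative inputs (I don't have the report's per-study numbers, so these are made up):

```python
# Cochran's Q and I-squared from per-study effect sizes and standard errors.
# Illustrative inputs only; not the report's data.

def heterogeneity(effects, ses):
    """Return Cochran's Q, its degrees of freedom, and I^2 (fixed-effect weights)."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, df, i2

effects = [7.0, 6.5, 1.0, 0.5]   # e.g. two studies showing an effect, two showing little
ses     = [2.0, 2.5, 2.0, 2.5]
q, df, i2 = heterogeneity(effects, ses)
print(f"Q = {q:.1f} on {df} df, I^2 = {i2:.0f}%")
# A large I^2 is exactly the situation where you'd expect subgroup or sensitivity
# analyses (e.g. by case definition), not silence about heterogeneity.
```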
Oh and by the way, look at that ludicrous outlier, Deale 1997. LOL. Simon Wessely is the senior author on that paper. Obviously a bogus study never to be replicated.
[Attachment 9297: forest plot]
I don't know what the deal is with these governmental agencies, but incompetence in their literature reviews is rife, and I don't know whether they have statisticians involved or whether it's just non-expert bureaucrats and their research assistants plugging numbers into RevMan with no clue what they're doing. I wouldn't ascribe this to conspiracy, because I've seen worse than this in a report on another illness that isn't politically controversial. Back when I was still working I remember looking at an FDA report on a treatment for that illness, and the meta-analysis was faulty, laughable really.