This study will end up being a big problem for the ME community in the future, worse than the Lancet PACE paper. I apologise in advance for the long post.
Systematic reviews and meta-analyses of the literature are considered the gold standard of "evidence based medicine" by doctors and policymakers. If this meta-analysis shows that CBT/GET are effective treatments for CFS - and it almost certainly will, for reasons I explain below - it will be used to further entrench the pragmatic rehabilitative "treatment" approaches based on the biopsychosocial model. No one will read the actual paper to discover that the included studies used laughable inclusion criteria and dodgy self-reported outcome measures (questionnaires of subjective "fatigue"), and that where objective outcomes were used, there was no improvement in patients' actual physical activity and no reduction - indeed sometimes an increase - in reliance on state disability payments. The efficacy of GET will be enshrined in what we will be told in doctors' offices is the highest level of evidence in the hierarchy of evidence based medicine: a meta-analysis of individual patient data.
In normal circumstances, a meta-analysis of individual patient data (IPD) is the highest quality type of meta-analysis because it collects the actual raw data for each individual patient who took part in the primary studies and combines those patient-level data in one big analysis. This technique increases statistical power and improves the reliability of the findings. You might be surprised to hear that IPD meta-analyses are done infrequently. Most meta-analyses extract data from the published reports and use statistical techniques to pool the group means and standard deviations reported by the various studies (in other words, most meta-analyses collate statistical averages from the published papers). This approach has a number of limitations, of course, and various statistical techniques have been developed to deal with them. An individual patient data meta-analysis avoids many of these pitfalls, but it is not normally done because it requires obtaining the actual raw datasets from the researchers who conducted the primary studies, and researchers generally do not want to share their raw data with people doing meta-analyses. This group, however, is in a position to do an IPD meta-analysis since they carried out some (most?) of the studies that will be included, so of course they won't need to hand the datasets over to an independent group or rely on many other groups to send them the data.
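To make the distinction concrete, here is a minimal sketch in Python using entirely made-up numbers. The first half pools published group means and standard deviations with inverse-variance weights (roughly what a conventional meta-analysis does); the second half stacks the raw patient-level scores into a single test, a crude stand-in for a one-stage IPD analysis (a real one would at least stratify by study):

```python
# Toy comparison of aggregate-data pooling vs. pooling individual patient data.
# All numbers are simulated; this illustrates the two approaches, not any real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 5 small trials with a tiny true difference in a hypothetical fatigue score
true_effect = -0.5          # treatment lowers the score by half a point
sd = 5.0
studies = []
for n in [30, 40, 25, 50, 35]:            # patients per arm in each trial
    control = rng.normal(20.0, sd, n)
    treated = rng.normal(20.0 + true_effect, sd, n)
    studies.append((control, treated))

# --- Aggregate-data meta-analysis: pool published means/SDs with inverse-variance weights ---
diffs, weights = [], []
for control, treated in studies:
    d = treated.mean() - control.mean()
    se2 = treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control)
    diffs.append(d)
    weights.append(1.0 / se2)
diffs, weights = np.array(diffs), np.array(weights)
pooled = np.sum(weights * diffs) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print("Aggregate pooled difference: %.2f (SE %.2f)" % (pooled, pooled_se))

# --- Crude one-stage IPD analysis: stack every patient's raw score into one test ---
all_control = np.concatenate([c for c, _ in studies])
all_treated = np.concatenate([t for _, t in studies])
t_stat, p_val = stats.ttest_ind(all_treated, all_control)
print("IPD pooled t-test: t=%.2f, p=%.4f, n=%d"
      % (t_stat, p_val, len(all_control) + len(all_treated)))
```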
If IPD meta-analysis is the gold standard, why am I so worried about this being done on CFS? Well, this technique increases the statistical power to detect very small effect sizes, and when you collate data from something like 1000 patients, a trivial effect that no single study could reliably detect suddenly reaches statistical significance. This can turn a bunch of negative primary studies and a handful of weakly positive studies into a statistically significant pooled effect.
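To put a rough number on that, here is a back-of-the-envelope power calculation (two-arm, two-sided test at alpha = 0.05, standard normal approximation) for a trivial standardised effect of d = 0.15. The effect size and sample sizes are mine, chosen purely for illustration:

```python
# Rough illustration of how pooling raises statistical power for a trivial effect.
import math
from scipy.stats import norm

def power_two_sample(d, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample test for standardised effect d."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_effect = d * math.sqrt(n_per_arm / 2)
    return norm.cdf(z_effect - z_alpha) + norm.cdf(-z_effect - z_alpha)

d = 0.15   # a trivial standardised effect, well below usual clinical relevance
for n in [50, 100, 500, 1000]:
    print(f"n = {n:4d} per arm  ->  power ≈ {power_two_sample(d, n):.2f}")

# A single 50-per-arm trial will usually come out "negative" (power around 0.1),
# but stack roughly 1000 patients per arm and the same trivial effect becomes
# statistically significant most of the time.
```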
There are published examples of this in other fields. I remember reading this meta-analysis a few years ago and shaking my head.
Basically, a bunch of negative studies showing no effect of lamotrigine (an anticonvulsant drug) on depression in bipolar disorder suddenly becomes an effective treatment according to an individual patient data meta-analysis. How many individuals in possession of a prescription pad are capable of appraising this paper, or even know what a relative risk of 1.27 means? Needless to say, in the real world this treatment has no efficacy for depression for most people.
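For what it's worth, here is roughly what a relative risk of 1.27 means in absolute terms if you assume a 30% response rate without the drug - and that baseline is purely my assumption for illustration, not a figure taken from the paper:

```python
# Back-of-the-envelope conversion of a relative risk into absolute terms.
# The 30% baseline response rate is an assumed figure for illustration only.
baseline_response = 0.30            # assumed response rate without the drug
rr = 1.27                           # reported relative risk of response
treated_response = baseline_response * rr
ard = treated_response - baseline_response   # absolute risk difference
nnt = 1.0 / ard                              # approximate number needed to treat
print(f"Response: {baseline_response:.0%} -> {treated_response:.1%}")
print(f"Absolute difference: {ard:.1%} (roughly 1 extra responder per {nnt:.0f} patients)")
```

Even taking the pooled estimate at face value, the absolute benefit is modest - which is exactly the point: the statistics can be "significant" while the clinical effect stays small.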
Same with CBT/GET for "CFS". Some studies show a modest effect; others find it utterly useless, like the FINE trial, where, if I recall correctly, not even statistical significance could be detected, let alone clinical significance. This meta-analysis will collect all that trash in one heap and get an effect out of it thanks to statistical magic.
Cui bono?
We live in very strange times. Just the other day I watched the presentation Dr Van Ness recently gave in the UK, which justy posted in another thread. We now have objective physical evidence that aerobic exercise is useless or harmful for bona fide ME patients. In severe ME patients, as all of us who have been there know, exercise is quite literally torture. Yet we have these two paradigms coexisting at the same time. One, supported by psychiatry, psychology, the state and vested financial interests, is telling nurses and doctors to put their ME patients on exercise machines on the basis of some poor quality psychiatric studies. Meanwhile, parallel research is showing with objective tests like CPET that the aerobic system is literally broken in this illness. It's just surreal.
By the way, patients and doctors (doctors are not scientists, contrary to what many people think) often say we need big studies with lots of patients. Actually, big studies are only needed when a treatment is not very good and produces a small effect that is difficult to detect in a small sample (a back-of-the-envelope calculation below makes the point). Treatment harms are a different matter: you may not know a treatment is harmful until it's unleashed on the general population, i.e. large numbers of people - SSRI antidepressants, for example - because if the frequency of adverse events is low, a randomised controlled trial will often not be able to detect them. But when it comes to efficacy, for treatments that obviously work, like parachutes, which are known to prevent death and injury when jumping from heights (great parody in BMJ here), doing big multicentre randomised controlled trials is stupid, harmful and unethical.
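Here is that calculation: approximate patients needed per arm for 80% power at a two-sided alpha of 0.05, using the standard normal approximation. The effect sizes are illustrative only - d = 0.2 for a weak treatment, d = 2.0 standing in for a parachute-sized effect:

```python
# Rough sample-size calculation for a two-arm trial (two-sided alpha = 0.05, 80% power),
# using the standard normal approximation. Effect sizes are illustrative only.
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect standardised effect d."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) / d) ** 2

for d in [0.2, 0.5, 2.0]:
    print(f"effect size d = {d:.1f}  ->  ~{n_per_arm(d):.0f} patients per arm")

# d = 0.2 needs roughly 400 patients per arm; d = 2.0 needs about 4.
# Big trials are the hallmark of a treatment whose effect is too small to see otherwise.
```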
A treatment as worthless as CBT/GET needs a meta-analysis like this to show an effect.
"If a treatment has an effect so recondite and obscure as to require meta-analysis to establish it I would not be happy to have it used on me." - H.J. Eysenck