Actually PACE wasn't methodologically robust to start with - it was hopeless from the start. ... What I think will become clear from PACE is that adequate methodology for trials of psychological treatment simply does not exist at present.
There's certainly a lot in this, but I think it's overstated. Specifically, I think the PACE trial was capable of showing that CBT or GET worked - had they actually been effective - if it had been set up and, in particular, interpreted properly. There's an example below of what that could have looked like.
Self-reports are not useless (but need to be interpreted with caution)
First off, fatigue is an important problem for many of us, and at present it can only be measured subjectively by self-report (there are alternatives for physical function, which can be measured via actometers for activity levels, or even the six-minute walking test).
The problem with self-reports is that they are prone to response bias (patients unconsciously giving researchers the 'right' answer that things have improved), particularly in an unblinded trial without a meaningful control group, as was the case in PACE.
However, research and common sense indicate response bias is generally a small to modest effect (try this SF36 demo to see how you might tweak your answers to please: most answers are straightforward unless you lie, with just a few that could shift, eg from 'limited a little' to 'not limited at all'):
SF-36® Health Survey Scoring Demonstration - just answer Q3 on physical function, and click "score the survey" at the bottom of the page.
So if self-reports show more than a small-to-modest effect, they may well be showing something real.
In the case of physical function, the PACE trial had objective measures (walking distance and fitness). Crucially, these showed no change (or only a minor one), while self-reports showed clinically significant gains. This should have been a red light to the researchers. Self-reports need to be compared with objective measures where possible to check for signs of response bias; PACE failed to do this.
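The kind of cross-check I have in mind could be as simple as the sketch below: compare the self-reported gain against the matching objective gain and flag arms where they diverge badly, a pattern consistent with response bias. The function name, threshold and numbers are all hypothetical illustrations, not PACE data or any published method.

```python
# Hypothetical sanity check: does self-reported improvement far outstrip
# the matching objective measure? All values are illustrative percentages.

def flag_possible_response_bias(self_report_gain_pct, objective_gain_pct,
                                threshold=3.0):
    """Flag when self-reported gains far outstrip objective change."""
    if objective_gain_pct <= 0:
        # Any self-reported gain alongside flat/worse objective results
        return self_report_gain_pct > 0
    return self_report_gain_pct / objective_gain_pct > threshold

# Large self-reported gain alongside a near-flat walking-test result:
print(flag_possible_response_bias(20.0, 1.0))   # True: discordant, suspect
# Self-report and objective measure moving together:
print(flag_possible_response_bias(20.0, 18.0))  # False: gains broadly match
```

Nothing this crude would settle the question on its own, but running any check of this shape would have forced the discordance in PACE into the open.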
In fact, the null result at long-term follow-up (ie after the formal trial and therapy had finished, and participants were unlikely still to be influenced by researcher expectations) is exactly what you'd expect if one-year gains were due to response bias instead of real progress.
Had self-report gains been matched by objective measures for physical function, that would have been evidence of success, and evidence that the self-reports for fatigue were showing something useful.
Objective measures could have helped show success too
And of course, if CBT/GET had really worked, we'd have seen large gains in objective measures such as walking distance, fitness and employment hours/benefit status.
What a successful PACE trial would have looked like (for CBT/GET vs SMC)
- Large gains in self-reported fatigue and physical function
- Large numbers of patients rating themselves very much better
- Substantial gains in objective measures including fitness, physical function and employment/benefit rates
- Means for patients in the successful therapy groups moving towards those for healthy populations, with means for recovered patients matching those for healthy populations too
OK, so the trial data published so far shows PACE CBT/GET did not work, but I don't think that is a meaningless or uninterpretable finding. In fact, I think PACE is good evidence that CBT/GET aren't much use in ME/CFS.
Certainly it's much harder to set up and interpret a study using self-reports in unblinded trials, which applies to much psychological research. But I don't think it's by any means impossible, if researchers are aware of the pitfalls and interpret results accordingly.