Edit - Dolphin pointed this out:
Small correction: this isn't the main British Journal of Psychiatry:
Thanks - that's actually quite a big correction, and makes me feel much less positive about it.
Considering it was in the British Journal of Psychiatry, that comment was not as bad as I expected. I found the tone irritating when I first read it, and still did a bit on a re-read, but less so.
Really, the British Journal of Psychiatry should have published a piece saying "uh-oh, our President has been promoting quackery, and then smearing the patients who tried to point this out, and none of us noticed until some American academics took a look at the evidence for us". Submitting a piece like that might require more bravery than British academics are known for. I don't know how much leeway we should give UK psychiatrists for the dire situation of UK academic psychiatry, but I felt a bit more sympathetic on a second reading. It almost read like it was hinting that there was serious trouble here, without wanting to come out and say it.
Some notes, but I've not gone back to the Cochrane review to fact-check all the details, so this is just from memory and I may have missed stuff.
The main annoying thing was that it failed to properly address the problems with non-blinded trials relying on self-report outcomes. This is particularly important when, as part of a trial, patients are given 'empowering' models of illness and positive claims about treatment efficacy. There was this paragraph, but throughout most of the article it was written as if changes in self-report outcomes reflected real changes in health, ignoring the key reason for concern from patients:
It is important to remember that releasing
individual patient data does not correct any prior
methodological flaws – it simply opens the data up
for transparent re-interpretation. For example, in
this case it is also alleged that the investigators
(perhaps inadvertently) influenced participants’
self-reports with indiscriminate encouragement
in newsletters sent out during the trial. It is also
alleged that the investigators switched their own
scoring methods mid-trial.
PS: Not just alleged - they state that they did!
Other annoying things:
Good that the issue of objective outcomes was mentioned, but it fails to mention PACE's objectively measured fitness data, which showed GET was not associated with an improvement in fitness:
Few studies measured objective aerobic capacity
such as maximal oxygen consumption (VO2
max), although White et al (2011) did ask people
to undertake a 6-minute walking test in order to
examine real-world effects.
Given the problems with it, I think it was good that PACE attracted controversy, rather than being mindlessly accepted in the way that it was by so many of the thickos in British medicine.
Unfortunately, the PACE study has since attracted
huge controversy. Patient groups have long been
critical of the CFS concept (as well as CFS trials
in general), but the criticism of the PACE trial
came from both patients and professionals. The
matter could have been easily resolved if the
original authors had issued a prompt correction
or released suitably anonymised primary data.
Also, the professionals criticising PACE have recognised that it was patients who led the way - that leadership role does not exactly come across here, and generally I didn't like the way patients' concerns were written about, especially as I reckon half the members of this forum could tear apart some of this guy's claims in a debate. It was good to have the problems with data sharing mentioned, but releasing data could not have easily resolved the problems with the trial's design, and he does recognise this later on, in a section I already quoted:
It is important to remember that releasing
individual patient data does not correct any prior
methodological flaws – it simply opens the data up
for transparent re-interpretation.
Only some of PACE's data have been released (hence the new expression of concern at PLoS):
Under court order, the PACE study’s authors
finally were required to release their raw data in
September 2016 and it is now publicly available
-It largely fails to criticise the Larun review, which is odd given the problems the author recognises with PACE and the presentation of its results - problems that Larun dismissed even when they were raised with her in submitted comments. The comments from Courtney and Kindlon do a good job of showing the problems with the review, and Larun's responses are embarrassing.
Good things:
-Some small but important details were clarified that often are not, e.g. the limited nature of Specialist Medical Care in PACE: "In the PACE study, the control arm received specialist medical care alone (effectively, treatment as usual)".
-Mentions the lack of things like actometer data, and says this data would be useful.
This paragraph gets the figures right, and is a fair summary of things, although it would have been good to get an explanation of the problems with the earlier recovery criteria:
Independent re-analysis examined data for
recovery at the end of the trial and findings were
also disappointing (Matthees 2016). The recovery
rates using a priori thresholds were as follows: 3.1%
for specialist medical care alone, 6.8% for CBT,
4.4% for GET and 1.9% for adaptive pacing therapy,
with no significant differences between groups.
The PACE authors themselves maintained that
CBT and GET were associated with significantly
increased recovery rates of 22% at 52-week follow-
up, compared with only 8% for adaptive pacing
therapy and 7% for specialist medical care alone
(White 2013). Both reports were different from
the editorial claims that appeared in the BMJ at
the time of initial publication of the PACE study,
which suggested that 28–30% of patients recover
using CBT and GET (Knoop 2011).
He gets this right, and states it clearly:
Long-term follow-up at 2.5 years found that
any differences apparent between treatment arms
at 52 weeks were lost as adaptive pacing and
specialist medical care caught up with CBT and
GET (Sharpe 2015).
Here's the conclusion:
Conclusions
At face value the overall findings are that exercise
therapy is somewhat effective for CFS, particularly
when compared with treatment as usual, in that it
reduces symptoms at the end of therapy, possibly
with some sustained benefits. The hidden detail
is, as usual, rather more complicated. Exercise
therapy is probably the most effective of the
modalities studied in terms of daily function, as
measured by a walking test, but results are so
poor that, despite being statistically significant,
they are no cause for celebration. Recovery rates
are similarly disappointing. Independent re-
analysis of the PACE data found that only about
3% recover with standard medical care, which
tells us that standard medical care is not working
adequately for patients with CFS and we need to
re-examine why it is so ineffective. Only about
4–7% of patients recover in active treatment over
3–6 months, which is a significant improvement in
terms of relative risk, but not in terms of absolute
risk change.
Beyond the raw results this controversy has a
number of critical lessons. First and foremost, it
is imperative for researchers to publish studies in
the most open and transparent manner possible.
This may include responding to requests for
methodological clarification, requests for re-
analysis and even requests for primary data.
In some online journals there is actually a
requirement to release such data on request. It
is remarkable that it is still not normal practice
for researchers to reveal or copublish their actual
anonymised raw data. A second lesson is that
clinicians and researchers should work more
closely with patients in both study design and
study interpretation. Clinicians and academics
may not have the same views on what is and is not
acceptable therapy for patients. The third lesson
is that, to promote acceptability, psychosocial
treatments should be integrated into medical care.
In practical terms this means that patients should
be offered these options as an optional add-on
while in medical care, not as a way of discharging
patients perceived as difficult into a mental health
service. One major reason for low parity of esteem
is that physical concerns are overlooked in patients
with mental health complications. Many patients
with CFS need psychological support and, where
necessary, mental health input, but not at the
expense of thorough medical care.
Overall: a step in the right direction, and I now feel a confused mix of anger and gratitude, but we've got a long way to go.