I'm going to bring up a subject that has pretty well vanished from discussion, though it was certainly a topic early on. The proposal that got them funding included measuring each patient's total activity with actimeters, to see how this changed during and after therapy. This, being a genuinely objective measure, was made secondary to subjective questionnaire responses, and later dropped entirely, after the devices had been purchased.
There is one aspect of the proposed experiment that seems to have been lost in the shuffle: if you don't know a patient's total activity before and during therapy, you have no idea whether they are displacing other activity in order to participate in the program, as happened in other studies. It is possible the premise that GET involved an increase in activity was never met: patients may not have been experiencing an increase in exercise at all, merely a shift in how they spent a limited energy budget.
What I'm getting at here is not that they don't know whether total activity increased as a result of therapy, though that is a good separate question; it is that they don't know whether these therapies even represented an increase in patient exercise. If some patients displaced activity to participate while others did not, the number of patients actually subject to the experimental protocol was smaller than it appears. If all patients in the GET group expended the same total energy during therapy as patients in the other groups, there was no experiment at all, apart from exhortation.
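The displacement concern can be put in purely arithmetic terms. A minimal sketch, with invented numbers solely for illustration (nothing here comes from the trial's data):

```python
# Hypothetical illustration of activity displacement (all numbers invented).
# Suppose a patient has a fixed weekly energy budget of 100 arbitrary
# activity units, spent entirely on everyday activity before therapy.

baseline_total = 100  # total weekly activity before therapy

# During "graded exercise" the patient adds 20 units of prescribed
# exercise, but cuts 20 units of everyday activity to make room for it.
prescribed_exercise = 20
everyday_activity = baseline_total - prescribed_exercise

during_therapy_total = everyday_activity + prescribed_exercise

# An actimeter measuring total activity would show no increase at all:
# the "graded exercise" never graded total activity upward.
assert during_therapy_total == baseline_total
print(during_therapy_total)
```

On these assumed numbers, the questionnaire-visible part (prescribed exercise) rises while the actimeter-visible total stays flat, which is exactly the case the dropped measurements could have detected.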
This should be a concern in a scientific paper, even if the statistical results were not already so weak that all positive results could be determined by a handful of patients selected from a huge pool of potential participants. I've said before that the absolute numbers of responding individuals were so small that modest diagnostic errors, putting a few patients with real psychological problems in with a larger number of patients with a physiological problem, could account for everything. Some of the same authors published the claim that other NHS doctors made diagnostic errors 30% of the time with respect to CFS, while implicitly claiming zero diagnostic error for themselves. I would expect this to provoke outrage from the maligned professionals.
Aside from those patients deliberately excluded as having physiological illness, we have very little evidence to back the tacit claim that the cohort of CFS patients they were dealing with, including those who never entered the study, carried uniform diagnoses. With the concern I voice above, we are also left wondering how many of those in the study were actually exposed to graded total activity. If you don't even know what went into an experiment, how can you make sense of what came out?