I think an important point has slipped by. The PACE authors referenced the Oxford definition, and more or less followed it in what they published, but also claimed their results applied to patients diagnosed by other criteria, including the CCC for CFS and the ICC for ME. Of the 3158 patients referred for treatment by PCP/GP doctors, they excluded 2260. Later in the trial they modified the entrance requirements in a way that permitted more patients to enroll from that date on.
All this reveals that the criteria used by the PACE study were considerably different from those understood by the referring physicians. We also have the problem that Oxford criteria with massive exclusions, plus later-loosened requirements, are not the same as the Oxford criteria themselves. Detailed analysis of the course of diagnosis and exclusion, where possible, indicates this study used ad hoc criteria to define patients suitable for study that do not exactly match any published criteria.
At this point we come to the great bait-and-switch: the PACE authors then claimed their results applied to all patients diagnosed with CFS or ME by any criteria. Their statements about using strict diagnostic criteria notwithstanding, those authors made repeated public statements implying their results applied to all patients considered CFS patients by GP/PCP doctors. They also repeatedly called this a "massive study" with 641 patients, exploiting the fact that most news media would not realize that, with four arms, roughly three-quarters of the patients in the study did not receive any given therapy, and no patients received the touted combination of CBT+GET.
Coupled with the change in metrics during the study, this produced a mismatch that is comical. By the PACE authors' own original protocol, applied in the reanalysis paper by Wilshire, Kindlon, Matthees and McGrath, we discover that GET may have produced useful results in just 2 more patients than the specialist care the authors treated as a control. No competent researcher is going to claim statistically valid results based on a couple of individuals. This low rate of benefit, by the original protocol, makes concerns about diagnostic criteria vitally important, because even a tiny rate of misdiagnosis could completely confound the results.
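To see how fragile a 2-patient difference is, here is an illustrative back-of-envelope calculation (my own sketch, not from the reanalysis paper), assuming the 641 enrolled patients were split roughly evenly across the four arms:

```python
# Back-of-envelope: compare a small misdiagnosis rate against the
# 2-patient difference between arms reported under the original protocol.
TOTAL_ENROLLED = 641              # patients across all four PACE arms
ARM_SIZE = TOTAL_ENROLLED // 4    # ~160 per arm (assumes roughly equal arms)
OBSERVED_DIFFERENCE = 2           # extra "improved" patients in the GET arm

for rate in (0.01, 0.02, 0.05):   # hypothetical misdiagnosis rates
    misdiagnosed = ARM_SIZE * rate
    print(f"misdiagnosis rate {rate:.0%}: ~{misdiagnosed:.1f} patients per arm")
```

Even a hypothetical 2% misdiagnosis rate yields about 3 misdiagnosed patients per arm of ~160, which already exceeds the 2-patient difference; diagnostic error of that size could account for the entire observed effect.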
Did the authors claim they could not have misdiagnosed 2 patients out of 3158? If so, where are the published criteria they used? What evidence do they present for such a phenomenally low rate of diagnostic error in a particularly controversial illness, when some of them have published claims that other doctors are in error 30% of the time? They are implicitly claiming an accuracy that pathologists examining samples under a microscope do not achieve.
This is not merely a single error among a plethora of questionable practices; it is an internal inconsistency in the argument that patients not included in the study would also benefit from the touted therapies. You cannot simultaneously claim to have applied very narrow diagnostic criteria, as shown by the exclusions, and a very broad interpretation of results extending to patients who don't meet those narrow criteria.