• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


New criticism of PACE

anciendaze

Senior Member
Messages
1,841
It is very interesting that no patients received the combination of GET and CBT, yet this is the recommended treatment combination, so to speak.
How do you conclude that 3/4 patients did not receive any given therapy?
There were four arms to the study with different therapies in each. Choose any given therapy for one arm, the remaining 3/4 of the study cohorts did not receive it.

(This does run into a problem if you consider specialist medical care a therapy, not a control. The PACE authors were not consistent about this.)

This is evidence that the constant emphasis on the "massive study involving 640 patients" was being used to deliberately mislead journalists. Had the authors intended to give accurate information they could have talked about approximately 160 patients in particular arms receiving a given therapy. They did not for the excellent reason that even statistically-naive journalists might have realized that the number of positive results was small enough to be influenced by a handful of individual cases. Talking in terms of percentages fuzzes the effect of individuals, and adding decimal points gives a spurious sense of scientific rigor and accuracy.

When we go back and use the original protocol, as far as possible, to count "recovered" patients, we confront this directly when we see 5 "recoveries" in the "control" group versus 7 in the "GET" group. (You may count these differently, but you will still be talking about small numbers.)
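
Those counts can be tested directly. As a sketch using the figures above (7 of ~160 "recovered" with GET versus 5 of ~160 under SMC), a two-sided Fisher exact test, written here against only the Python standard library, gives a p-value far from any conventional significance threshold:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1 = a + b          # first row total (e.g. the GET arm)
    c1 = a + c          # first column total (all "recoveries")
    def p_table(x):     # hypergeometric probability of x recoveries in row 1
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    # Sum over all tables at least as extreme (probability <= observed)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# 7/160 "recovered" with GET vs 5/160 with SMC (counts from the post above)
p = fisher_exact_two_sided(7, 153, 5, 155)
print(f"p = {p:.2f}")  # well above 0.05: indistinguishable from chance
```

Which is the point being made: at these counts, a difference of two "recoveries" is well within what chance alone produces.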

Here I want to mention that this also runs into the controversy over "harms". If you deliberately exclude those made worse, then random variation will offer you the opportunity to claim benefits from interventions that are completely worthless. The catch the PACE authors introduced, beyond the first protocol, was the distinction between reporting "adverse events" and "adverse responses".

What I see is the number of "adverse events" approximately doubling in the GET group, while "adverse responses" remained the same. What is the difference? The PACE authors investigated the "adverse events" and concluded most were not the result of the therapy involved. If we only wanted to sample the authors' opinions we could have saved millions of pounds.

I believe we have seen two patients from the GET arm of the study come forward on this forum and say they were made worse. (Somebody correct me if this is not so.) If this is so, we have good reason to believe the harms resulting from GET balanced the recoveries in the actual PACE cohort. This would mean there was a null result being reported as positive due to "reporting error".
 

Barry53

Senior Member
Messages
2,391
Location
UK
There were four arms to the study with different therapies in each. Choose any given therapy for one arm, the remaining 3/4 of the study cohorts did not receive it.

(This does run into a problem if you consider specialist medical care a therapy, not a control. The PACE authors were not consistent about this.)

This is evidence that the constant emphasis on the "massive study involving 640 patients" was being used to deliberately mislead journalists. Had the authors intended to give accurate information they could have talked about approximately 160 patients in particular arms receiving a given therapy.
I was about to reject what you say, but now a rethink.

I suppose PACE could be considered to actually have been 3 trials being run in parallel:-

SMC/APT
SMC/GET
SMC/CBT

And that the PACE authors have made it look like a single trial. Is this what you are getting at?

I do not know what is the norm for clinical trials, and what does and does not legitimately count as a single trial. But an interesting notion. And if you were to push this observation to the limit, you could extend the 'principle' and they could have run 639 different trial arms, with just one control, and by PACE's logic still have called it a big trial!
 

JoanDublin

Senior Member
Messages
369
Location
Dublin, Ireland
This is difficult to fully understand but seems another nail in the coffin for PACE. If possible, Sonia Lee should talk through, or give a written commentary on, her slides, as it would be a useful addition to the PACE debate.


She has mentioned on Twitter that there is a full report coming as soon as she gets the time to do it. Maybe a thank you to her on Twitter would help her to find that time as soon as possible? She's @openmylab on Twitter
 

anciendaze

Senior Member
Messages
1,841
I was about to reject what you say, but now a rethink.

I suppose PACE could be considered to actually have been 3 trials being run in parallel:-

SMC/APT
SMC/GET
SMC/CBT

And that the PACE authors have made it look like a single trial. Is this what you are getting at?
...
The clue for me about deliberately misleading journalists was the number of these who went away convinced that 640 patients received CBT+GET, a combination that was not tested at all.

I also went through a long period of trying to figure out where they were getting numbers like a 60% success rate. Until people here who had gone into the details corrected me, I had rejected as absurd the idea that they were merely reporting the number improved, since this would imply that their control arm showed about a 45% success rate. It is not at all unlikely that a null result with random variation will show 45% improved, 45% deteriorated and 10% unchanged. This kind of argument would show that SMC was far more cost-effective than the touted therapies, a conclusion the authors did not want.
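
A toy simulation makes the point (everything here is illustrative; the noise scale and the "improved" cutoff are invented, not PACE parameters): give every simulated patient a change score that is pure symmetric noise and count who crosses the cutoff.

```python
import random

random.seed(0)
n = 160          # roughly one trial arm
threshold = 1.0  # hypothetical "clinically improved" cutoff

# Null model: each patient's change score is symmetric noise around zero,
# i.e. the therapy does nothing at all.
changes = [random.gauss(0, 5) for _ in range(n)]
improved = sum(c > threshold for c in changes) / n
deteriorated = sum(c < -threshold for c in changes) / n
print(f"improved: {improved:.0%}, deteriorated: {deteriorated:.0%}")
```

Even with zero true effect, roughly two-fifths of the cohort lands on each side of the cutoff, which is why "X% improved" means nothing without the control arm's figure.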

Journalists were also misled about the massive selection effects in this trial. Excluding 2260 patients out of 3158 shows that most of what GP/PCPs consider "CFS" does not meet the implicit PACE criteria for the illness. They were treating something different from what appears in most clinical practice.

In concrete terms they showed that if you send them 790 patients (1/4 of 3158), and allow them to select 160 they wish to treat, they can achieve 7 "recoveries" via GET instead of 5 -- if you assume nobody was made worse. This is a complete rejection of the favored research hypothesis by the original protocol the authors proposed. Even that is not entirely true, because they simply dropped the objective measures which might show if patients were displacing activity to participate. This means they have no idea if patients were actually increasing total activity during the trial, let alone afterward.
 

user9876

Senior Member
Messages
4,556
I was about to reject what you say, but now a rethink.

I suppose PACE could be considered to actually have been 3 trials being run in parallel:-

SMC/APT
SMC/GET
SMC/CBT

And that the PACE authors have made it look like a single trial. Is this what you are getting at?

I do not know what is the norm for clinical trials, and what does and does not legitimately count as a single trial. But an interesting notion. And if you were to push this observation to the limit, you could extend the 'principle' and they could have run 639 different trial arms, with just one control, and by PACE's logic still have called it a big trial!

It is a single trial with multiple hypotheses; as such, corrections should be applied to the significance tests.
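
That point can be made concrete with the simplest such correction, Bonferroni (the p-values below are made up for illustration; they are not PACE results, and PACE's own analysis plan may have specified a different adjustment):

```python
# Bonferroni correction: k comparisons share one family-wise error rate.
alpha = 0.05
k = 3  # SMC vs APT, SMC vs GET, SMC vs CBT
adjusted_alpha = alpha / k  # each test must now clear ~0.0167, not 0.05

hypothetical_p = {"APT": 0.04, "GET": 0.03, "CBT": 0.01}  # illustrative only
significant = {arm: p < adjusted_alpha for arm, p in hypothetical_p.items()}
print(adjusted_alpha, significant)
```

A result that looks "significant" at the naive 0.05 threshold can stop being so once you account for testing three therapies against the same control.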
 

anciendaze

Senior Member
Messages
1,841
It is a single trial with multiple hypotheses; as such, corrections should be applied to the significance tests.
In a purely statistical sense you are correct. The problem I'm more concerned about is misrepresentation of results to people who were not going to dig deep into the statistical details and make critical appraisals. The long series of methodological errors is hard to explain as anything except deliberate obfuscation. Shielding detailed data from critical scrutiny was essential to this scheme.

Another example turned up when they quoted gains in the six-minute walk test for the GET group, but ignored the fact that such gains were too small to be clinically significant (even in patients with heart failure), and came from a test that patients could decline while still being counted as part of the trial. The "step test", which was required of all, showed no such gains, and this result was withheld for quite a while. It looks to me like the people who felt worse after GET declined the test, and were not included in the data, while those who felt better were included. Claiming gains while ignoring losses is a standard way to produce a signal when the data are pure noise.

It was when I realized this that I finally understood that the result was a complete zero. What they showed instead was that if you argue with people for a year, and bombard them with propaganda, you can shift opinions without bestowing any objective benefit whatsoever. This is not something that needs to be scientifically demonstrated again, charlatans do it all the time. Even that shift of opinion did not persist.
 

Woolie

Senior Member
Messages
3,263
I was about to reject what you say, but now a rethink.

I suppose PACE could be considered to actually have been 3 trials being run in parallel:-

SMC/APT
SMC/GET
SMC/CBT

And that the PACE authors have made it look like a single trial. Is this what you are getting at?

I do not know what is the norm for clinical trials, and what does and does not legitimately count as a single trial. But an interesting notion. And if you were to push this observation to the limit, you could extend the 'principle' and they could have run 639 different trial arms, with just one control, and by PACE's logic still have called it a big trial!
This is a good thing. The multiple arms provide additional controls for one another, e.g., APT provides a good control for some of the things missing in the control condition (face time, the patient-therapist relationship, etc.).

The numbers of participants for each trial were not picked at random, but were chosen based on a power analysis (using assumptions from previous trials about the percentage likely to improve). So if there'd been only two arms to the trial, their power analyses would have required them to use 320 patients - if you get my meaning.
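
The standard calculation behind that can be sketched as follows, using the usual normal-approximation formula for comparing two proportions (the improvement rates below are hypothetical placeholders, not the figures PACE actually assumed):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Patients per arm needed to detect a difference between two
    proportions (standard normal-approximation sample-size formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_treatment * (1 - p_treatment))) ** 2
    return ceil(numerator / (p_treatment - p_control) ** 2)

# Hypothetical rates: 25% improve on SMC alone vs 40% with an added therapy.
n = n_per_arm(0.25, 0.40)  # roughly 150 patients per arm
```

With these made-up rates the formula lands near 150 per arm, the same ballpark as PACE's ~160 per arm; the point is that arm sizes follow from the assumed effect size, not the other way round.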

PACE is shit, but this isn't where the shit is.

@anciendaze, in their post above, describes just a few of the shitter aspects (nothing happened on the fitness measures, for example).
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
I wasn't keen on this presentation. It doesn't have a lot of focus, just tries to smear everything. It must be a Masters project or something. It looks laid out as if the author used some sort of textbook or intro chapter about "good practice", then just went down the list, looking for as many possible violations as they could.

It also didn't feel like an objective piece, it came over as though the author was personally motivated to find as many faults as possible, without any attempt to evaluate their importance to the overall conclusions. So that will also detract from its impact.

Yes, good points.

Focus is important, otherwise it comes across as a Malcolm Hooper style nitpicking exercise that no one ever reads.