
PACE Trial and PACE Trial Protocol

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
I think we have two problems here. One is, as you say, due to the poor definition of GET. It is likely that GET was applied with far more sensitivity in the PACE trial than most people experience it through ME centres and the like.

The other is that CBT and GET would benefit a number of healthy people that I know, as they would a wide variety of people with different illnesses. My guess is that people with ME would show less improvement than a random sample of healthy people, simply because I do not believe that these therapies are relevant to ME, and can actually do harm. The problem with measuring the effect, though, is that many measurements are designed for measuring the ill, and healthy folk tend to clump at the top end. Effects on the six-minute walking test would be interesting though. Would healthy folk show a greater improvement after a year of walking each day than people with ME? Is that an obvious question? Is it obviously an obvious question?
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
If some patients found a treatment helpful but a significant proportion found it harmful, then approval would be hard, depending on the relative amounts of harm vs helpfulness. They have failed to identify subgroups that fit either camp; if that were the case for a drug, I'm not sure it would be approved. They seem to suggest that harm is down to the way GET is delivered, but if a protocol is not well enough defined to be safely reproduced then it is not safe (or not a protocol), and hence shouldn't be rolled out.

This got me thinking that, like you say, AfME seem to be assuming that if GET is given sensitively, by an insightful and flexible therapist, it will not be harmful. But we don't actually have the deterioration rates for the PACE trial yet, which might demonstrate that a significant minority of patients deteriorated when treated with GET, even in a clinical trial setting, and with an adaptive pacing-like approach to setting a baseline and responding to setbacks etc.
 

user9876

Senior Member
Messages
4,556
This got me thinking that, like you say, AfME seem to be assuming that if GET is given sensitively, by an insightful and flexible therapist, it will not be harmful. But we don't actually have the deterioration rates for the PACE trial yet, which might demonstrate that a significant minority of patients deteriorated when treated with GET, even in a clinical trial setting, and with an adaptive pacing-like approach to setting a baseline and responding to setbacks etc.


I think AfME in their latest paper (which I've not yet finished reading) are assuming that RCTs provide unquestionable evidence, and hence are looking for reasons why the treatments don't work in practice. Here they look for what might be going wrong in practice and seem to be blaming GPs and untrained analysts for failing to use a proper protocol. They imply that the NICE guidelines provide a clear protocol to follow (I've not checked yet), but they fail to test their assertion. Their unjustifiable assumptions about the infallibility of RCTs and the clarity of the NICE guidelines lead them to blame some of those implementing GET.

Looking at their positive comments, most seem to be about finally finding someone who understands their condition. I wonder whether a large amount of the success relates simply to not having a doctor constantly dismiss all symptoms, rather than to any substantial improvement. The negative comments suggest a lack of sympathy from many therapists, but they never question why, or whether this is a function of the individuals or of the media and other coverage of the RCTs.

From the RCT perspective I would have two theories. Firstly, that they are just measuring a placebo effect, which is different for different types of therapies but doesn't lead to objective improvements. Secondly, I wonder if they set out not to increase activity much with GET, so that they minimised the chances of adverse reactions, given their desire to demonstrate the protocol's safety. I don't think that they recorded how much patients were told to increase their activities, and they didn't report any such figures. Hence we don't really know how their protocol was implemented.
 

biophile

Places I'd rather be.
Messages
8,977
I just love this bold-faced "fact" [cough] in the GET participant manual (in bolded text too):

"The evidence we have is in fact the opposite: there is no evidence to suggest that an increase in symptoms is causing you harm. It is certainly uncomfortable and unpleasant, but not harmful."

Page 79.

http://www.pacetrial.org/docs/get-participant-manual.pdf

The PACE Trial manuals are short on references for any "evidence" mentioned. There is no objective evidence that patients can safely increase total activity levels anyway. As Valentijn pointed out earlier, the available evidence actually suggests no such increases occur. It is also quite cavalier to claim symptom exacerbations are safe per se.

Later on in the advice for relatives, friends, carers, etc: "It is however vital that you do not encourage people to push themselves past their limits. Despite being tempted to increase what you are doing by a small amount, it can prove severely detrimental in patients with CFS/ME."

So, severely detrimental AND not harmful?
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
"We need the full methods and results on all the trials that have been conducted on all of the treatments that we use today, to make informed decisions as doctors and patients."

Nice to see Dr Ben Goldacre supporting our requests for trial data (I said without a hint of cynicism) (short BBC video clip):
http://www.bbc.co.uk/news/uk-politics-22957195.

Here's a further discussion, if you can access BBC programs from your location (watch @1.16.08):
http://www.bbc.co.uk/iplayer/episode/b030l0ks/Daily_Politics_19_06_2013/
 

Dolphin

Senior Member
Messages
17,567
http://www.meassociation.org.uk/?p=15790

Forward ME Group | Minutes of meeting in House of Lords | 14 May 2013

These minutes may also be read at www.forward-me.org.uk/14th%20May%202013.htm

FORWARD-ME

Minutes of the meeting held in the television Interview Room, House of Lords

Tuesday 14 May 2013

Present:

Countess of Mar (Chairman), Janice Kent (reMEmber), Bill Kent (reMEmber), Jane Colby (Tymes Trust), Anita Williams (Tymes Trust), Christine Harrison (Brame)

ETA: A thread has now been set up on this at: http://forums.phoenixrising.me/index.php?threads/pace-trial-and-press-complaints-commission.23921
 

Dolphin

Senior Member
Messages
17,567
I'm just writing something I'd like to post today. Can anybody direct me to where it was said that the PACE Trial investigators had been shown the Bleijenberg & Knoop editorial before it was published?
 

biophile

Places I'd rather be.
Messages
8,977
I'm just writing something I'd like to post today. Can anybody direct me to where it was said that the PACE Trial investigators had been shown the Bleijenberg & Knoop editorial before it was published?

I vaguely recall Hooper first making that claim, but I did a quick search and the first thing I found was recent statements by the Countess of Mar writing to White. Do a text search for "approve" on that thread to find it:

http://forums.phoenixrising.me/inde...-and-prof-white-and-prof-sir-s-wessely.21545/

Edit (emphasis mine):

...

That you made no attempt at correcting the misinformation in The Lancet Comment by Bleijenberg and Knoop is not surprising, given that the Deputy Editor of The Lancet has confirmed that you approved it before publication: “The Comment in question was reviewed, as is our standard practice, by the authors of the accompanying PACE trial” (letter dated 22nd January 2013 from Dr Astrid James to the Press Complaints Commission).

The Deputy Editor goes on to state about my complaint to the Press Complaints Commission concerning the Comment: “We would like to reject this complaint in the strongest possible terms. We believe there are no inaccuracies….We have shared the complaint with Dr Bleijenberg and Dr Knoop and they stand by the content of their published Comment….They stand by their use of the term ‘recovery’….We stand by our publication of the Comment by Dr Bleijemberg and Dr Knoop, and have found no inaccuracy that warrants a correction. We hope that our response is clear”.

This is in stark contradiction to the email sent on 8th June 2011 by Zoe Mullan, Senior Editor at The Lancet, who confirmed about the Bleijenberg and Knoop Comment that it should be withdrawn: “Yes, I do think we should correct the Bleijenberg and Knoop Comment, since White et al explicitly state that recovery will be reported in a separate report. I will let you know when we have done this”. Despite Zoe Mullan’s assurance, it has not been corrected.

Such contradiction by The Lancet reflects badly on the editorial staff.

What you reported in The Lancet article was not “recovery” statistics but the number of participants who fell within your own (artificially low) definition of the “normal range” for fatigue and physical function.

In your letter published in The Lancet on 17th May 2011 you clarified that no recovery results had been published.

Why, then, did you approve publication of the Comment? That Comment said: “Both graded exercise therapy and cognitive behavioural therapy assume that recovery from chronic fatigue syndrome is possible and convey this hope more or less explicitly to patients….Have patients recovered after treatment? The answer depends on one’s definition of recovery (quoting the paper you co-authored with Bleijenberg and Knoop: Psychother Psychosom 2007:76:171-176). PACE used a strict criterion for recovery: a score on both fatigue and physical function within the range of the mean plus (or minus) one standard deviation of a healthy person’s score. In accordance with this criterion, the recovery rate of cognitive behavioural therapy and graded exercise therapy was about 30%....the PACE trial shows that recovery from chronic fatigue syndrome is possible”.

Bleijenberg himself regards an SF-36 score of 65 as representing severe impairment (BMJ: 2005 January 1; 330: (7481):14), yet in the Comment he implicitly accepts a score of 60 (five points worse) to equate with “recovery”.

As you approved the Comment before publication, was it not an omission on your part not to inform Bleijenberg and Knoop (and The Lancet) of their error?

The whole point is that PACE participants did not fulfil your “strict definition for recovery” (which you abandoned) and the SF-36 measure was not plus or minus one standard deviation of a healthy person’s score; you yourself conceded in your letter to The Lancet that you used the mean of an English adult population (not a working age population as you claimed in your Lancet report). This distinction is important because an English adult population includes elderly people and individuals with chronic illness so your comparison for recovery was not, as Bleijenberg and Knoop state, relative to a healthy person’s score. By not comparing with a healthy person’s score (but with the average that included elderly and the chronically sick), you increased the likelihood that PACE participants’ scores would reach your re-defined “normal range” on conclusion of the trial.

...
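To make the arithmetic behind that "normal range" objection concrete, here is a minimal sketch in Python (the reference-population means and standard deviations below are illustrative assumptions, not the trial's published figures) of how a mean-minus-one-standard-deviation threshold falls when the reference sample includes elderly and chronically ill respondents rather than only healthy, working-age people:

    def normal_range_threshold(mean, sd):
        """Lower bound of a PACE-style 'normal range': mean minus one standard deviation."""
        return mean - sd

    # Illustrative (assumed) SF-36 physical function norms. A general adult
    # sample, which includes elderly and chronically ill respondents, has a
    # lower mean and a wider spread than a healthy working-age sample would.
    reference_populations = {
        "general adult population (assumed)": (84, 24),
        "healthy working-age population (assumed)": (93, 13),
    }

    for label, (mean, sd) in reference_populations.items():
        print(f"{label}: threshold = {mean} - {sd} = {normal_range_threshold(mean, sd)}")

With the general-adult style figures the cut-off lands at 60, the score the letter contrasts with Bleijenberg's own description of 65 as severe impairment, whereas a healthy working-age reference would set a markedly higher bar. That is the substance of the objection: the choice of comparison population, not patient improvement, does much of the work.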
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Am I right in thinking that we should get the long-term PACE trial data this year? I've forgotten - should it include the long-term follow-up improvement rates? If so, it should be interesting. Anyone want to hazard a guess what the long-term improvement rates will be for CBT and GET?
 
Messages
13,774
Am I right in thinking that we should get the long-term PACE trial data this year? It should be interesting. Anyone want to hazard a guess what the long-term improvement rates will be for CBT and GET?

Anyone know if they're allowed ongoing contact with participants, other than just sending out questionnaires?

eg: What about the patient that they pull out to provide anecdotal evidence of how wonderful CBT is?

There was a patient mentioned in the physiotherapy puff-piece who talked about now running their own disability advice business. Could they be 'supported' in that by those involved with PACE?
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Ten Top Tips For Reporting On Clinical Trials
by Ruth Francis, Head of Communications for BioMed Central and Springer, UK.
Credit for list also given to: @oh_henry @bengoldacre @senseaboutsci @PaulGThorne

http://storify.com/NeuroWhoa/ten-top-tips-for-reporting-on-clinical-trials

Items #2, #3, #8 seem particularly relevant...

2/10 Is primary outcome reported in paper the same as primary outcome spec in protocol? If no report maybe deeply flawed

3/10 Look for other trials by co or group, or on treatment, on registries to see if it represents cherry picked finding

8/10 Be precise about people/patient who benefited – advanced disease, a particular form of a disease?
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
Do You Know What’s Good For You?

By Neuroskeptic | July 9, 2013 5:16 pm

This post draws on the results of the controversial PACE Trial (2011), which compared the effects of four different treatment regimes for chronic fatigue syndrome (CFS).

However, this post isn’t about CFS. What interests me about PACE is that it illuminates a general psychological point: the limited nature of self-knowledge.

Patients in PACE were randomized to get one of four treatments. One was called APT. People randomized to this therapy did no better, in terms of symptoms, than people assigned to the “do nothing in particular” control condition, SMC.

However, people on APT said they were much more satisfied with their treatment than did the ones on SMC (85% satisfied vs 50%).

Two other conditions, CBT and GET, were associated with better symptom outcomes than the other two. People in these groups were satisfied – but no more so than the people on APT, which, remember, was quite a bit worse.

Here are the symptom scores (chart not reproduced here): APT was close to the dotted line, meaning no effect.

So satisfaction was unrelated to efficacy – even though it was the very same people judging both. The main symptom outcomes were self-rated.

“How could anyone feel equally satisfied with a treatment that works and one that doesn’t work?”, you might ask. But no-one got the chance to do that, because no patient was in a position to compare two treatments. They got one each.

So they had no independent yardstick against which to measure the treatment they got. All they had was their own mental yardstick: their perceptions and expectations of what a satisfactory treatment should do (and if it does less, it’s unsatisfactory).

People in the APT arm evidently had a different mental yardstick to those in the two more effective treatment conditions, because they were all equally satisfied despite different outcomes. Why that might be is another story.

We all have mental yardsticks as to what we ‘should’ feel, what is ‘normal’ as opposed to ‘too much’ or ‘too little’ in different situations.

They’re the barely-acknowledged foundation stone of modern psychiatry: psychiatrists use theirs to judge patients’ minds, and patients use theirs to judge their own.

But where do these yardsticks come from?

And should we trust them?
 
Messages
15,786
Neuroskeptic said:
“How could anyone feel equally satisfied with a treatment that works and one that doesn’t work?”, you might ask. But no-one got the chance to do that, because no patient was in a position to compare two treatments. They got one each.

So they had no independent yardstick against which to measure the treatment they got. All they had was their own mental yardstick: their perceptions and expectations of what a satisfactory treatment should do (and if it does less, it’s unsatisfactory).
Sounds like Neuroskeptic couldn't be bothered to read the actual recovery paper, because it's clear in that one that all of the yardsticks are based entirely upon their perceptions - there were no objective outcomes.

Edit: despite that lack of objective outcomes, he seems to take the statement that CBT/GET patients did "better" at face value.
 

biophile

Places I'd rather be.
Messages
8,977
It appears that Neuroskeptic is questioning the dependence on subjective measures in general? In a world where it has basically become politically incorrect for some to significantly criticize the questionable methodology employed in CBT/GET research and its claims of efficacy/safety, lest one risk being painted as an extremist for denying the evidence, Neuroskeptic, three years after the first paper was published, just barely manages to scratch the surface of the problems with PACE.

Other less subjective "yardsticks" included walking test distance and data on employment and welfare. These outcomes did not match the subjective improvements either. CBT/GET proponents have generally downplayed the importance of objective measures, especially when there has been a null result (think of Nijmegen CBT school and actigraphy).

Neuroskeptic does raise an interesting point: why is satisfaction higher in the (adjunctive) APT group than in the SMC (alone) group despite no advantage? The APT group did receive more attention, about the same as the CBT and GET groups, which may have biased the satisfaction results for all three therapy groups irrespective of actual improvements.

For some reason this issue reminded me somewhat of Nijmegen CBT school: "Our findings suggest that cognitive behavioral interventions for CFS need to change the illness perception and beliefs of their patients in order to be effective." http://www.ncbi.nlm.nih.gov/pubmed/22469284

I did a quick search for Neuroskeptic and PACE CFS and found this Twitter message from 10th June 2012:

"Last year the PACE trial of treatments for chronic fatigue syndrome/ME created controversy. Patient group critiques it: http://evaluatingpace.phoenixrising.me/summary.html"
 
Messages
13,774
I didn't think that there was anything really wrong with the Neuroskeptic piece, other than the fact that he clearly hadn't done much reading around the topic, so was just focusing on one little bit of data that they found interesting. It probably is a bit misleading to do this, but I didn't think it was 'shitty-to-CFS-patients' in the way that a lot of stuff is.

edit: lol- Maybe my standards for science journalism around CFS are a bit low?
 
Messages
15,786
I didn't think that there was anything really wrong with the Neuroskeptic piece, other than the fact that he clearly hadn't done much reading around the topic, so was just focusing on one little bit of data that they found interesting. It probably is a bit misleading to do this, but I didn't think it was 'shitty-to-CFS-patients' in the way that a lot of stuff is.
He says "Two other conditions, CBT and GET, were associated with better symptom outcomes than the other two." He is taking this at face value, never questioning whether they actually had better symptom outcomes, and does not seem to recognize the problem with it being a subjective outcome itself.

He's making an unfounded assumption that (CFS) patients' perceptions are wrong compared to the presumed trustworthy answers on a different questionnaire. It's nonsensical to see that two sets of answers on different questionnaires don't match up, and assume that patients' perceptions about one topic are correct, but the ones on a different topic are faulty.

I suppose having my perceptions questioned based on crappy research is a bit of a sore spot for me :p And I'm REALLY tired of people not doing their homework before "analyzing" these sorts of things.
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
He is, and it is both a good point and a positive step that people are actually talking about this...

Yes indeed. Methinks the larger question relates to judgement. Our own and other people's. What we base our own judgements on (does this help? if so, in what way?), what they use to interpret our responses, and how they ask the questions, I suppose:

They’re the barely-acknowledged foundation stone of modern psychiatry: psychiatrists use theirs to judge patients’ minds, and patients use theirs to judge their own.

But where do these yardsticks come from?

And should we trust them?

Fundamental questions that perhaps many patients especially don't ponder overly much. We think we 'know' when something helps and maybe don't need to know why. If we say 'it helps' the assessor ticks his box. If we say something and/or our behaviour indicates something to an assessor - upon what is he basing his interpretation?

'Yardsticks' are our own internal ones as much as the assessor's. The question of helpfulness is too simplistic. We should be trying to better explain how something helps or how we believe it has helped (or not); and their interpretations should better allow for the less-simple answers.

Life isn't black and white and when dealing with matters pertaining to 'thought' and interpretation the questions and answers shouldn't be expected to fit neatly into a box. Focusing on 'helpfulness' is not the way to proceed.