"News" 8 Sep 2016: PACE trial team analyse main outcome measures according to the original protocol

Cinders66

Senior Member
Messages
494
It's quite good to remember the Science Media Centre expert review of the PACE follow-up study, provided by Rona Moss-Morris:

http://www.sciencemediacentre.org/e...nts-for-cfsme-and-accompanying-comment-piece/

October 28, 2015
Expert reaction to long-term follow-up study from the PACE trial on rehabilitative treatments for CFS/ME, and accompanying comment piece


A paper published in The Lancet Psychiatry reports results of a long-term follow-up study to the PACE trial for CFS/ME. The study has assessed the original trial participants’ health in the long-term, and asks whether their current state of health, two and a half years after entering the trial, has been affected by which treatment they received in the trial. These comments accompanied a briefing.



Prof. Rona Moss-Morris, Professor of Psychology as Applied to Medicine, King’s College London, said:

“I think this is a robust study with some limitations that the authors have been clear about. The original PACE trial published in 2011 showed that at one year people with CFS/ME who received either graded exercise therapy (GET) or cognitive behavioural therapy (CBT) in addition to standard medical care were significantly less fatigued than those who received standard care alone or those who received adapted pacing therapy. The authors concluded GET and CBT were moderately effective treatments for CFS. Now, moderately effective may not sound all that impressive until you consider that many of our commonly used pharmaceuticals for medical conditions have similar moderate treatment effects. When using pharmaceuticals as treatment, maintaining these effects may mean taking ongoing medicines. This study shows that even two years or more after treatment has completed, patients receiving GET and CBT sustain their clinical benefits. A small percentage of these patients accessed some further treatment, but even so, these sustained effects are impressive.

“Despite these impressive results, this isn’t time for complacency. Some patients do not benefit from the treatment. We need to do more to understand why. We also need to develop and tailor existing treatment to get larger effects. It is also important to note that the CBT and GET protocols used in PACE were developed specifically for CFS. They are not the same as CBT for depression and anxiety or the exercise training you may receive at a local gym. The therapies are based on a biopsychosocial understanding of CFS and the health care professionals in PACE received specific training and supervision in these approaches. This is an important note for commissioners as not all CBT and exercise therapies are equal. Specialist knowledge and competence in these therapies is needed to obtain these sustained treatment effects.”



‘Rehabilitative treatments for chronic fatigue syndrome: long-term follow-up from the PACE trial’ by Michael Sharpe et al. published in the Lancet Psychiatry on Wednesday 28 October 2015.

‘Chronic fatigue syndrome: what is it and how to treat?’ by Steven Moylan et al. published in the Lancet Psychiatry on Wednesday 28 October 2015.



Declared interests

Prof. Rona Moss-Morris: “Two authors of this study, Trudie Chalder and Kimberley Goldsmith, are colleagues of mine at King’s College London. I work with Trudie on other CFS work and with Kimberley on different work. I published a small study on GET in 2005. I am a National Advisor for NHS England for improving access to psychological therapies for long-term conditions and medically unexplained symptoms. Peter White (another author of the present study) is Chair of trial steering committee for an HTA NIHR-funded RCT I am working on with people with irritable bowel syndrome.”

*Note: the Lancet article mentioned above refers throughout to chronic fatigue syndrome as neuropsychiatric.
 

Daisymay

Senior Member
Messages
754
OK this may well be a very stupid thing to say but here goes......there were statistical methods for defining "improvers" in the protocol and in the PACE paper and we can now clearly see the difference using the different methods, with far fewer people improving using the protocol method.

But what about what I'll call "worsers"?

I'm not thinking here of those who suffered some major adverse event (those were recorded); I'm thinking of those whose score worsened with CBT/GET to a comparable degree to how the "improvers" improved!

Why is there no comparable measure of those who got worse in the trial, measuring it in a statistically comparable, but reverse manner to the "improvers"?

If "improvers" are measured as a means of showing the efficacy of the CBT/GET why aren't "worsers" also measured so that the positive and negative effects of the treatment can be compared and an assessment made of risk versus benefit of CBT/GET?

In the MEAction article it says that using the protocol method of assessment, only 1 in 10 reported improvement with the addition of GET or CBT.

What about the other 9 out of 10?

Were they "worsers" or did their score stay static before and after treatment?
 

JohnCB

Immoderate
Messages
351
Location
England
Daisymay said:
OK this may well be a very stupid thing to say but here goes......there were statistical methods for defining "improvers" in the protocol and in the PACE paper and we can now clearly see the difference using the different methods, with far fewer people improving using the protocol method.

But what about what I'll call "worsers"?

Then two of us are stupid together! I think it is a sensible question and one that has crossed my mind but I have never formulated into words. I'd like to know the answer too.

I'm not thinking here of those who suffered some major adverse event (those were recorded); I'm thinking of those whose score worsened with CBT/GET to a comparable degree to how the "improvers" improved!

Why is there no comparable measure of those who got worse in the trial, measuring it in a statistically comparable, but reverse manner to the "improvers"?

If "improvers" are measured as a means of showing the efficacy of the CBT/GET why aren't "worsers" also measured so that the positive and negative effects of the treatment can be compared and an assessment made of risk versus benefit of CBT/GET?

I think there is an issue with the questionnaires they use, what they call instruments. They are still just a list of questions. I think there is a kind of ceiling effect.

People qualify as ill by answering questions, I've just made these up, like "Do you suffer fatigue, yes = 1, no = 0" and "Do you get pain, yes = 1 and no = 0". So you suffer pain and fatigue and you score 2, and 2 qualifies you to enter the trial. If you improve a bit and you feel either a bit less pain or a bit less fatigue, then you might change one of your answers to no. Then you get a total score of 1 instead. You have gone from 2 to 1 so you have improved.

However if your pain and fatigue get worse, you still only score 1 on each question and get a total of 2. So now you are screaming with pain all day and you can't even get out of bed any more, but you still score a total of 2. According to their instruments you are unchanged.
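To make that toy example concrete, here is a minimal sketch (in Python, using the made-up two-item yes/no questionnaire above rather than the actual trial instruments) of how an improvement can move the score down while worsening beyond the ceiling leaves the score unchanged:

```python
# Minimal sketch of the ceiling effect described above, using the made-up
# two-item yes/no questionnaire from the post (not the actual PACE instruments).

def questionnaire_score(pain: bool, fatigue: bool) -> int:
    """Score 1 point per symptom endorsed; 2 is the maximum (the ceiling)."""
    return int(pain) + int(fatigue)

# Baseline: the participant endorses both symptoms and qualifies for the trial.
baseline = questionnaire_score(pain=True, fatigue=True)    # 2

# Slight improvement: one symptom no longer endorsed, so the score drops to 1.
improved = questionnaire_score(pain=False, fatigue=True)   # 1

# Severe worsening: both symptoms are now far worse, but a yes/no item cannot
# register "more yes", so the score is still 2 and the person looks unchanged.
much_worse = questionnaire_score(pain=True, fatigue=True)  # 2

print(baseline, improved, much_worse)  # 2 1 2
```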

Perhaps someone more knowledgeable would care to comment if this is a fair representation?

Edit: typos.
 

Daisymay

Senior Member
Messages
754
JohnCB said:
Then two of us are stupid together! I think it is a sensible question and one that has crossed my mind but I have never formulated into words. I'd like to know the answer too.



I think there is an issue with the questionnaires they use, what they call instruments. They are still just a list of questions. I think there is a kind of ceiling effect. People qualify as ill by answering questions, I've just made these up, like "Do you suffer fatigue, yes = 1, no = 0" and "Do you get pain, yes = 1 and no = 0". So you suffer pain and fatigue and you score 2, and 2 qualifies you to enter the trial. If you improve a bit and you feel either a bit less pain or a bit less fatigue, then you might change one of your answers to no. Then you get a total score of 1 instead. You have gone from 2 to 1 so you have improved. However if your pain and fatigue get worse, you still only score 1 on each question and get a total of 2. So now you are screaming with pain all day and you can't even get out of bed any more, but you still score a total of 2. According to their instruments you are unchanged.

Perhaps someone more knowledgeable would care to comment if this is a fair representation?

Ah, thanks JohnCB, I see what you mean; I remember now Dolphin talking about the ceiling effect.

How very convenient that the testing methodology doesn't allow for assessment of "worsers"!

Also if you only mention improvers in your analysis, people's attention is focused on that and they conveniently forget about those in the study who may have got worse.

And I presume this ceiling effect wasn't mentioned in the PACE paper, hence casual readers wouldn't have been aware of this.

It is surely a very serious flaw, when testing any treatment for any disease, if you can't or don't use methodology which can accurately assess how much worse patients might have got with that treatment.

Shouldn't this be a red flag to any peer reviewer?

But then if a paper is peer reviewed by those who use the same flawed methodology......
 
Esther12

Messages
13,774
I expect that harms from CBT/GET are likely to be pretty minimal compared to what people have experienced outside of PACE.

Therapists were being recorded and assessed, they knew that this was a trial paying more attention to 'harms' than most... I know that there have been individuals who reported worsening of their condition during their involvement in PACE, but the data we have so far indicates that this was maybe worse in APT than CBT/GET, which were no worse than SMC alone.

I doubt that the data from PACE is going to show a significant problem with CBT/GET causing harm.
 

Daisymay

Senior Member
Messages
754
Esther12 said:
I doubt that the data from PACE is going to show a significant problem with CBT/GET causing harm.

And:

My guess is any significant harm would be in the gaps in the data. i.e. people who dropped out and hence won't show up. I seem to remember the dropout rate was higher for GET.

By worsers I'm meaning more those who got worse but not to the degree of dropping out or being classed as adverse events.

Those whose health and ability to function worsened to a greater or lesser degree with CBT/GET.

That doesn't seem to have been adequately assessed?
 
Esther12

Messages
13,774
I still reckon we're unlikely to see much evidence of CBT/GET being associated with even small declines in questionnaire scores. Problems with bias mean we're less likely to see that with CBT/GET, and I think that they were delivered in an unusual setting which would be likely to mean that they were less likely to cause any real harm.
 
Messages
2,158
Daisymay said:
How very convenient that the testing methodology doesn't allow for assessment of "worsers"!

I think it would be possible to work out from the data how many people who completed the study got worse on the two principal measures.

As I understand it, SF-36 asks questions about physical capability that range from something like can you walk 10 metres to can you run round the block (those aren't the questions they asked, but it was, as I remember when I looked at it, a set of activities requiring increasingly more stamina). Scores can range from 0 where you can't do anything to 100 where you're fully able bodied.
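For reference, the standard scoring of the SF-36 physical function subscale (from general knowledge of the instrument, not from the PACE papers) is roughly: ten activity items, each answered "limited a lot", "limited a little" or "not limited at all", with the raw total rescaled to 0-100. A minimal sketch:

```python
# Rough sketch of standard SF-36 physical function (PF-10) scoring, from general
# knowledge of the instrument rather than the PACE papers: ten activity items,
# each answered 1 = "limited a lot", 2 = "limited a little",
# 3 = "not limited at all"; the raw total (10-30) is rescaled to 0-100.

def sf36_pf_score(item_responses):
    """Return the 0-100 physical function score for ten item responses."""
    assert len(item_responses) == 10
    assert all(r in (1, 2, 3) for r in item_responses)
    raw = sum(item_responses)         # ranges from 10 to 30
    return (raw - 10) / 20 * 100      # rescaled to 0-100, in steps of 5

print(sf36_pf_score([1] * 10))  # 0.0   (limited a lot on everything)
print(sf36_pf_score([3] * 10))  # 100.0 (not limited at all on anything)
print(sf36_pf_score([2] * 10))  # 50.0
```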

The group average scores at baseline and after 1 year were:

APT 37.2 ; 45.9
SMC 39.2 ; 50.8
CBT 39.0 ; 58.2
GET 36.7 ; 57.7

The standard deviations of these figures were in the mid-teens at baseline and the mid-20s at 1 year.
This increase in standard deviation means the figures were more spread out at 1 year.

Just as they defined a level they called improvement and a level they called recovery, so they could have set a level for getting worse if they chose to do so, and there's no reason why we shouldn't.

The other principal measure they used, the Chalder fatigue scale, seems, from what I've seen of it, to be complete nonsense - a set of vague statements which were scored in a ridiculous way that, judging from commentary I saw on the FINE trial, many patients misunderstood. I wouldn't give it the time of day!

As has been said, there is a ceiling effect problem with this scale, which makes it so much nonsense. If you already said you were badly affected by a particular criterion, it was impossible to say you were now even worse affected, and there might be another criterion that didn't apply to you, e.g. one I remember seeing when I looked at it was 'I have trouble starting things'. My response is: I have trouble continuing things, not starting them! I'm forever wanting to do something and running out of steam...

It does, however, provide a scale from 0 to 33 or 0 to 11, depending on which scoring method they use, so again levels could be set. The means in all groups went from the high 20s to the low 20s over the year on the 33-point scale, and they set 18 as the cut-off point for 'improvement', I think. I guess we could set something like 'increased by 3 points on the 33-point scale, or 1 point on the 11-point version' as getting worse (higher means worse on this measure).
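As a rough illustration (not an analysis of real PACE data), here is how a "got worse" category could be defined symmetrically to the improvement thresholds. The Participant structure, the 8-point SF-36 drop and the example figures below are purely illustrative assumptions; the 3-point Chalder rise is just the suggestion floated above, not the trial's own definition:

```python
# Illustrative sketch of counting "worsers": a chosen fall in SF-36 physical
# function (higher = better) or a rise of >= 3 points on the 33-point Chalder
# scale (higher = worse). Thresholds and participants are made up, not PACE data.

from dataclasses import dataclass

@dataclass
class Participant:
    sf36_baseline: float
    sf36_1yr: float
    chalder33_baseline: float
    chalder33_1yr: float

SF36_WORSE_DROP = 8      # placeholder threshold, would need proper justification
CHALDER_WORSE_RISE = 3   # the 3-point rise on the 33-point scale suggested above

def got_worse(p: Participant) -> bool:
    """Classify a participant as worse on either principal outcome measure."""
    sf36_worse = (p.sf36_baseline - p.sf36_1yr) >= SF36_WORSE_DROP
    chalder_worse = (p.chalder33_1yr - p.chalder33_baseline) >= CHALDER_WORSE_RISE
    return sf36_worse or chalder_worse

# Made-up example participants, just to show the classification running.
cohort = [
    Participant(40, 58, 28, 20),  # better on both measures
    Participant(38, 28, 27, 31),  # worse on both measures -> counted
    Participant(37, 37, 29, 29),  # unchanged
]
worsers = sum(got_worse(p) for p in cohort)
print(f"{worsers} of {len(cohort)} got worse on these illustrative thresholds")
```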

But of course this doesn't begin to deal with drop outs etc.

I think the main and useful reason for counting how many got worse would be to counter the argument that 20% improvers with CBT/GET is better than 10% improvers with SMC/APT as White et al argue in their latest missive.

If we can show 10% or more got worse that would be a powerful argument against the pathetic 'improvement' levels.

I suspect the number who 'recovered' on the protocol criteria was minuscule. In a group with an SF-36 mean of 57, I can't see many getting over 85 and also meeting the other criteria.

Not sure any of this is any use, since we don't have the figures and I assume Alem Mathees will find a statistician, if he isn't one himself, to do the necessary analysis.

Sorry, I'm rambling, I think I'm becoming obsessed. I can't imagine anyone wanting to read these ramblings, but it's got it out of my system.

Till next time!
 

Dolphin

Senior Member
Messages
17,567
JohnCB said:
Then two of us are stupid together! I think it is a sensible question and one that has crossed my mind but I have never formulated into words. I'd like to know the answer too.



I think there is an issue with the questionnaires they use, what they call instruments. They are still just a list of questions. I think there is a kind of ceiling effect. People qualify as ill by answering questions, I've just made these up, like "Do you suffer fatigue, yes = 1, no = 0" and "Do you get pain, yes = 1 and no = 0". So you suffer pain and fatigue and you score 2 and 2 qualifies you to enter the trial. If you improve a bit and you feel either a bit less pain or a bit less fatigue, then you might change one of your answers to no. Then you get a total score of 1 instead. You have gone from 2 to 1 so you have improved. However if your pain and fatigue get worse, you still only score 1 on each question and get a total of 2. So now you are screaming with pain all day and you can't even get out of bed any more, but you still score a total of 2. According to their instruments you are unchanged.

Perhaps someone more knowledgeable would care to comment if this is a fair representation?


It's a very good point. It is explored in these 2 papers. The title of the 2nd one doesn't get across what it is about that well I think.

Fatigue in Myalgic Encephalomyelitis
http://iacfsme.org/ME-CFS-Primer-Education/Bulletins/2008/Fatigue-in-Myalgic-Encephalomyelitis.aspx

Identification of ambiguities in the 1994 chronic fatigue syndrome research case definition and recommendations for resolution
http://bmchealthservres.biomedcentral.com/articles/10.1186/1472-6963-5-37
 

Daisymay

Senior Member
Messages
754
I think the main and useful reason for counting how many got worse would be to counter the argument that 20% improvers with CBT/GET is better than 10% improvers with SMC/APT as White et al argue in their latest missive.

If we can show 10% or more got worse that would be a powerful argument against the pathetic 'improvement' levels.

Sorry, I'm rambling, I think I'm becoming obsessed. I can't imagine anyone wanting to read these ramblings, but it's got it out of my system.

Till next time!

Exactly, we need to know the numbers who improved versus those who worsened. People surely have a right to know this before they undertake treatment with CBT/GET?

Doctors surely should know this before they prescribe these treatments?

Thanks for your rambling, from a fellow rambler!
 

Dolphin

Senior Member
Messages
17,567
Esther12 said:
I expect that harms from CBT/GET are likely to be pretty minimal compared to what people have experienced outside of PACE.

Therapists were being recorded and assessed, they knew that this was a trial paying more attention to 'harms' than most... I know that there have been individuals who reported worsening of their condition during their involvement in PACE, but the data we have so far indicates that this was maybe worse in APT than CBT/GET, which were no worse than SMC alone.

I doubt that the data from PACE is going to show a significant problem with CBT/GET causing harm.

Esther12 is probably referring to the SF-36 PF data from this paper:
Adverse events and deterioration reported by participants in the PACE trial of therapies for chronic fatigue syndrome
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4065570/

Phoenix Rising discussion thread:
http://forums.phoenixrising.me/index.php?posts/456896/
 

Tuha

Senior Member
Messages
638
At first I thought that those psychiatrists were incompetent and that they probably had many patients with different diseases in the PACE study. Now I am persuaded that they knew very well what they were doing and manipulated the study results with the intention of gaining financial benefits; otherwise it doesn't make sense to me. What I don't understand is the false solidarity among psychiatrists and the behaviour of QMUL and the Lancet.

We should ask for financial compensation, at least from the psychiatric community, QMUL and the British government, and use that money to support biological ME research. If I were a psychiatrist I would be ashamed; I would donate to ME research and try to run a campaign to raise money. But this is certainly more science fiction.