PACE Trial and PACE Trial Protocol

biophile

Places I'd rather be.
Messages
8,977
Hi Alex. I've had similar problems remembering a paper or the correct author. I think it happened recently but I'm having trouble remembering the details! Anyway, is this the full text you wanted?:

A population-based study of chronic fatigue syndrome (CFS) experienced in differing patient groups; An effort to replicate Vercoulen et al.'s model of CFS.

http://www.cfids-cab.org/cfs-inform/Subgroups/song.jason05.pdf
 

Dolphin

Senior Member
Messages
17,567
Is this what you are thinking of?

Song S, Jason LA. A population-based study of chronic fatigue syndrome (CFS) experienced in differing patient groups: An effort to replicate Vercoulen et al's model of CFS. J Ment Health 2005, 14:277-289
DePaul studies are generally listed here:
http://condor.depaul.edu/ljason/cfs/

A lot of them aren't PubMed-listed so that's a better place to look for their studies I think.

Just to say that I know a list of papers is often not sufficient to help jog one's memory for the source of a point one recalls/half-recalls. My main point was to highlight the existence of the page.
 

Dolphin

Senior Member
Messages
17,567
Simon Wessely has worked out why people who have posted to this thread are unhappy with the PACE Trial: :rolleyes:


We now have two treatments that we can recommend with confidence to our patients.

However, the story does not quite end there.

Patient groups rejected the trial out of hand, and the internet was abuzz with abuse and allegations.

The main reason for this depressing reaction was the stigma that attaches to disorders perceived (rightly or wrongly) to be psychiatric in origin, whatever that means.

If one obtained identical results to the PACE trial, but this time with anti-viral drugs, the reaction would have been totally different.

This is exactly what did happen when a very small trial of a drug that modulates the immune system (and which has some nasty side effects) was greeted with acclaim from the same sources that tried to discredit the PACE trial, which tested interventions with an impeccable safety record.
==============================================
from:

http://www.foundation.org.uk/journal/pdf/fst_20_07.pdf

The Journal of the Foundation for Science and Technology
Volume 20, Number 7, December 2011

Health in mind and body

Simon Wessely
I started a thread on this at: http://forums.phoenixrising.me/show...entions-with-an-impeccable-safety-record-quot
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Well it's good to see that SW is using evidence-based conclusions on attributions by patients. :rolleyes: Seriously, this is just more spin. Cast aspersions on patients who disagree with the non-science, rather than addressing any of the methodological, logical, statistical or nomenclature flaws in their study. He would have you believe that it is only patients who are objecting to his claims, which is nonsense. He is very much hoping that more people will ignore us and not realize that many doctors and researchers also disagree with his claims. Bye, Alex
 

Dolphin

Senior Member
Messages
17,567
Just clearing out some messages from one of my e-mail accounts and found this old note from 2009 - not sure if this was highlighted on the thread (a search for Lange shows nothing):
--------
I find it very strange that the participants would be told "positive"
information from another CBT trial in the middle of a trial comparing CBT to
other interventions.

http://www.pacetrial.org/docs/participantsnewsletter3.pdf

(From a Peter White report)

One of the most interesting studies, carried
out by Dr Floris P. de Lange and colleagues
in the Netherlands, showed that
cognitive behaviour therapy was associated
with an increase in grey matter of the
brain and this increase was associated with
improved cognitive function.

[I mentioned elsewhere that the change (which was only
12% of the difference) could have happened with time (i.e. there was no
CFS control group, so the same change could have occurred over 8 months
without any treatment, as some of the patients could have been getting better
anyway)]
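
To make that point concrete, here's a toy simulation (entirely made-up numbers, nothing to do with the actual de Lange data): even when the treatment effect is set to exactly zero, natural drift over 8 months still produces a within-group "improvement", which is why an untreated CFS comparison group matters.

import random

random.seed(0)
n = 22                       # arbitrary sample size
treatment_effect = 0.0       # assume CBT does nothing at all

# Simulated "grey matter" measurements: everyone drifts upward slightly
# over 8 months regardless of treatment (natural course, measurement drift).
baseline = [random.gauss(100, 5) for _ in range(n)]
followup = [x + random.gauss(1.5, 2.0) + treatment_effect for x in baseline]

mean_change = sum(f - b for b, f in zip(baseline, followup)) / n
print(f"Mean within-group change: {mean_change:+.2f}")
# Prints a positive change even though the treatment effect is zero.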
 

Dolphin

Senior Member
Messages
17,567
(relevant?) Symptom diaries can worsen perception of pain

I put this aside at the time. Am unsure if it is of relevance.

I'm not sure if I ever read the APT manuals fully. But I wonder whether this sort of study suggests reporting patterns could be altered if APT patients were more focused on their symptoms.

To me, in some ways, I wonder how important reporting patterns are: what I am more interested in is the underlying biological processes. That is, if somebody reports a collection of symptoms at a certain level (say a mean of 50) and then, after using a diary, reports them at 60 (i.e. at a higher level), but this is just because they are more conscious of their symptoms, and because they are more conscious of them they are pacing better, have less oxidative stress, etc., I would generally see that as a success. I suppose it all comes down to the question of objective measures again.

Symptom diaries can worsen perception of pain

From: Rheumatology Update
Date: 6 October 2010


by Tony James

Asking patients with chronic pain to complete a symptom diary may amplify
their perceptions of pain and ultimately do more harm than good, a study in
healthy volunteers has suggested. Canadian researchers recruited 35 female
university students who were free of any acute or chronic medical problems.
They were randomised to maintain a symptom diary for 14 days, or not keep a
diary. The diary group was asked to note each day whether they had any of
eight symptoms: headache, neck pain, back pain, fatigue, abdominal pain,
elbow pain, jaw pain, or numbness/tingling in the arms or legs. If so, they
rated the severity from 1 (minimal) to 10 (extremely severe). The control
group completed the check list only at baseline and at day 14. Diary-keepers
doubled their frequency of recalled symptoms by the end of the study, and
the severity of symptoms also increased. In contrast, there was no change in
the frequency or severity of symptoms in the control group. The researchers
said diaries were increasingly used to refine a diagnosis, suggest the need
for further evaluation or monitor treatment, especially in conditions such
as fibromyalgia, whiplash and chronic fatigue syndrome that were dominated
by subjective interpretations of the symptom burden. "The concern, however,
is that the benefits ... have not been demonstrated," they said. "There does not
intuitively appear to be any benefit to increasing a patient's perception of
their symptom frequency or intensity, yet this occurs even in healthy
subjects," they concluded. "This study raises concerns about the potential
for a detrimental effect of diary use, where perceptions of symptoms may
affect both illness behaviour and quality of health."

-------

Effect of a Symptom Diary on Symptom Frequency and Intensity in Healthy
Subjects

Journal of Rheumatology 2010

Robert Ferrari and Anthony S. Russell

Department of Medicine and the Department of Rheumatic Diseases, University
of Alberta, Edmonton, Alberta, Canada.
R. Ferrari, MD, FRCPC, FACP, Department of Medicine, University of Alberta;
A.S. Russell, MB, BChir, FRCPC, Department of Rheumatic Diseases, University
of Alberta.

Abstract

Objective
Symptom and pain diaries are often recommended to or used by patients with
chronic pain disorders. Our objective was to examine the effect on recall of
symptoms after 14 days of daily symptom diary use in healthy subjects.

Methods
Subjects were randomly assigned to 1 of 2 groups: the diary group and the
control group. Both subject groups completed an initial symptom checklist
composed of headache, neck pain, back pain, fatigue, abdominal pain, elbow
pain, jaw pain, and numbness/tingling in arms or legs. Both groups indicated
their symptom frequency and their perceived average symptom severity in the
last 14 days. The diary group was asked then to examine the symptom
checklist daily for 14 days while the control group was not. After 2 weeks,
both groups then repeated the symptom checklist for recall of symptoms and
symptom severity.

Results
A total of 35 of the 40 initially recruited subjects completed all the
questionnaires, 18 in the diary group and 17 in the control group. At the
outset, both groups had similar frequencies and intensities of symptoms.
After 2 weeks of symptom diary use, diary group subjects had an increased
frequency (doubled) of recalled symptoms, and significantly increased
intensity of symptoms compared with the control group, which had not changed
its mean frequency or intensity of symptoms.

Conclusion
The use of a symptom diary for 2 weeks, even in generally healthy subjects,
results in increased recall of daily symptoms and increased perception of
symptom severity.
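
The excerpt doesn't say exactly which statistical test the authors used, but the design boils down to comparing change scores between two independent groups, something like this sketch (made-up numbers, and the choice of a Mann-Whitney test is just my guess):

from scipy import stats

# Hypothetical data: number of the 8 checklist symptoms each subject
# recalled at baseline and at day 14.
diary_baseline   = [1, 2, 0, 1, 3, 2, 1, 0, 2, 1]
diary_day14      = [3, 4, 1, 2, 5, 4, 2, 1, 4, 2]
control_baseline = [1, 2, 1, 0, 2, 3, 1, 1, 2, 0]
control_day14    = [1, 2, 1, 1, 2, 3, 0, 1, 2, 1]

diary_change   = [b - a for a, b in zip(diary_baseline, diary_day14)]
control_change = [b - a for a, b in zip(control_baseline, control_day14)]

# Compare the change scores between the two independent groups.
u, p = stats.mannwhitneyu(diary_change, control_change, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")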
 

Dolphin

Senior Member
Messages
17,567
Just clearing out a lot of my inbox for 2011 (not sure I'll look at it again if there's too much left)

They use this paper to justify changes to their primary measures: http://www.ncbi.nlm.nih.gov/pubmed/19455540

It prompted three replies and then a rejoinder. Maybe they cited a controversial statistics (!) paper in order to justify dubious manoeuvres? Anyone here likely to have a good enough understanding of statistics to comment?
The paper is available for free now at: http://statlab.bio5.org/foswiki/pub/Main/PapersForClassCPH685/Senn-measurement09.pdf (however, it looks like it might be a bit of work to read, incl. some mathematics).
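
For anyone who hasn't seen the two Chalder scoring schemes side by side, here's a rough sketch of how I understand them (an illustration with invented answers, not anything from the trial): each of the 11 items is answered 0-3, Likert scoring simply sums them (0-33), while bimodal scoring collapses each answer to 0 or 1 (0-11).

def score_likert(answers):
    """Likert (continuous) scoring: sum the raw 0-3 answers, range 0-33."""
    return sum(answers)

def score_bimodal(answers):
    """Bimodal scoring: answers 0/1 count as 0, answers 2/3 count as 1, range 0-11."""
    return sum(1 for a in answers if a >= 2)

# A hypothetical participant whose fatigue eases slightly on most items
# (3 -> 2) shows a visible Likert change but no bimodal change at all,
# which is the "sensitivity to change" argument in a nutshell.
before = [3] * 11
after = [2] * 10 + [3]

print(score_bimodal(before), score_likert(before))  # 11 33
print(score_bimodal(after), score_likert(after))    # 11 23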
 

Dolphin

Senior Member
Messages
17,567
Thanks for the link, Dolphin! Something I find interesting is in the Adverse Events section (pages 65-66). Aside from being dead or in immediate danger of dying, the only other severe adverse event (SAE) requires 4 weeks of being incapacitated. This seems rather ridiculous, considering that specific crashes rarely last more than a couple weeks, and are likely to be recurring as soon as exercise is attempted again, making them a serious side-effect for people chronically disabled by them.

And the example for the related non-serious adverse event (the only alternative to having a SAE) is "Transient exacerbation of fatigue or pain ...which does not have significant impact upon function." Followed by a footnote that seems to be referring to "death" as a significant impact upon function :p

If I understand what I've read on here, not much data has been released regarding adverse events (or the details of the study in general). When/if it does come out, I bet it's going to 1) put everything that didn't cross the threshold for a SAE into the non-serious category, and 2) strongly imply that all non-serious adverse events are equivalent to having no significant impact on function.

Basically I think they included that "insignificant" example for that exact reason. The other non-serious adverse events are still significant (new mood/sleep/anxiety disorders or injury), but making the threshold for PEM insignificant means that there is no specific acknowledgment of the PEM crash. Either participants had PEM for a month, or their PEM will be put into the same category as "does not have a significant impact on function." I'd be very surprised if they make any distinction between "I felt a bit tired and sore after that walk" and "I was bedbound for 3.5 weeks."
Astute points, I think, Valentijn.

BTW, I think the information in the Lancet paper is all that we are going to get. There was a webappendix in the Lancet with extra info: http://download.thelancet.com/mmcs/...8eb460:-1073e46:1349ba6809c:2e951325463820969 (you might need to be signed in to see it - but free registration gets you access to the paper and webappendix).
 

oceanblue

Guest
Messages
1,383
Location
UK
They use this paper to justify changes to their primary measures: http://www.ncbi.nlm.nih.gov/pubmed/19455540

It prompted three replies and then a rejoinder. Maybe they cited a controversial statistics (!) paper in order to justify dubious manoeuvres? Anyone here likely to have a good enough understanding of statistics to comment?
The paper is available for free now at: http://statlab.bio5.org/foswiki/pub/Main/PapersForClassCPH685/Senn-measurement09.pdf (however, it looks like it might be a bit of work to read, incl. some mathematics).
Given the scary maths, I suspect the way to unlock this paper will be to read the replies, which are behind a paywall. If PACE cite this paper again in future studies it might be worth trying to access the replies.
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
Given the scary maths, I suspect the way to unlock this paper will be to read the replies, which are behind a paywall. If PACE cite this paper again in future studies it might be worth trying to access the replies.

I have had a look at the paper and have been duly scared. It's like knowing all the notes but not being able to make out the tune (which is how my efforts to play music are going). Still, with a bit of work and a lot of time, I could get somewhere. As long as you are patient, it could be worthwhile. One thing that does cheer me up is that the article complains that often the medics do the measurements and the statisticians do the sums, and that without either of them fully understanding what the other is doing, the process is not reliable. We have made that point (in a less erudite way) in the PACE-analysis project.

I think that Anciendaze could be a better bet for understanding what is being said in the paper though.
 

biophile

Places I'd rather be.
Messages
8,977
PACE may not have seen trial data before redefining goalposts after all

I've stated before that the authors may have seen the disappointing trial data before redefining their goalposts, but I've never been 100% convinced, and after reexamining the issue I'm even less sure now. The goalposts were all described as post-hoc analyses, which may or may not suggest that the authors saw the trial data beforehand; it could just mean the goalposts were redefined during the changes to the protocol after the study was in progress but before the trial data were seen. Furthermore, these goalposts were listed under the "Statistical analysis" section of the 2011 Lancet paper, which states that "The statistical analysis plan was finalised, including changes to the original protocol, and was approved by the trial steering committee and the data monitoring and ethics committee before outcome data were examined."

The reply to the FOI request sent by the ME Association states something similar under the "Primary outcomes" section:

As described in the paper published in the Lancet, changes were made to the analysis strategy after the protocol was submitted for publication. These changes were all made before data was examined and approved by the independent trial steering committee. This is normal practice in clinical trials. These included changing the scoring of the Chalder fatigue questionnaire from bimodal to Likert (continuous), in order to improve its sensitivity to change, and changing the primary outcomes from composite to simple measures to aid interpretability.

As explained in the published response to correspondence received by the Lancet (including your own published letter making a very similar point), the authors used a different population study to that mentioned in the protocol to derive the normal range scores for the SF36 physical function scale as they believed this to be the most representative study for the trial sample (Bowling et al, 1999).

http://www.meassociation.org.uk/wp-content/uploads/2011/06/FOI+from+Queen+Mary.pdf

Similar statements are made in White's unofficial reply to Hooper:

Clinically useful difference (page 30) - The figures of 7.4 and 6.9 come from unadjusted figures; the adjusted difference between GET and SMC was 9.4, not 6.9, which exceeds the prespecified clinically useful difference. Comparisons with APT were pre-specified and were not introduced simply because the APT group had a lower mean. In addition, the comparisons were made when the study group were blinded to the trial arms, so these numbers were obtained before we knew which group was which.

Normal ranges - The primary analysis compared the mean differences in the primary outcome scores across treatment arms, which are in the paper. The normal range analysis was plainly stated as post hoc, given in response to a reviewer's request. We give the results of the proportions with both primary outcomes within normal ranges, described a priori, using population derived anchors.

SF-36 scores (page 31) - The definition of a normal range for the SF36 in the paper is different from that given in the protocol for recovery. Firstly, being within a normal range is not necessarily the same as being recovered. Secondly, the normal range we gave in the paper was directly taken from a study of the most representative sample of the adult population of England (mean - 1 SD = 84 - 24 = 60). The threshold SF36 score given in the protocol for recovery (85) was an estimated mean (without a standard deviation) derived from several population studies. We are planning to publish a paper comparing proportions meeting various criteria for recovery or remission, so more results pertinent to this concern will be available in the future. We did however make a descriptive error in referring to the sample we referred to in the paper as a UK working age population, whereas it should have read English adult population, and have made this clear in our response to correspondence.

Fatigue measure (page 32) - We explained in the paper why we changed our scoring of the fatigue measure from bimodal to Likert scoring, in order to improve sensitivity to change to better test our hypotheses, and did this before outcome data were examined. This was included in our pre-specified analysis plan approved by the TSC.

http://www.meactionuk.org.uk/whitereply.htm

It would have been damning to nail the authors for redefining the goalposts after seeing the disappointing trial data, but we need evidence before making such an accusation as a matter of fact as opposed to being a mere possibility. There is still the (circumstantial) possibility that the changes were made after learning a few things from seeing the disappointing FINE Trial data. Of course, the post-hoc analyses themselves are flawed for reasons already discussed on this thread, and apparently they were "examined and approved by the independent trial steering committee"? It has also now occurred to me that White's statement, "the normal range analysis was plainly stated as post hoc", seems to be the excuse given for why there is overlap between the criteria for trial entry and the criteria for normal outcome.
 

Dolphin

Senior Member
Messages
17,567
I've stated before that the authors may have seen the disappointing trial data before redefining their goalposts, but I've never been 100% convinced, and after reexamining the issue I'm even less sure now. The goalposts were all described as post-hoc analyses, which may or may not suggest that the authors saw the trial data beforehand; it could just mean the goalposts were redefined during the changes to the protocol after the study was in progress but before the trial data were seen. Furthermore, these goalposts were listed under the "Statistical analysis" section of the 2011 Lancet paper, which states that "The statistical analysis plan was finalised, including changes to the original protocol, and was approved by the trial steering committee and the data monitoring and ethics committee before outcome data were examined."

The reply to the FOI request sent by the ME Association states something similar under the "Primary outcomes" section:



Similar statements are made in White's unofficial reply to Hooper:



It would have been damning to nail the authors for redefining the goalposts after seeing the disappointing trial data, but we need evidence before making such an accusation as a matter of fact as opposed to being a mere possibility. There is still the (circumstantial) possibility that the changes were made after learning a few things from seeing the disappointing FINE Trial data. Of course, the post-hoc analyses themselves are flawed for reasons already discussed on this thread, and apparently they were "examined and approved by the independent trial steering committee"? It has also now occurred to me that White's statement, "the normal range analysis was plainly stated as post hoc", seems to be the excuse given for why there is overlap between the criteria for trial entry and the criteria for normal outcome.
I haven't been convinced they saw the data in printed form before making these changes.

However, this is not like (most) blood test results in a trial (say), where one has no idea what the results will show until they are analysed. These were just questionnaires about functioning, fatigue, etc. The authors could get a reasonable idea of the direction of such things in this particular case (as opposed to blood tests), i.e. that few were recovering by their original definition or even reaching the thresholds they had set in their primary outcome measures. Some of the authors were based in the centres where the trial was taking place and could have got feedback in various direct and indirect ways, from what I can see.
 

Sean

Senior Member
Messages
7,378
Would be very surprised if they did not get some serious hints from FINE about how PACE would turn out.

Ultimately, PACE and the underlying model will fail mostly on the fact that, after 20-plus years of testing and quite forceful advocacy, they simply have not delivered a clear and strong therapeutic effect. At best the theoretical and practical value of that approach is minor, and further pursuing it is a highly questionable use of scarce resources, including patient goodwill.
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
Personally, I am not too concerned whether they changed their minds before or after seeing the results - it is the actual criteria themselves that are important. I always used to ask my students to do a trial run with any study so that they could assess difficulties before they did the full run. I could, for example, accept that between setting up the study and getting the final results, other studies appeared which gave better information to set the criteria. The problem arises when criteria have been watered down, as they have in the PACE trial.

To me it smacks of getting planning permission. First submit a reasonable set of plans, then, once they have been approved, submit additions and alterations in the hope that they will not be given the same thorough scrutiny. It is obvious that the amendments did not get the same scrutiny, because the overlap between entry criteria and "recovery" criteria was not noticed. The idea that changing from bimodal to Likert scoring improves sensitivity is also unproven: it sounds good, but it does not need to be true. The questionnaire may only be appropriate with bimodal scoring - it may lack sufficient "accuracy" for anything more. I know that there are statistical tests for sensitivity and (can't think of the other word), but the results of our Chalder survey show it to be a very woolly measure. It's rather like kids at school calculating the length of the hypotenuse of a triangle where the other two sides are 12 and 15 cm: it is very hard to explain to them why an answer of 19.209373 cm suggests an inappropriate level of precision.
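
(For anyone who wants to check the sums, a quick illustration in Python:)

import math

c = math.hypot(12, 15)   # sqrt(12**2 + 15**2)
print(c)                 # 19.209372712298546
print(round(c))          # 19 - about all the precision that sides measured
                         # to the nearest centimetre can justify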
 

Esther12

Senior Member
Messages
13,774
The nature of the changes is much more damning than when they happen to have been made.

They started claiming that a score of 65 was indicative of the severe and disabling fatigue required by Oxford, and a score of 85 would indicate recovery. They ended by claiming that those scoring 60 were 'back to normal'. That one happens to have been post-hoc, but I don't think it would have been any more acceptable were this not the case.

I do not see how they can honestly claim this:

the authors used a different population study to that mentioned in the protocol to derive the normal range scores for the SF36 physical function scale as they believed this to be the most representative study for the trial sample (Bowling et al, 1999).

I'd love to hear what led them to believe that. The patient group were all of working age and had been screened for a wide range of medical problems which would lead to disability - 25% of the Bowling population were aged over 65, and it included all those with serious disabilities. It's not a surprise that this 'normal' overlapped with the criteria for severe and disabling fatigue that was used at the start of the trial.
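
To spell the overlap out with the numbers quoted in this thread (purely illustrative, no trial data involved):

# The three SF-36 physical function thresholds discussed above
# (figures as quoted in this thread).
ENTRY_MAX = 65          # at or below this = disabled enough for trial entry
PROTOCOL_RECOVERY = 85  # recovery threshold in the original protocol
POSTHOC_NORMAL = 60     # floor of the post-hoc "normal range" (mean - 1 SD, Bowling)

for score in (60, 65, 85):
    can_enter_trial = score <= ENTRY_MAX
    within_normal_range = score >= POSTHOC_NORMAL
    print(score, can_enter_trial, within_normal_range)

# Scores of 60 and 65 satisfy both conditions at once: the same score can
# qualify someone as severely disabled at entry and place them "within the
# normal range" at follow-up, with no change whatsoever.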
 

oceanblue

Guest
Messages
1,383
Location
UK
The nature of the changes is much more damning than when they happen to have been made.

They started claiming that a score of 65 was indicative of the severe and disabling fatigue required by Oxford, and a score of 85 would indicate recovery. They ended by claiming that those scoring 60 were 'back to normal'. That one happens to have been post-hoc, but I don't think it would have been any more acceptable were this not the case.

I do not see how they can honestly claim this:
the authors used a different population study to that mentioned in the protocol to derive the normal range scores for the SF36 physical function scale as they believed this to be the most representative study for the trial sample (Bowling et al, 1999).
I'd love to hear what led them to believe that. The patient group were all of working age and had been screened for a wide range of medical problems which would lead to disability - 25% of the Bowling population were aged over 65, and it included all those with serious disabilities. It's not a surprise that this 'normal' overlapped with the criteria for severe and disabling fatigue that was used at the start of the trial.
It's a good point that this most outrageous change was post-hoc and not an approved change to the protocol. That quote you've included: was it from the paper or their reply to the letters?
 

Esther12

Senior Member
Messages
13,774
It's a good point that this most outrageous change was post-hoc and not an approved change to the protocol. That quote you've included: was it from the paper or their reply to the letters?

The quote was from the reply to a FOI, posted in #1432.
 

biophile

Places I'd rather be.
Messages
8,977
oceanblue wrote: It's a good point that this most outrageous change was post-hoc and not an approved change to the protocol. That quote you've included: was it from the paper or their reply to the letters?

It seems to me now (post #1432) that the post-hoc changes were an approved change to the protocol: "post" relative to the original study design, but made before seeing the results data. If they weren't, this would be particularly bad, not just because the authors would have redefined the goalposts after seeing the data, but also because they would have given the dishonest impression that these changes were approved before seeing data.
 

Dolphin

Senior Member
Messages
17,567