
PACE Trial and PACE Trial Protocol

Dolphin

Senior Member
Messages
17,567
Authors' claim that data analysis strategy drawn up before knowledge of the data

The authors try to make out that they changed how they were going to analyse the data before they had knowledge of it.

If these were blood results, they might be able to claim this (and others might be used to such scenarios).

However, they wouldn't have started to analyse the data till early 2010, so they probably drew this up in 2009 - certainly it was after the protocol paper was published in 2007. The trial was running from around 2005 or early 2006.

There was no rule that I know of that they couldn't talk with the therapists about how things were going in general. Also, specifically, I think the Centre leader (not sure if that is the correct title) was called in to deal with some adverse events (I think Peter White held that title in one place and Trudie Chalder in another - not 100% sure of that). There were two centres at Barts, one at King's, etc. I think the CBT and GET therapists would have a good idea how things were going, and I'm sure there were plenty of ways, without breaking any rules, that the people running the trial could get an idea how the trial was going. Similarly, the SMC doctors, including lots of trainee psychiatrists, would know how things were going on the CGI etc., and I think feedback, formal or informal, could easily have been passed to the people running the trial. I'd have to think more about some of the details, but basically I think they could easily have picked up that there weren't large numbers in recovery or with large improvements.

And that was the big change in the statistical plan - the thresholds that required much larger improvements have tended to disappear.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I think it's more likely "clinically important treatment benefit" refers to the following:

A clinically useful difference between the means of the primary outcomes was defined as 0.5 of the SD of these measures at baseline,31 equating to 2 points for Chalder fatigue questionnaire and 8 points for short form-36. A secondary post-hoc analysis compared the proportions of participants who had improved between baseline and 52 weeks by 2 or more points of the Chalder fatigue questionnaire, 8 or more points of the short form-36, and improved on both.

64 (42%) of 153 participants in the APT group improved by at least 2 points for fatigue and at least 8 points for physical function at 52 weeks, compared with 87 (59%) of 148 participants for CBT, 94 (61%) of 154 participants for GET, and 68 (45%) of 152 participants for SMC. More participants improved after CBT compared with APT (p=0.0033) or SMC (p=0.0149), and more improved with GET compared with APT (p=0.0008) or SMC (p=0.0043); APT did not differ from SMC (p=0.61; webappendix p 2).
So 61% (GET) - 45% (SMC) = 16%

59% (CBT) - 45% (SMC) = 14%.
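For anyone who wants to check the arithmetic, here's a minimal sketch in Python. The counts come straight from the quoted passage; the back-calculated baseline SDs in the comments are my own inference from the stated 2-point and 8-point thresholds, not figures taken from the paper.

```python
# Improvers on both fatigue and physical function at 52 weeks,
# as (count, group size), taken from the quoted Lancet passage.
improved = {"APT": (64, 153), "CBT": (87, 148), "GET": (94, 154), "SMC": (68, 152)}

# The 'clinically useful difference' was 0.5 x baseline SD, so the
# 2-point (CFQ) and 8-point (SF-36) thresholds imply baseline SDs
# of roughly 4 and 16 points respectively.
rates = {arm: n / total for arm, (n, total) in improved.items()}
for arm, rate in rates.items():
    print(f"{arm}: {rate:.1%}")  # APT 41.8%, CBT 58.8%, GET 61.0%, SMC 44.7%

print(f"CBT - SMC: {rates['CBT'] - rates['SMC']:.1%}")  # ~14.0%
print(f"GET - SMC: {rates['GET'] - rates['SMC']:.1%}")  # ~16.3%
```

So on the unrounded figures, the gaps over SMC alone are about 14% for CBT and 16% for GET.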

Ah yes, that makes sense now... Thanks for that Dolphin.

Would it be correct to say that both CBT and GET were responsible for clinically significant improvements in an additional 14-16% of participants, over and above the number of participants who improved due to SMC alone?
 

Sean

Senior Member
Messages
7,378
"What this trial isn't able to answer is how much better are these treatments than really not having very much treatment at all."

Michael Sharpe
That's gonna come back to bite them. Seriously limits the claims they can make based on the PACE results.


Besides which, their version of Specialist Medical Care didn't seem too specialist or extensive to me. It was about what the average conscientious GP would provide, and what most patients would normally receive (I hope), so it should probably really be described as Standard Medical Care (i.e. generic background medical support).

So it could be argued that there was effectively a 'no treatment' arm (no specific treatment), and that we do have a fair idea of how effective these treatments were compared to no specific treatment. The answer, according to the PACE trial, is: not particularly (and even that much is uncertain given the paucity of objective outcome measures, including for post-exertional effects).
 

oceanblue

Guest
Messages
1,383
Location
UK
The authors try to make out that they changed how they were going to analyse the data before they had knowledge of it.

There was no rule that I know of that they couldn't talk with the therapists about how things were going in general. Also, specifically, I think the Centre leader (not sure if that is the correct title) was called in to deal with some adverse events (I think Peter White held that title in one place and Trudie Chalder in another - not 100% sure of that). There were two centres at Barts, one at King's, etc. I think the CBT and GET therapists would have a good idea how things were going, and I'm sure there were plenty of ways, without breaking any rules, that the people running the trial could get an idea how the trial was going. Similarly, the SMC doctors, including lots of trainee psychiatrists, would know how things were going on the CGI etc., and I think feedback, formal or informal, could easily have been passed to the people running the trial. I'd have to think more about some of the details, but basically I think they could easily have picked up that there weren't large numbers in recovery or with large improvements.

And that was the big change in the statistical plan - the thresholds that required much larger improvements have tended to disappear.
My guess is that there are some rules/conventions that say they can't sit down and look at the data available to date during the trial, but in practice it must be impossible for them not to have a feel for how the study is going without needing to explicitly find out. So I'd bet they knew the results weren't terrific, especially compared with the SMC group.

I'd love to know how the timing of the PACE decision to change the analysis plan fitted in with the FINE trial getting sight of their data. I'm sure the PACE authors would have been tipped off by the FINE authors about the disastrous FINE results, which I guess would have been in 2009. My guess is that this, in conjunction with word that the PACE trial was not producing the kind of recovery rates dreamed of by the PACE authors, triggered the desire to change the analysis plan.
 

Dolphin

Senior Member
Messages
17,567
My guess is that there are some rules/conventions that say they can't sit down and look at the data available to date during the trial, but in practice it must be impossible for them not to have a feel for how the study is going without needing to explicitly find out. So I'd bet they knew the results weren't terrific, especially compared with the SMC group.

I'd love to know how the timing of the PACE decision to change the analysis plan fitted in with the FINE trial getting sight of their data. I'm sure the PACE authors would have been tipped off by the FINE authors about the disastrous FINE results, which I guess would have been in 2009. My guess is that this, in conjunction with word that the PACE trial was not producing the kind of recovery rates dreamed of by the PACE authors, triggered the desire to change the analysis plan.
Yes.

Here's what they said in their recent letter:
Changes to the original published protocol were made to improve either recruitment or interpretability, such as changing the proposed composite primary outcomes to single continuous scores. The analysis was guided by a Statistical Analysis Strategy (which we intend to publish), which was completed before analysis of outcome data, and which was much more detailed than the plan in the protocol; this is now conventional in the conduct of clinical trials.

and from the main paper:
Before outcome data were examined, we changed the original bimodal scoring of the Chalder fatigue questionnaire (range 0-11) to Likert scoring to more sensitively test our hypotheses of effectiveness. The two primary outcome measures15,16 are valid and reliable and have been used in previous trials.47 These secondary outcomes were a subset of those specified in the protocol, selected in the statistical analysis plan as most relevant to this report.

The statistical analysis plan was finalised, including changes to the original protocol, and was approved by the trial steering committee and the data monitoring and ethics committee before outcome data were examined.

We used continuous scores for primary outcomes to allow a more straightforward interpretation of the individual outcomes, instead of the originally planned composite measures (50% change or meeting a threshold score).10,30

The statistical analysis plan was written by the analysis strategy group and approved by the trial steering committee and data monitoring and ethics committee before the analysis was started.
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
You might find my PACE trial criticisms Wiki page useful

Indeed I did - thanks!

I found the '6 minute walking distance' graph a particularly good illustration. Pictures like this are so valuable for illustrating the reality; no written exposition can convey the proper understanding of these statistics as well as this.

What jumped out for me is that the differences between the baseline walking distances are almost as big as the difference between the GET 'improvement' and the other arms! Please correct me if I'm wrong, but weren't the participants supposed to be assigned to the 4 treatment groups randomly, and without reference to their level of disability? So isn't that level of difference in baseline scores in some sense just a measure of chance variation - of randomness?
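To get a feel for this, here's a toy simulation. Every number in it is invented purely for illustration (the arm sizes are roughly PACE-sized; the 330 m mean and 90 m SD are assumptions, not PACE data):

```python
import numpy as np

# Draw 4 arms of 160 patients each from one common population
# (mean 330 m, SD 90 m -- invented numbers) and see how far apart
# the arm means land by chance alone.
rng = np.random.default_rng(0)
gaps = []
for _ in range(10_000):
    arms = rng.normal(330, 90, size=(4, 160))
    means = arms.mean(axis=1)
    gaps.append(means.max() - means.min())

print(f"typical max-min gap between arm means: {np.median(gaps):.0f} m")
# Around 15 m with these assumptions: baseline gaps of that order are
# exactly what randomisation is expected to produce by chance.
```

So some baseline imbalance is entirely normal; it only becomes a worry when it is comparable in size to the claimed treatment effect.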

The other thing that jumps off the page is the clearly visible finding that the size of the improvement in the CBT, APT and SMC groups is just the same. So there's no difference at all here between these 3 groups, and thus no better performance for CBT vs APT on this test. Only GET - not CBT - delivered a greater increase in the distance walked, and because of the very nature of GET that really doesn't mean anything: the greater distance walked is, of course, no indication that the level of post-exertional malaise experienced 24-48 hours after the walking test was any less for this group.

The PACE trial results really have changed my view of the effectiveness of CBT and GET. I would previously have said something like this: They probably do deliver a small but significant benefit, and conceivably even a large one, for a very small proportion of people diagnosed with ME/CFS. But the trial results themselves are so spectacularly poor, even after the multiple bias-inducing flaws in the study and the systematic exclusion of people with ME, that I'm obliged to change my view. These results have convinced me that these therapies are almost completely useless, and perhaps 100% completely useless, for pretty much everyone with a diagnosis of ME/CFS, by whatever criteria they were diagnosed.

It's just extraordinary that such pathetic results, so poor as to change my opinion and convince me of the complete uselessness of these therapies, have been held up as if they were a success.
 

Enid

Senior Member
Messages
3,309
Location
UK
Quite agree, Mark - and frankly, CBT as practised locally here was dangerous. All those who refused to accept their "depression" luckily escaped (and had to fight off the coercion of the times).
 

Sean

Senior Member
Messages
7,378
Excellent graph. Would be good to have another one in the same style, immediately below it, showing the levelling off of any therapeutic gains after the first 12 weeks of treatment, i.e. how patients hit a low ceiling and do not improve any further.
 

Dolphin

Senior Member
Messages
17,567
Given it was fast-tracked, does that mean it got awkward reviews from one or more other journals?

The PACE Trial was fast-tracked by the Lancet. Given the four week turnaround mentioned below, it looks like it was submitted in January (it was published online on February 18).

Back in early August (2010), somebody (patient) mentioned this to me in an E-mail:
------
PDW took 6 months off for that [PACE Trial] but is now back in his office so presumably PACE has been submitted now or is close to it.
------
We already knew the PACE Trial was close of course from the BACME Conference (Oct 13-14) entry:
----
3.30 - 4.30
Professor Peter White
St Bartholomew's Hospital London
"PACE trial: so near yet so far"
(If outcome results are not yet published, Peter White will present the design, progress and baseline data from the trial)
-------
Also: http://www.bartscfsme.org/Documents/PROGRAMME 291110.pdf

To celebrate 25 years of the Barts CFS/ME Service
LMDT Training Day
29th November 2010
11.45am
PACE trial: Is knowledge more useful than belief? (Professor White will only give outcome results if the main paper has been published) - Professor Peter White, Professor of Psychological Medicine
Back in October, I wrote: "this was only announced in the last few weeks as far as I know so think he still expects results to be published soon."
------
I think (but am not definite) that there were one or more other talks planned that didn't happen before it was finally published (I kept thinking it was going to be published before these talks).

I wonder could they have submitted it somewhere else and not liked the reviewers' comments/suggestions, so decided to submit it to a different journal (i.e. the Lancet)??
---------------------------



http://www.meactionuk.org.uk/Comments-on-PDW-letter-re-PACE.htm

Professor Malcolm Hooper's Detailed Response to Professor Peter White's letter to Dr Richard Horton about his complaint re: the PACE Trial articles published in The Lancet

28th May 2011

Fast-track publication: It is not for us to comment on the editorial practices of a highly respected international journal: the concern is why this research came to be fast-tracked. On 21st March 2011 the executive editor at The Lancet who was responsible for publishing the PACE Trial article confirmed that it was fast-tracked at the specific request of Professor Peter White. When Ghali et al examined the fast-track process, they identified the main justifying criteria as being (i) importance to clinical practice; (ii) importance from a public health perspective; (iii) contribution to advancement of medical knowledge; (iv) ease of applicability in medical practice and (v) potential impact on health outcomes (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC102352/). The PACE Trial report does not appear to meet any of these criteria. The results were predictable (though slightly worse) compared with previous studies of CBT and GET, and the NICE Guidelines remain unchanged, so there appears to have been no valid reason for fast-tracking the PACE results. Why, then, did Professor White require it to be fast-tracked and why did The Lancet agree? Perhaps it is coincidence that the editor responsible is also the Head of the Fast-Track Team.


It is notable that The Lancet has made no attempt to address this point in response to Professor Hooper's complaint and has provided no explanation for agreeing to fast-track the article, other than confirming that the executive editor took Peter White on trust.



The Lancet's Information for Authors states: "For research papers that are judged to warrant fast dissemination, which will usually be randomised controlled trials, The Lancet will publish a peer-reviewed manuscript within four weeks of receipt" (despite its title in the documents, the PACE Trial was not a controlled trial).



Given that the PACE Trial article was fast-tracked, the question arose as to whether or not there had been sufficient time for scrupulous checking of the data by The Lancet's own statisticians. For the avoidance of doubt, when on 29th March 2011 the executive editor responsible for the publication of the PACE Trial articles was asked how The Lancet's statisticians could have let such conflicting interpretation of the data be published in a journal of its reputation, he confirmed that he had taken Peter White on trust, saying (verbatim): "We can only do what we can do. We have to take things on trust. We don't get the statisticians to go round and check every calculation that's been done. It's not up to the statisticians to advise on all the adding up."



However, when on 31st March 2011 this issue was raised with a different executive editor, he was astounded to hear that the executive editor responsible for the publication of the PACE Trial article had acknowledged that it had not been rigorously checked by The Lancet's own statisticians before publication; he said that all studies to be published go for scrutiny by the journal's own statisticians and that he himself had set up this process in 1990. It is thus not clear how meticulously The Lancet's statisticians checked the data before publication.



They certainly did not pick up (alternatively, they were not concerned about) the fact that on the SF-36 physical function score, it was possible for a participant to have a rating that was both normal and abnormal depending on which of the Investigators' various definitions was applied. Indeed, identical responses could both qualify a person as sufficiently fatigued for entry to the PACE trial and later allow them to be deemed to have normal levels of fatigue ("normal" meaning as defined by the Investigators themselves, which does not equate to recovered). What's more, as with physical function, it would be possible for a person to record a poorer score on the CFQ (Chalder Fatigue Questionnaire) on completion of the trial than at the outset, yet still be deemed to have attained normality on this primary outcome measure. It cannot be acceptable to describe PACE participants as having normal levels of fatigue and physical function when they could simultaneously be sufficiently disabled -- as judged by their levels of fatigue and physical function -- to have qualified for entry into the PACE Trial in the first place.



It remains Professor Hooper's view that it is astonishing that such a manifest contradiction survived The Lancet's supposedly rigorous peer review process, about which Richard Horton stated on 18th April 2011 on Australia's ABC Radio National: "the paper went through peer review very successfully; it's been through endless rounds of peer review and ethical review so it was a very easy paper for us to publish" (see below).



Quite certainly, those appointed by The Lancet to conduct this supposedly rigorous peer-review process failed to be concerned about the highly misleading associated Comment by Bleijenberg and Knoop published in The Lancet. Despite the absence of any reference to recovery in the PACE Trial article itself, Bleijenberg and Knoop lean heavily on this concept, stating: "PACE used a strict criterion for recovery: a score on both fatigue and physical function within the range of the mean plus (or minus) one standard deviation of a healthy person's score. In accordance with this criterion, the recovery rate (sic) of cognitive behaviour therapy and graded exercise therapy was about 30%". This is plainly wrong. Despite their stated intention to do so, White et al did not report the number of participants who recovered, yet The Lancet's peer-reviewers permitted such a blatant error to be published and hence to be cited unchallenged in the literature in the future. Indeed, in 2000 the UK's leading medical statistician, Martin Bland (then at St George's Hospital Medical School, London, but now Professor of Health Statistics, University of York), pointed out significant statistical errors in a paper by Simon Wessely and Trudie Chalder published in the BMJ; Wessely attempted to absolve himself from any blame but Bland was robust: "Potentially incorrect conclusions, based on faulty analysis, should not be allowed to remain in the literature to be cited uncritically by others" (Fatigue and psychological distress. BMJ, 19th February 2000; 320: 515-516). This is exactly the situation that now pertains at The Lancet, about which its senior editors appear entirely unconcerned.
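To put numbers on the contradiction Hooper describes (the thresholds are from the published protocol and Lancet paper; the arithmetic is mine): entry to the trial required an SF-36 physical function score of 65 or less, while the paper's post-hoc "normal range" was a score of 60 or more, so any score satisfying

$$60 \le \text{SF-36 physical function} \le 65$$

counts as both disabled enough to enter the trial and "normal" at the end of it.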
 

Dolphin

Senior Member
Messages
17,567
PACE papers planned

http://www.meactionuk.org.uk/whitereply.htm

We are planning to publish a paper comparing proportions meeting various criteria for recovery or remission, so more results pertinent to this concern will be available in the future.
So looks like they are probably going to turn the focus away from the definition of recovery in the PACE Trial protocol paper, which oceanblue estimated in a post (link=?) probably resulted in recovery rates of under 10% for CBT and GET.

and

Future papers that will include these additional measures are in preparation including reports of economic outcomes, different definitions of recovery and remission, mediators and moderators, and long-term follow up.
 

Dolphin

Senior Member
Messages
17,567
PDW: when it suits me, values at 24 weeks are not relevant; when it suits me, they are

PDW: when it suits me, values at 24 weeks are not relevant, when it suits me, they are - this seems to be what he is saying in the following extract:

Overlap in confidence intervals at 24 weeks is not relevant as the pre-specified primary end-point was 52 weeks and our primary analysis used data from all follow up times. Analyses were guided by a pre-specified analysis plan, which we plan to publish. We report both unadjusted results and adjusted results in our models. Figure 2 shows unadjusted differences. The final results, shown in figure 3, are adjusted for baseline value of the outcome, amongst other things. The final results are not directly comparable to a simple comparison because they incorporate outcomes from all time points, adjust for stratification factors and baseline values (recommended approaches), and for clustering within therapists. (source: http://www.meactionuk.org.uk/whitereply.htm)

I presume that last sentence refers to Figure 2, because it doesn't make sense regarding Figure 3.

So the p-values in Figure 2 look to be testing whether the curves differ when all time points are taken together, rather than at 52 weeks alone.

It is interesting how in the first part of this little paragraph, it's the values at 52 weeks that are important: "Overlap in confidence intervals at 24 weeks is not relevant as the pre-specified primary end-point was 52 weeks and our primary analysis used data from all follow up times."
However, when it comes to Figure 2, where there is a lot of overlap in the confidence intervals in 2F and 2G at 52 weeks - suggesting the 52-week values would not be statistically different - the p-value now incorporates the values at 12 weeks and 24 weeks as well.
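To see why pooling time points can do this, here's a minimal sketch of a repeated-measures model of the general kind they describe. The data and column names are entirely synthetic, purely to illustrate the technique - this is not their actual analysis or data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per participant per visit.
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), 3),
    "week": np.tile([12, 24, 52], n),
    "arm": np.repeat(rng.choice(["SMC", "GET"], size=n), 3),
})
df["fatigue"] = (25 - 0.05 * df["week"]
                 - 2.0 * (df["arm"] == "GET")
                 + rng.normal(0, 4, len(df)))

# The treatment effect is estimated from every follow-up visit at once
# (with a random intercept per participant), which is why a pooled
# p-value can be small even where 52-week confidence intervals overlap.
model = smf.mixedlm("fatigue ~ arm + week", df, groups=df["pid"])
print(model.fit().summary())
```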
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Is anyone aware of any info that we could use to challenge the use of the word 'moderate', for 'moderately effective'?

I've done a quick search for other research studies that use the term 'moderately effective' and, from what I've seen so far, the usage seems to vary, indicating a change anywhere between 7% and 50%.
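One angle that occurs to me (this is textbook statistics, not anything from the paper itself): 'moderate' does have a conventional meaning for standardised effect sizes, so one could ask whether the PACE differences clear that bar. For a between-group difference,

$$d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}$$

and Cohen's widely cited benchmarks are d of about 0.2 (small), 0.5 (moderate) and 0.8 (large). The authors' own 'clinically useful difference' of 0.5 SD sits exactly at the 'moderate' benchmark, so one test of 'moderately effective' is whether the observed between-group differences actually reach 0.5 SD.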

Or any other ideas about how it might be challenged?
 

Dolphin

Senior Member
Messages
17,567
Is anyone aware of any info that we could use to challenge the use of the word 'moderate', for 'moderately effective'?

I've done a quick search for other research studies that use the term 'moderately effective' and, from what I've seen so far, the usage seems to vary, indicating a change anywhere between 7% and 50%.
For what it's worth, this unpublished letter: http://forums.phoenixrising.me/show...-by-the-Lancet&p=173229&viewfull=1#post173229 challenged it, as did this published one: http://forums.phoenixrising.me/show...and-editorial)&p=179768&viewfull=1#post179768
 

oceanblue

Guest
Messages
1,383
Location
UK
(from Peter White's reply to Dr Richard Horton re Malcolm Hooper's complaint)

"We are planning to publish a paper comparing proportions meeting various criteria for recovery or remission, so more results pertinent to this concern will be available in the future."

and

"Future papers that will include these additional measures are in preparation including reports of economic outcomes, different definitions of recovery and remission, mediators and moderators, and long-term follow up."

So looks like they are probably going to turn the focus away from the definition of recovery in the PACE Trial protocol paper, which oceanblue estimated in a post (link=?) probably resulted in recovery rates of under 10% for CBT and GET.

I think this is being driven by the fact that they are obliged to publish a cost-benefit analysis, which is likely to have a big impact on clinical practice. Small gains for 1 in 7-8 patients, for a therapy costing around £1,000, won't cut it. But they can come up with huge numbers attached to the value of 'recovery', which might make the expenditure look worthwhile if they can boost the 'recovery' figures a little. As things stand, a lot of CBT and GET programmes in the UK might not appear economically worthwhile to a cash-strapped NHS.
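A rough back-of-envelope on that point, using the ~14-16% net improvement figures from earlier in the thread and the ~£1,000 course cost above (both approximate):

$$\frac{£1{,}000}{0.15} \approx £6{,}700 \text{ per additional patient with a clinically useful improvement}$$

which is the kind of number a cash-strapped commissioner would weigh against whatever value gets attached to a 'recovery'.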
 

Dolphin

Senior Member
Messages
17,567
I was just looking up the 6 Minute Walking Test and came across this study, which has figures that could easily be quoted. It also gives the individual scores for participants, which I always find a bit interesting (we didn't get those with the PACE Trial, for example):

Full free text:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1339640/pdf/bmjcred00224-0015.pdf

Six minute walking test for assessing exercise capacity in chronic heart failure.

Br Med J (Clin Res Ed). 1986 March 8; 292(6521): 653-655.

D P Lipkin, A J Scriven, T Crake, and P A Poole-Wilson

Twenty-six patients, mean age 58 years (range 36-68), with stable chronic heart failure, New York Heart Association class II-III, and 10 normal subjects of a similar age range were studied.

Exercise capacity was assessed by determining oxygen consumption reached during a maximal treadmill exercise test and by measuring the distance each patient walked in six minutes.

There were significant differences in the distance walked in six minutes between normal subjects, patients with class II heart failure, and those with class III heart failure (683 m, 558 m, and 402 m respectively; p less than 0.003).

The relation between maximal oxygen consumption and the distance walked in six minutes was curvilinear; thus the distance walked varied considerably in those with a low maximal oxygen consumption but varied little in patients and normal subjects with a high maximal oxygen consumption.

All subjects preferred performing the six minute walking test to the treadmill exercise test, considering it to be more closely related to their daily physical activity.

The six minute test is a simple objective guide to disability in patients with chronic heart failure and could be of particular value in assessing patients with severe heart failure but less useful in assessing patients with mild heart failure.
 

Dolphin

Senior Member
Messages
17,567
This is the 1000th post

I thought I'd write an easy-to-read message for the 1000th message.

Also, a bit of pacing advice: if you have just started reading this thread and have got to here, it might be time to have a little break. ;)