Andrew Gelman: The PACE trial and the problems with discrete, yes/no thinking

Daisymay

Senior Member
Messages
754
Ugh - just saw the stupid comments from LTR. I bet they are from a patient rather than from someone deliberately trying to make patients look bad too.

Or is it from a troll? We have a blog post triggered by an email from Wessely, and then we have this person making these comments, conveniently fulfilling the psychiatrists' propaganda re ME/CFS patients.
 

worldbackwards

Senior Member
Messages
2,051
Idiot commenter said:
We are out of strategy time and in reality have been.
This I find baffling. Partly because we're finally getting somewhere on a number of fronts, but also because it's such a senseless attitude - "We're out of strategy time, so I better start ranting senselessly like a drunk on a bus at the man who half agrees with us, until presumably he doesn't any longer." That ought to sort things out.
 

Dolphin

Senior Member
Messages
17,567
Good comment, I think:


Placeboid says:
January 13, 2016 at 11:48 am
It is true that the PACE trial has some of the strengths of a good-quality RCT, but Wessely glossed over some major weaknesses in his Mental Elf article, which were quoted here in the current blog.

1) If blinding did not really matter in RCTs, drug trials would not bother with it as much as they do.

Wessely suggests: “One therapy was rated beforehand by patients as being less likely to be helpful, but that treatment was CBT. In the event, CBT came out as one of the two treatments that did perform better. If it had been the other way round; that CBT had been favoured over the other three, then that would have been a problem. But as it is, CBT actually had a higher mountain to climb, not a smaller one, compared to the others.”

What he did not mention is that, while CBT may not be synonymous with simplistic “positive thinking”, CBT still encourages optimism on the issue of improvement and recovery from CFS, and aimed to change patients’ perceptions about their symptoms and disability. The therapy manuals (and presumably the therapists) also told patients how effective and safe CBT is. This may be like a drug trial biasing patients in favour of the drug on every dose, and then saying that such bias does not matter because the drug was rated poorly before patients took their first dose.

Wessely claims the two most recent systematic reviews rated PACE as having a low risk of bias. However, both those systematic reviews (the Cochrane review and at least the full version of the P2P review) rated the PACE trial as having a high risk of bias in terms of (non)blinding.

Without properly accounting for all biases e.g. the potential biases of subjective outcomes in non-blinded trials, an otherwise well-designed RCT may simply be measuring those biases. Subjective outcomes are important, but less reliable in trials where the effects are small to modest and contradicted by objective outcomes.

2) The PACE trial did have a high retention rate, but excluded 80% of candidates, so it is possible that it was a highly pruned cohort. Those less likely to stick around or less able to tolerate exercise may have simply been less likely to join the trial in the first place. Those who had a preference for a particular therapy offered in the trial were explicitly excluded (which is fair enough, but can interfere with assumptions about the generalisability of the results).

3) Wessely gives the impression that changes to the protocol were minor and unimportant. But as covered by others elsewhere, some changes were major and likely inflated estimates of clinical response by several fold, which is not minor. All thresholds for clinical improvement in fatigue and physical function, on an individual level, were post-hoc additions. There is also evidence that the recovery criteria were changed after the authors were already familiar with each component of those criteria. Overlap between full recovery and the trial entry criteria for severe, disabling fatigue is unacceptable. There is still unpublished safety data too.

4) The improvement of 6MWD scores in the GET group was small, did not reach their own definition of a clinically useful difference (0.5 SD), and has since been attributed (in the Lancet Psychiatry editorial on the mediation paper) to pushing harder on the test rather than actually being fitter. Walking was the most commonly chosen activity in the GET group, so there may have been a training effect. Overall, a range of objective outcomes from the trial suggests no meaningful improvements, which contradicts the assumptions and goals of CBT and GET, i.e. to increase function and activity. Similarly, in other trials of CBT/GET, actometers demonstrated no significant difference between groups at follow-up. It is clear that these therapies do not improve activity or function in the way commonly promoted.

5) An independent re-analysis of the individual-level PACE trial data is almost useless if it simply does the same analyses that the PACE group themselves conducted. Patients want the protocol-specified outcomes.
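
To put a number on the 0.5 SD point in item 4, here's a rough sketch of the arithmetic with made-up figures (a minimal illustration only, not the actual PACE data):

```python
# Illustrative only: hypothetical numbers, not the actual PACE trial figures.
# The trial's own definition of a clinically useful difference was 0.5 SD of
# the baseline measure, so a between-group difference on the six-minute
# walking distance (6MWD) has to clear that bar to count.

baseline_sd_6mwd = 100.0    # hypothetical baseline SD of 6MWD, in metres
observed_difference = 35.0  # hypothetical GET-vs-comparator difference, in metres

threshold = 0.5 * baseline_sd_6mwd  # 50 m with these made-up numbers

print(f"0.5 SD threshold:    {threshold:.0f} m")
print(f"Observed difference: {observed_difference:.0f} m")
print("Clinically useful?  ", observed_difference >= threshold)
```

With numbers in this ballpark the observed difference falls short of the trial's own bar, which is the comparison the comment is making.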
 

Cheshire

Senior Member
Messages
1,129
I'm a bit perplexed by Andrew Gelman's comment (edit: response to @Jonathan Edwards):

After my first couple of posts on PACE, I was struck by the complete lack of defenders of the study. In addition the behavior of journal editor Richard Horton didn’t seem appropriate, and I recalled that whole thing from ten years ago with Lancet’s Iraq mortality survey. Seeing as I’d heard nothing good about PACE from anybody, once I did hear from Wessely, it seemed like the best course of action to post his reactions side by side with the criticisms of the study.

It seems like he's trying to be the good guy of the classroom, the one who attempts to reconcile everybody, but he doesn't question why nobody came to defend the PACE trial apart from its investigators.

It's a bit like his conclusion sentence:
So on the substance—setting aside the PACE trial itself—it seems to me that Wessely and the critics of that study are not so far apart.
Like "in fact, nobody's naughty, everyone is so nice".
Reminds me of Voltaire's Candide "Tout va pour le mieux dans le meilleur des mondes" (approximatively "Everything is fine in the best of the worlds").
 

Esther12

Senior Member
Messages
13,774
He seems to see the blogs as just public musings and places to start a discussion, rather than the equivalent of on-line articles or the sort of things Coyne does. It is a bit of a strange format imo, but he's quite open about it, so I think it's worth trying to judge his writing by the standards he sets for himself.

Like "in fact, nobody's naughty, everyone is so nice".
Reminds me of Voltaire's Candide "Tout va pour le mieux dans le meilleur des mondes" (approximatively "Everything is fine in the best of the worlds").

It's really normal for people who come fresh to a controversy with two opposing sides to start from the assumption that the truth is probably somewhere in the middle. I can notice this somewhat misleading bias in myself: a desire to feel like the sensible one who can appreciate all sides, instead of taking the time to really dig into the nitty-gritty and work out which arguments are flawed and which are reasonable. There are so many complicated disputes in the world that one can never hope to understand all of them, and yet it's pleasing to feel that one has reasonable views on the important topics of the day... how frustrating to be human!

A lot of good and useful blog comments under the post too, thanks to all responsible... I just always pay most attention to the worst ones.
 

Dolphin

Senior Member
Messages
17,567
Another good, if long, comment, I think:


Zach says:
January 13, 2016 at 4:44 pm http://andrewgelman.com/2016/01/13/pro-pace/#comment-259243

Andrew, thanks for your continued interest in the PACE trial. It’s a complex and sometimes nuanced subject, and I think it raises lots of interesting questions in relation to methodology and transparency in medical trials in general. For example, issues such as: “outcome switching” (i.e. using post-hoc endpoints instead of pre-defined endpoints); not sharing publicly-funded data with other researchers and the public; the reliability of open-label methodology; non-blinding of data; the lack of an appropriate placebo control or comparison intervention; bias surrounding self-report measures; influencing patient expectations by informing them that an intervention is highly effective, etc.

As others have said, a trial with such biases and weaknesses in design and methodology would be expected to demonstrate efficacy even for homeopathy. It is interesting to note that CBT demonstrated no efficacy for any of the objectively measured outcomes used in the trial, and demonstrated modest efficacy only for the self-report measures [1,2,3]. A lack of improvement in objectively measured outcomes suggests that the illness itself has not been modified by the interventions. This is especially the case in this trial, which attempted to increase physical fitness and activity but failed on objective measures.

The methodology used for the PACE trial would not be accepted as robust evidence for the approval of pharmaceuticals; pharmaceutical trials would normally require blinding, and a placebo control arm and/or comparison with an established intervention. (PACE used no placebo control, and it lacked a robust comparison with an established and previously tested intervention.)

Apart from the outcome switching, the lack of an appropriate comparison or control arm, and lack of transparency, perhaps there’s no single stand-out issue that would make a casual observer sit up and take notice with regards to methodological weaknesses, but when all of the issues are combined, it becomes interesting.

From a statistical point of view, there may not be much of interest in it. Most of the criticisms aren’t related to statistical issues, but to other methodological issues. Perhaps the most interesting statistical issue is the recovery criteria [4]. Some basic errors were involved here, such as using a non-representative demographic sample to determine recovery thresholds, and inappropriately using a mean and standard deviation to calculate the normal range for data that doesn’t have a normal distribution. This is what has given us the ridiculous situation whereby a patient could deteriorate on both of the primary outcome measures (fatigue and physical function) and be classed as ‘recovered’. This relates not only to the original Lancet commentary [5] but also to a 2013 recovery paper [4]. Obviously, if “recovery” can indicate deterioration, then this isn’t helpful. Unfortunately, health care professionals, family, and friends see the discussion and headlines and can take them at face value. Even health professionals and clinical decision makers are rarely expected to dig into data to assess whether recovery criteria are appropriate.

The factually incorrect Lancet commentary resulted in erroneous media reports such as: “About 30 per cent of patients given cognitive behavioural therapy (CBT) or graded exercise made a full recovery to normal levels of activity, the study found…” [6] Some media headlines were outrageously exaggerated; The Daily Mail reported that ME patients should “push themselves to their limits” for the “best hope of recovery” [7].

Such headlines and misinformation have a cumulative effect on the patient community; they are not simply forgotten about and cannot simply be labelled as historic issues, especially if the misinterpretations of data have not been corrected. If health-care providers believe that ME/CFS is not a real illness and can simply be cured with some exercise, then it harms patient care. ME/CFS is a notoriously neglected illness, according to the patient community.

In the discussions on your blog, we have mainly been discussing the results at 52 weeks post-randomisation [1], but long-term outcomes at a median of 2.5 years have also been published, and they showed no difference between trial arms [8]; i.e. when CBT and GET were added to standard medical care, there was no clinical benefit in the long term. (The CBT and GET arms were no different from those that did not receive these treatments.)

Unfortunately, the PACE investigators have not been clear about these outcomes either in their discussions in the published paper or in their communications with the media. They have repeatedly stated that there was a ‘sustained’ clinical benefit from CBT and GET, which the ME community has had to spend a lot of time correcting [13]. For example, both the follow-up paper itself and the accompanying press release state: “Researchers have found that two treatments for Chronic Fatigue Syndrome have long term benefits for people affected by the condition.”

However, during the period between 1 year and 2.5 years after randomisation, SMC & APT actually performed better in the self-report primary outcomes than CBT & GET. So it might have been the case that CBT and GET inhibited improvement in health after 52 weeks. This is the opposite of a ‘sustained benefit’.

I’m not aware of the investigators having publicly clarified to the media that there was no treatment effect in the long term. In addition to the confusion surrounding the recovery outcomes, this lack of clarity adds further confusion to the mix.

A further media onslaught of misinformation, related to the follow-up study, has further added to the confusion surrounding the illness. In a glowing report of the PACE trial, The Daily Telegraph repeated the claims of the press release and went as far as to claim that ME/CFS is not a chronic illness [9] (this claim has since been retracted after complaints from the community [13]). Even the NHS Choices website (which is usually a reliable source) has repeated the spin that there was a long-term benefit from CBT and GET [10].

One other thing. In your blog, you refer to Simon Wessely’s comment on the National Elf Service blog [11]. In that comment, Simon Wessely says that he had limited involvement with the PACE trial: “I was not on the ship, neither as passenger or crew. I helped recruit some patients to the study from our clinic, as did many doctors, but that was as far as it went. I am not an author on the ship’s log, but I am not a neutral observer.” [11] However, this doesn’t seem to be supported by other available information; e.g. the trial protocol says: “The authors thank Professors Tom Meade, Anthony Pinching and Simon Wessely for advice about design and execution.” [12] And the acknowledgements of the 2011 paper say: “Simon Wessely commented on an early draft of the report.” [1] I just wanted to mention this to clarify that Simon Wessely is not an independent observer.

In conclusion, there are many different methodological issues associated with the PACE trial, but there are also issues related to misinformation surrounding the promotion of the therapies, and how this affects the patient community over the long term.

References:

1. White PD, Goldsmith KA, Johnson AL et al. (2011) Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 377:823-36.
2. McCrone P, Sharpe M, Chalder T, Knapp M, Johnson AL, Goldsmith KA, White PD. Adaptive pacing, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome: a cost-effectiveness analysis. PLoS ONE 2012; 7: e40808.
3. Chalder T, Goldsmith KA, White PD, Sharpe M, Pickles AR. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. Lancet Psychiatry 2015; 2:141–52
4. White PD, Goldsmith K, Johnson AL, Chalder T, Sharpe M. (2013) Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med. 43:2227-35.
5. Bleijenberg G, Knoop H. Chronic fatigue syndrome: where to PACE from here? Lancet 2011; 377:786-8
6. The Times: http://www.thetimes.co.uk/tto/health/news/article2917876.ece
7. Daily Mail: http://www.dailymail.co.uk/health/a...-exercise-best-hope-recovery-finds-study.html
8. Sharpe M, Goldsmith KA, Johnson AL, Chalder T, Walker J, White PD. Rehabilitative treatments for chronic fatigue syndrome: long-term follow-up from the PACE trial. Lancet Psychiatry 2015; 2:1067-74.
9. Telegraph: http://www.telegraph.co.uk/news/hea...f-ME-with-positive-thinking-and-exercise.html
10. NHS Choices: http://www.nhs.uk/news/2015/10Octob...rapy-useful-for-chronic-fatigue-syndrome.aspx
11. http://www.nationalelfservice.net/o...syndrome-choppy-seas-but-a-prosperous-voyage/ (accessed 13th Jan 2016.)
12. White PD, Sharpe MC, Chalder T, DeCesare JC, Walwyn R; PACE trial group. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurol. 2007 Mar 8;7:6.
13. http://www.meassociation.org.uk/201...es-with-the-daily-telegraph-19-november-2015/
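
On the recovery "normal range" point above (using a mean and standard deviation for data that doesn't have a normal distribution), here's a minimal simulation sketch of why that goes wrong for a skewed, ceiling-limited score. All numbers are invented for illustration; they are not SF-36 population norms or PACE data.

```python
# Illustrative simulation only -- a hypothetical mixture, not SF-36 norms or PACE data.
# Under a normal distribution, mean - 1 SD sits at roughly the 16th percentile,
# which is why it is sometimes used as the lower bound of a "normal range".
# For a skewed, ceiling-limited score, that rule of thumb breaks down: the long
# left tail inflates the SD, and mean - 1 SD drops well below the percentile it
# is meant to approximate, giving a far more lenient threshold than intended.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population on a 0-100 scale: ~85% near the ceiling,
# ~15% with substantial limitation spread across lower scores.
healthy = rng.uniform(90, 100, size=int(n * 0.85))
limited = rng.uniform(20, 80, size=int(n * 0.15))
scores = np.concatenate([healthy, limited])

mean, sd = scores.mean(), scores.std()
threshold = mean - sd              # "normal range" lower bound via mean - 1 SD
pct16 = np.percentile(scores, 16)  # the percentile that rule is meant to mimic
median = np.median(scores)

print(f"median score:           {median:.1f}")
print(f"mean - 1 SD threshold:  {threshold:.1f}")
print(f"actual 16th percentile: {pct16:.1f}")
```

With this made-up mixture the mean - 1 SD cut-off lands roughly twenty points below the 16th percentile it is supposed to stand in for, which is how a "normal range" boundary can drop low enough that a patient could deteriorate on a primary outcome and still be counted inside it.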
 

BurnA

Senior Member
Messages
2,087
Is there any way we could conduct, or apply for funding to perform, an unblinded trial of a drug on a group of cherry-picked patients whom we tell that the drug works really well, with no objective outcomes, and see what response we get?
 

A.B.

Senior Member
Messages
3,780
BurnA said:
Is there any way we could conduct, or apply for funding to perform, an unblinded trial of a drug on a group of cherry-picked patients whom we tell that the drug works really well, with no objective outcomes, and see what response we get?

Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma
http://www.nejm.org/doi/full/10.1056/NEJMoa1103319#t=article
 

Attachments: objective.png · subjective.png
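
As a rough sketch of the general point behind that example, here's a toy simulation with invented numbers (not the NEJM data): when an outcome is self-reported in an unblinded comparison where one arm expects to benefit, a reporting bias alone can create an apparent "treatment effect" that the objective measure does not show.

```python
# Toy simulation -- invented numbers, not the albuterol/asthma trial data.
# Both arms have identical underlying (objective) outcomes; the arm that
# expects to benefit adds a reporting bias to the self-reported score only.
# The subjective comparison then shows a difference the objective one doesn't.

import numpy as np

rng = np.random.default_rng(42)
n_per_arm = 200

# Objective outcome (e.g. a physiological measurement): no true treatment effect.
objective_control = rng.normal(loc=50, scale=10, size=n_per_arm)
objective_treated = rng.normal(loc=50, scale=10, size=n_per_arm)

# Subjective outcome: same underlying state plus measurement noise, with an
# extra +5-point reporting bias in the arm told the treatment works well.
reporting_bias = 5.0
subjective_control = objective_control + rng.normal(0, 5, n_per_arm)
subjective_treated = objective_treated + rng.normal(0, 5, n_per_arm) + reporting_bias

print("Objective difference (treated - control): "
      f"{objective_treated.mean() - objective_control.mean():+.1f}")
print("Subjective difference (treated - control): "
      f"{subjective_treated.mean() - subjective_control.mean():+.1f}")
```

The underlying state is identical in both arms; the gap appears only in what people report, which is why blinding and objective outcomes matter so much here.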

Dolphin

Senior Member
Messages
17,567
(Not important)

By the way, I followed the link for this comment:

When CBT is hammered into the RCT framework the end result is often an increase in clinical uncertainty
http://www.methodsappraisal.com/2012/08/bmc-psychiatry-adult-adhd/

I don't think it tells us anything about CBT in general. It was simply a critique of a particular ADHD trial where all participants got CBT, with some getting medication and others not. The authors of the trial had presented it as a controlled trial of CBT when in fact there was no control for CBT, as all participants had CBT.

It mentioned reviewers' guidelines for BMC Psychiatry, which I thought might be interesting given how they were described, but in fact they were not very interesting:
http://webcache.googleusercontent.com/search?q=cache:xKNqE07Y6nAJ:www.biomedcentral.com/bmcresnotes/about/reviewers &cd=1&hl=en&ct=clnk&gl=ie
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
I thought this comment was worth looking at since it's so weird.

There's more Expirate to go around on the Bad Science forum. ;)

BurnA said:
Is there any way we could conduct, or apply for funding to perform, an unblinded trial of a drug on a group of cherry-picked patients whom we tell that the drug works really well, with no objective outcomes, and see what response we get?

I'm sure they'd write something nasty about such a trial on sciencebasedmedicine.org
 