• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


PACE Trial and PACE Trial Protocol

oceanblue

Guest
Messages
1,383
Location
UK
The rate of psychiatric disorders in the PACE Trial was 47%. Reading some papers, that might not seem high.

In their 1998 book, Friedberg & Jason point out that the SCID (used in the PACE Trial) finds lower rates of psychiatric disorders in CFS than other screening methods such as the DIS. The figures they quote for SCID studies are:
- Hickie et al. (1990) 24.5%;
- Lloyd et al. (1990) 21%;
and
- Taylor & Jason (1998) 22%.
So the rates of current psychiatric disorders in PACE Trial patients (47%) are quite high.
That's really interesting - did the CDC use SCID for their studies too? But then PACE recruited on the Oxford criteria, which are usually supposed to have higher levels of psychiatric disorders. Does PACE give rates split by diagnostic criteria?

anyway, great digging
 

Dolphin

Senior Member
Messages
17,567
As I said, I'm not quite sure what exactly they are on about here, but I suspect it's something to do with post-hoc testing, i.e. if they are looking for any possible comparison (post hoc) there may be a convention that they need to use p<0.01 (which might explain why they feel the need to mention it). But by specifying in advance which comparisons are key they can go back to using p<0.05. As they didn't spell out which comparisons they had decided upon, and I suspect they won't tell us, it's probably not going to be fruitful to pursue this one. But it does give the impression they are making things up as they go along to suit.
I think you're right that they might not tell us.

Not sure about the post-hoc bit. If one was doing gene expression research for example, one could be testing thousands of things at once which one knows in advance i.e. it isn't post-hoc, but one most likely wouldn't use a 0.05 threshold. Using p<0.01 rather than p<0.05 would basically be done for multiple comparisons which don't necessarily have to be post-hoc comparisons.

They don't mention thresholds in Table 6 so might say they weren't misleading if some were p<.01 and some were p<.05. However, they can be "got" on using the 95% CIs in the Web Appendix.

Anyway, all very frustrating. I imagine the public probably think that researchers not associated with drug companies are more honest and trustworthy.
 

Dolphin

Senior Member
Messages
17,567
That's really interesting - did the CDC use SCID for their studies too? But then PACE recruited on the Oxford criteria, which are usually supposed to have higher levels of psychiatric disorders.
The only CDC studies I can think of that used it used the empiric criteria, which aren't really the Fukuda criteria. These are the criteria where Jason and team found that 38% of those with Major Depressive Disorder but not CFS fit the (so-called) empiric criteria. The CDC's prevalence figures using this definition jumped from 0.235% to 2.54%! So I think CDC figures using those criteria are a waste of time. Peter White and co hide behind them in Lawn et al., saying the CDC found a rate of 57% using the SCID.

But as you suggest, perhaps the figure of 47% isn't high for an Oxford criteria cohort.

Does PACE give rates split by diagnostic criteria? anyway, great digging
If you mean ME/CFS criteria, no.

In
Psychiatric misdiagnoses in patients with chronic fatigue syndrome.
Lawn T, Kumar P, Knight B, Sharpe M, White PD.
JRSM Short Rep. 2010 Sep 6;1(4):28.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2984352/
one gets information on a subset of the PACE Trial patients and the psychiatric disorders/diagnoses they had.

The final paper tells us that 34% had a depressive disorder.
 

oceanblue

Guest
Messages
1,383
Location
UK
The rate of psychiatric disorders in the PACE Trial was 47%. Reading some papers, that might not seem high.

But some papers use the CDC empiric criteria (rubbish) or other methods to diagnose psychiatric disorders such as questionnaires which may not be accurate as, for example, symptoms like fatigue, sleep problems, concentration/cognitive problems can be seen as evidence of psychiatric problems.

In their 1998 book, Friedberg & Jason point out that the SCID (used in the PACE Trial) finds lower rates of psychiatric disorders in CFS than other screening methods such as the DIS. The figures they quote for SCID studies are:
- Hickie et al. (1990) 24.5%;
- Lloyd et al. (1990) 21%;
and
- Taylor & Jason (1998) 22%.
So the rates of current psychiatric disorders in PACE Trial patients (47%) are quite high.

Looks like you've identified a substantial and important difference between PACE Oxford criteria-defined participants and those in other CFS studies.
 

Sam Carter

Guest
Messages
435
...

Ok. Are they claiming they thought GET and CBT would come out better on everything? - doesn't seem like clinical equipoise (if being awkward). ...

Clinical equipoise is satisfied if there exists genuine doubt in the broader medical community about the relative efficacy of different treatments, i.e. clinical equipoise in a trial does not require that the investigator him/herself has no personal preference.
 

Dolphin

Senior Member
Messages
17,567
Clinical equipoise is satisfied if there exists genuine doubt in the broader medical community about the relative efficacy of different treatments, i.e. clinical equipoise in a trial does not require that the investigator him/herself has no personal preference.
Ok, thanks. I might have picked that term up incorrectly - I haven't read much on medical ethics. I suppose I was thinking of the ethical dilemma of offering patients a treatment which one believes is inferior in a myriad of ways to another one. So for example, when applying for the research, if they had said they believed that GET and CBT would come out better on lots and lots of outcome measures, would they have got ethical approval?

(repeating myself)
They hedge their bets with this:
Differential outcomes
Because CBT and GET are both based on a graded exposure to activity, they may preferentially reduce disability, whilst APT, being based on the theory that one must stay within the limits of a finite amount of "energy", may reduce symptoms, but at the expense of not reducing disability.

---
Not sure if I've mentioned it, but if one looks at Table 3,
"Bonferroni values adjusted for five comparisons for every primary outcome",
they have made adjustments for the multiple comparisons there.

There are 40 comparisons in Table 6.
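For anyone wanting to check the arithmetic, a Bonferroni adjustment simply divides the significance threshold by the number of comparisons in the family. A minimal sketch (the counts of five and forty come from the posts in this thread, not from anything I can verify against the paper here):

```python
def bonferroni_threshold(alpha: float, m: int) -> float:
    """Per-comparison threshold that keeps the family-wise error rate <= alpha."""
    return alpha / m

# Table 3 reportedly adjusts for five comparisons per primary outcome,
# which turns the usual 0.05 into the familiar 0.01:
t3 = bonferroni_threshold(0.05, 5)    # 0.01

# Table 6 contains 40 comparisons; the same logic would demand a
# much stricter per-comparison threshold:
t6 = bonferroni_threshold(0.05, 40)   # 0.00125
```

Bonferroni is deliberately conservative, but the point stands: 40 unadjusted p<.05 comparisons is a very different evidential standard from five adjusted ones.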
 

Dolphin

Senior Member
Messages
17,567
I just thought this through a little:
(from published protocol)

Results from all analyses will be summarised as differences between percentages or means together with 95% confidence limits (CL). The significance level for all analyses of primary outcome variables will be P = 0.05 (two-sided); for secondary outcome variables, P = 0.01 (two-sided) unless profiles of response can be specified in advance.
This presumably means that they believe some of these are suitable for the 0.05 threshold and some for the 0.01 threshold.

Did the secondary outcome measures they happen to show us all come from the 0.05 group (i.e. the ones they were most convinced CBT and GET came out best on)?

That would mean we are not seeing the 0.01 ones, the ones they were less convinced CBT and GET came out best on.
 

Sam Carter

Guest
Messages
435
... So for example, when applying for the research, if they had said they believed that GET and CBT would come out better on lots and lots of outcome measures, would they have got ethical approval? ...

Yes, sadly -- in fact, this is exactly what happened. White was clear upfront(*) that _he_ believed CBT and GET to be superior to APT (and pacing, in general). This is the difference between 'personal equipoise' and 'clinical equipoise'.

(*) Trial Protocol p25
"""""""""""""""""""""""""""""""""""""""""""""""""
The CMO's working group concluded:
"Therapeutic strategies that can enable improvement include graded exercise/activity
programmes, cognitive behaviour therapy, and pacing." However, this positive statement
was balanced in the report by other statements: first, the concern of patient organisations
that graded exercise therapy (GET) may worsen symptoms and disability, and second, that
pacing, although widely advocated by patients' organisations, is as yet unsupported by
scientific evidence.
""""""""""""""""""""""""""""""""""""""""""""""""""""
 

Dolphin

Senior Member
Messages
17,567
Thanks Sam.
I might get around to reading up on medical ethics at some stage.

However, to get back to the other point relating to this.
There were 14 headings for secondary outcome measures.
However, that isn't the number of secondary outcome measures, e.g. one of the items is: "An operationalised Likert scale of the nine CDC symptoms of CFS".
It would take a bit of thought to count them all. But let's say 25 secondary outcome measures with 5 comparisons each. That's 125 comparisons.

You don't normally "get" to have 125 comparisons with p<.05 as one can pick up findings by chance (in 1 in 20 comparisons).
That's why they are referring to p<.01.
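To make the chance-findings point concrete: assuming, purely for illustration, 125 independent comparisons and no real treatment effects at all, one can work out how many "significant" results p<.05 would hand out anyway:

```python
def expected_false_positives(alpha: float, m: int) -> float:
    """Expected number of spurious 'significant' results across m null tests."""
    return alpha * m

def family_wise_error_rate(alpha: float, m: int) -> float:
    """Probability of at least one false positive, assuming independent tests."""
    return 1 - (1 - alpha) ** m

m = 125  # illustrative count of secondary-outcome comparisons from above
efp = expected_false_positives(0.05, m)    # about 6 chance "findings"
fwer = family_wise_error_rate(0.05, m)     # close to 1: virtually guaranteed
```

Dropping the threshold to p<.01 cuts the expected chance findings from about six to about one, which is presumably the motivation behind the protocol's wording.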

Results from all analyses will be summarised as differences between percentages or means together with 95% confidence limits (CL). The significance level for all analyses of primary outcome variables will be P = 0.05 (two-sided); for secondary outcome variables, P = 0.01 (two-sided) unless profiles of response can be specified in advance.

However, what I think they are referring to in their statistical analysis plan is that sometimes one can miss things if one uses p<.01. So one can get a sort of "derogation" to use p<.05 for the ones one would be concerned would be missed at p<.01. However, I don't believe they would be allowed to do it for all the "25" outcome measures. [This is sort of "amateur" information I have picked up and am summarising - most papers I have seen would simply have used p<.01 for these comparisons, or even stricter criteria.]

So, to repeat the point in my last message, some secondary outcome measures will use the p<.01 threshold. But which ones are they? None of the ones they picked out used this threshold (as they say the figures in the web appendix are 95% CIs underneath). That means we are being shown cherry-picked outcome measures and could/should ask to see the outcome measures where they used p<.01 (i.e. the ones they were less confident would be different).

ETA: Actually, looking at the protocol again, perhaps they have permission to use 95% CLs - although it doesn't look like it makes sense to use 95% CLs if it's a p<.01 test. Anyway, if it is the case that p<.01 is being used for some of the data presented, they haven't said which results it applies to, either in the table or in the text with all its talk about the differences on the secondary outcome measures.
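On the CI/threshold mismatch: a two-sided test at level α rejects exactly when the (1−α) confidence interval excludes the null value, so 95% CIs pair with p<.05 and 99% CIs with p<.01. A sketch with invented numbers (not PACE data) shows how a result can clear the 95% bar yet fail the p<.01 one:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal, for a large-sample (z-based) approximation

def two_sided_p(estimate: float, se: float) -> float:
    """Two-sided p-value for H0: true effect = 0."""
    z = abs(estimate / se)
    return 2 * (1 - nd.cdf(z))

def confidence_interval(estimate: float, se: float, level: float):
    """Symmetric z-based confidence interval at the given level."""
    z = nd.inv_cdf(1 - (1 - level) / 2)
    return (estimate - z * se, estimate + z * se)

# Hypothetical treatment difference: 2.2 units with standard error 1.0.
est, se = 2.2, 1.0
p = two_sided_p(est, se)                          # ~0.028: passes .05, fails .01
lo95, hi95 = confidence_interval(est, se, 0.95)   # excludes zero
lo99, hi99 = confidence_interval(est, se, 0.99)   # includes zero
```

So a table reporting only 95% CIs cannot show which comparisons were actually held to the stricter p<.01 standard, which is exactly the complaint above.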
 

Sean

Senior Member
Messages
7,378
It looks like the authors assumed a sedentary subpopulation of the general population is the one with mean 85. This is basically a guess. ("We know in advance that these people are merely sedentary members of the general population with poor mental hygiene.") It has substantial impact on statistical tests of significance.

Is that a direct quote from the authors, or just your take on their thinking?

Not having a go. Just that if they actually said that, it could be powerful ammo against them.

People can certainly argue that my suggested assumption about the subpopulation is wrong, or unreasonable. They would have a much harder time showing it would not produce the results shown by the study. In that case, the calculated standard deviations are meaningless for statistical inference. The effects of natural and artificial bounds predominate over any results from the study.

I am not statistically literate, and most of this kind of stuff has me struggling. But this particular point has been bugging me too (if I understand it correctly). If they are calculating the SD based on the study population, without reference to the general population, that would unjustifiably enhance the statistical significance of any treatment response.

They have given us the relative improvement (before/after stuff), but not the absolute frame of reference for the important broader context (real world comparisons to the general population).

Am I reading this right?
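A toy simulation may clarify the worry (all numbers invented: a notional 0-100 questionnaire, a general-population mean of 85, and a trial entry ceiling of 65). Restricting entry to low scorers shrinks the sample SD, so the same raw change looks bigger when standardised against the trial sample than against the general population:

```python
import random
from statistics import stdev

random.seed(1)

# Notional general population on a 0-100 questionnaire (invented figures).
population = [min(100.0, max(0.0, random.gauss(85, 20))) for _ in range(100_000)]

# Trial-style entry criterion: only people scoring <= 65 are eligible.
eligible = [x for x in population if x <= 65]

pop_sd = stdev(population)
sample_sd = stdev(eligible)   # markedly smaller: the range is restricted

# The same raw improvement of 8 points, expressed in SD units:
improvement = 8
effect_vs_population = improvement / pop_sd   # modest
effect_vs_sample = improvement / sample_sd    # looks considerably larger
```

Which SD is the right yardstick depends on the question being asked; the point is only that SDs computed within a range-restricted trial cohort are not interchangeable with general-population ones.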
 

Sean

Senior Member
Messages
7,378
No- you're right Sean. I agree, we can't do it alone. I have a few ideas of some such people who to approach myself- but don't want to make it too public at this stage, for obvious reasons lol!

Wish you every success with that. We sure need every one we can get.
 

Sean

Senior Member
Messages
7,378
That is an interesting histogram, it puts the trial results into perspective if they are shown compared to the normal population.


This is the kind of diagram, or information vehicle, we need to clearly and quickly present and explain the reality of PACE, the underlying behavioural model of ME/CFS, and the daily difficulties we face. Almost everybody, regardless of their level of technical expertise, can readily understand the basic message of the data when presented like this.

I think that quite a lot of the critical info we are dealing with could be expressed in this way. Maybe just ten hard data based graphics like this will do more legitimate damage to their case than a million eloquent words in any forum (in no small part because the visual media will be able to use it). Even more so if we use their own data and definitions to construct the images.

So, what data to use, and how best to present it?

We probably should include our basic economic data: the immediate financial situation we find ourselves in, and the long-term consequences (the all too predictable consequences, sigh...).

(Somewhat ironically, one of the first people to effectively present statistical data in graphic form was [drum roll]... Florence Nightingale, a pioneering statistician in her day. It's a small world.)
 

Sean

Senior Member
Messages
7,378
Peter White/Barts said the following in their submission on the draft NICE guidelines:

These goals should include recovery, not just exercise and activity goals. If it takes "years" to achieve goals, then either the goals are wrong or the therapy is wrong. What other treatment in medicine would take years to work?

http://www.nice.org.uk/nicemedia/live/11630/36186/36186.pdf
(page 308 of 383)

That is a killer quote; it frames the PACE results nicely.
 

Sean

Senior Member
Messages
7,378
Nicely put. However, I'm not sure I quite share your optimism that their model will simply fold beneath the weight of evidence, though it surely would in an ideal world.

It will eventually in this imperfect one too. Especially if it doesn't return people to work. Though it certainly will not happen tomorrow.

I think working out how to expose the real meaning of PACE, both in the media and in the scientific world, is a huge challenge for us all. Letters to the Lancet were a first step and I'm not quite sure where we go from here. Though, for me, the next step is more sifting through the PACE trial to really understand what's going on.

I know, all this awesome analysis, what's going to happen to it, and more particularly what are we going to DO with it?

The wiki looks like a good idea to me.

My dream is for a comprehensive rebuttal piece to be published somewhere like the BMJ, written by someone with credibility in the ME/CFS field that would take on maybe FINE as well as PACE, and use them to tackle the biopsychosocial argument head on. The biggest and bestest research came up with... nothing.

I also think it would be helpful to have, somewhere, a user-friendly exposure of PACE - perhaps as a series of shortish pieces in a blog. I'm still mulling this over - comments welcome.

We have to think about this a bit. We do need to watch out for follow up papers from the PACE team. Be ready to respond to the economic data, for example, when (and if) it becomes available.

There is some excellent info and analysis being done on this thread, some very important issues and points being raised. But fitting all this into letters, or even a full article or two, is going to be difficult. Needs some thought about how to collate it, and use it effectively (and legitimately).

We do need to maintain the momentum, but I would suggest at a somewhat more sustainable pace than recently.

It even perfectly explains why the subgroups did not show significant difference - because all we are looking at is change in perception, there is no significant therapeutic effect beyond placebo in any of the groups.

I don't think it even proves that patients changed their perceptions. Strictly logically speaking, all they can claim is that patients changed their own reports of their own behaviour (i.e. they changed their test-taking behaviour), with no objective independent evidence to confirm any actual change in behaviour beyond the subjective test scoring process, and some good evidence to refute it.

Patients changed their response to being asked questions by psychs (and maybe authority figures in general), who had been heavily conditioning the patients to change their subjective response in exactly that very limited and non-therapeutic way.

Which is not the same as patients genuinely changing their perceptions, let alone that leading to improved overall health.

Circularity methinks.

The pacing they are referring to here is not what ME patients know is essential to their survival, which is to listen to your body and not overdo things, so as to avoid precipitating Post-Exertional Malaise (or Meltdown, as I prefer to call it), the cardinal symptom of ME.

They used APT, Adaptive Pacing Therapy. The aim is to achieve 'optimum adaptation' by means of fixed rest times and an activity diary.

This is important. The distinction between pacing as we patients understand and use it, and APT (à la PACE), needs to be highlighted. As does the way they presented APT to the trial participants (especially compared to how they presented CBT/GET).

...patience and keeping your brakes on may be just as important as increasing activity.
Peter White

One has to ask: why? If we are merely suffering from standard deconditioning – which is pretty easy to fix in somebody who does not have any serious physical limitations – then why take it so slowly, why should symptoms take so long to resolve? Really, it is just an excuse for why they get so little improvement from these therapies, even after extended 'treatment'.

Seems like they want it both ways.

••••••••••••••••••••

I have had a number of private discussions over the years about the way the CBT/GET school seems to be slowly and quietly changing their model to incorporate and be more like pacing (as we patients understand & use it), but of course without admitting it. I think they are quite vulnerable on this.

What do others think of that view?

•••••••••••••••••••

If PACE can't even deliver improvement in the majority of their cohort, never mind cure them, then clearly neither faulty thinking nor deconditioning play a major or specific role in the illness.

Time to move on.

Constantly repeating and disseminating this simple message could be the biggest step forward we could make towards burying the psychosocial construct for good.

Publication of PACE could be our best opportunity to remove the roadblock to proper biomedical research.

Agree.

The collective objective data (PACE, FINE, and everything before), and the failure of even the highly manipulated subjective data to really show much benefit, simply cannot be denied any longer, no matter how effective their short-term propaganda blitz.

This is now a fact, thanks in substantial part to PACE.

Unless PACE have some amazing statistical rabbit to pull out of their rear – like 38% of patients increased workforce participation by at least 10 hours a week – then they (and that model's supporters) have lost the scientific argument. There is nothing much left to test about this model. The basics have been done to death. It has been given many chances to prove itself, and simply has not delivered the goods. No way around that.

I believe we have seen the best from PACE, and that model, and it stinks. And they know it. That is why the PR blitz is so intense. This is their last big shot.



Don't want to overstate the case. But come on, folks, they have had over 2 decades of serious funding and establishment political support, and this is the best they can do in actual hard reliable numbers?

Are you freaking kidding me?

These guys have just handed us the data to finally rebut their approach. It will not be easy, or quick, there are serious political hurdles to overcome... But I genuinely cannot see how their model can survive proper scientific scrutiny of both the data and methodology. Quite the contrary.

Don't underestimate how quickly things can change when the circumstances are right. The emperor can only wear no clothes for so long. Eventually reality will kick in. It has happened before – many, many, many times in human history – and it will happen again. And I think it is very near time for that to happen here.

Just my 10¢ worth. Make of it what you will.
 
Messages
35
I have only been able to read a few posts, so forgive me if someone has already said something similar. I had a visit from my occupational therapist on the ME team. She has spoken directly to Peter White via the Yahoo site for ME NHS workers. There is immense disquiet about PACE amongst the OTs in the ME teams who deliver the care within the UK, especially over the fact that PACE did not test the pacing protocol that they use, but instead used Adaptive Pacing, which only incorporates a third of the techniques. She tells me Peter White was adamant that the PACE results were crystal clear and would direct the way future funding is granted.

ME NHS staff are also aware that the patients tested were probably not, in the main, suffering from ME, but they are, in my opinion, very wary about losing their jobs and are too frightened to speak openly to the press. My OT was the first to criticise the trial via their own website after five days, and Peter White answered almost immediately and with forthright language. Intimidation? My OT assured me that whatever happened, professionals in the front line would not change how they approach this illness (I hope they do, but in a positive way!). She knows we are ill and that pacing is only a coping mechanism; she is very frustrated. Peter White is so blinkered and desperate to keep a hold on his power base, no matter how much suffering he causes.

Keep up the good work you are doing. Got to go, too tired.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Thanks for the summary Sean.

It is still important that we address the problem of the cohorts- we can't forget this issue, and here's why.

Lacklustre results notwithstanding, this trial has been spun, both in the Lancet and in the press, as substantiating the safety of CBT/GET for ME sufferers (even if they use their term 'CFS/ME'). These results are generally no more impressive than any CBT/GET trials done in the past, but that wasn't necessarily their primary aim. That appears to have been to dismiss the claim that CBT/GET is unsafe for ME sufferers, because this is an extremely serious allegation.

Now they've undertaken some outstanding ontological gerrymandering to do this, but they've managed to pooh-pooh concerns about safety, and THIS will be the serious issue people will be faced with in the future.

Showing how poor the results of this trial are is one thing (an important one). But it will still be business as usual unless we can show that CBT/GET is still potentially unsafe. The reasons we know CBT/GET has not been established as 'safe' for ME sufferers are:

1. The PACE cohorts have potentially eliminated all ME sufferers from the trial. AT BEST, very few will have got in; maybe none at all were in there. If this happened, it will have been achieved at the doctor examination/history-taking stage, which WAS ad hoc (the only standardised form was a sign-off by the research nurse after the doctors had 'screened' the patients). It would be highly problematic for the doctors to include any people with neurological deficit in the trial, because those deficits may have represented other neurological illnesses, like MS etc. I believe it is significant that so many people attending the 'specialist clinics' (over 1,000) were deemed not to have met Oxford. Previous 'CFS' research cohorts in the UK have been constructed so that people with the organic dysfunction seen in ME (per, say, the Canadian criteria, or even the historical ME case descriptions) are excluded.

2. There is uncertainty over how 'Reeves' were used. In one table they place them as a sub-group of the cohort (which might lead one to believe they were inclusionary criteria applied after Oxford). But the text on page 2 shows that Reeves were used for exclusionary purposes (to "exclude alternative diagnoses"), along with NICE (those are the two references given there). There is no literature on the PACE protocol that I can see that sets out standardised Fukuda (or Reeves) inclusion or exclusion requirements.

3. As someone has already said, 47% of the cohort had a psychiatric disorder. Now, there is some strange commentary in the PACE trial protocol about the "grey box: ineligible for trial", because even in the pdf there are three shades of grey (and two textures, 'hashed' and 'smooth'!). But it looks like all sorts of people were eligible for inclusion, including those with agoraphobia, other phobias, OCD, PTSD, and lifelong psychosis, and there appears to be some confusion between the SCID form and the 'Oxford' form about the inclusion/exclusion of bipolar disorder and schizophrenia! Funnily enough, considering the frequent claims about 'personality disorder' in CFS, these are not included as exclusions (so all sorts of personality disorders could be included).

4. The PACE version of the London criteria actually used a diagnosis of 'ME' based on: exercise-induced fatigue (who doesn't get tired after exercise?!), though the 'exercise'/'exertion' has to be 'trivial' in self-report; impairment of short-term memory and loss of concentration; fluctuation of symptoms (ubiquitous in all health states and difficult to quantify); 6 months plus duration; and no primary depressive illness or anxiety/'neurosis'. That is all that is necessary to meet the criteria for ME (though don't get me started on the instability of the terms anxiety and neurosis!).

5. They have not addressed the issue of abnormal physiological response to exertion, either within the biomedical literature or in the reports from patients. This is a major omission. They have NOT considered the different cohort that would have been established by applying, say, the Canadian criteria, even though this was brought to their attention a good few times. This should have been a limitations-of-study item (I note there was no such section in the article).

6. Obviously the seriously affected/bedbound etc. weren't included. But they are likely to try and claim that the slightly and moderately affected are still 'safe' with CBT/GET (and that pacing is useless).

There is more to be analysed in the PACE documentation around the cohort. I'm trying to get a t-test done on the subgrouping of patients shown in Table 1 on page 5 of the article pdf, for example.

If anybody is particularly interested in this part of the analysis (cohort problems), thinks it is worth pursuing, and wants to take part - discuss, maybe backchannel; let me know.
 