• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


PACE Trial and PACE Trial Protocol

oceanblue

Guest
Messages
1,383
Location
UK
Exercise and effect on walking: PACE CFS similar to MS

I've been trying to find out how PACE therapies work when applied to illnesses with an accepted organic aetiology, and the comparison with MS is very interesting. With MS there is an accepted disease process, but some researchers believe deconditioning is an important factor too, which can be treated with exercise therapy.

A 2010 review says:
Physiological deconditioning may prove to be an important and reversible contributor to mobility impairment, over and beyond the disease itself, in persons with MS
This review quotes a separate meta-analysis (Motl, 2010), which looked specifically at the effect of exercise on walking ability and found a mean effect size of 0.19 (0.09-0.28). This is small/trivial and not much lower than the mean effect size for 6MW for GET of 0.34 (which amounts to a small effect). [note for wonks: the meta-analysis uses Hedges' g, which is very similar to the Cohen's d figure I calculated for 6MW].
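For anyone who wants to check the wonk note, here is a quick sketch of how Hedges' g relates to Cohen's d (standard textbook formulas; the numbers at the bottom are my own toy values, not PACE data):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    d = cohens_d(mean1, mean2, sd1, sd2, n1, n2)
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Toy example with PACE-sized arms (~150 each): the correction factor is
# about 0.9975, which is why g and d are near enough interchangeable here.
print(cohens_d(10, 0, 10, 10, 150, 150))  # 1.0
print(hedges_g(10, 0, 10, 10, 150, 150))  # ~0.9975
```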

So, the effect of exercise on walking ability for CFS from the PACE trial is similar to that found for a meta-analysis of exercise for MS, an illness that may have some deconditioning but also has an accepted, organic disease process.

The PACE 6MW results for GET provide evidence that there is more to CFS than deconditioning. Oh, and of course, there was no improvement in 6MW for CBT in PACE.
 

biophile

Places I'd rather be.
Messages
8,977
eric_s wrote: And what good news? That a cohort that does not represent ME/CFS will, at the end of some treatments, walk 30 or so metres more in a 6-minute walking test than before? And still only about half the distance that healthy elderly people can do?

Sean wrote: Better check that claim, might need some qualification. IIRC, they certainly do worse than healthy 70-80 year olds, but nowhere near half as well. (Average age of PACE patients is 38.) More like maybe 15 % worse, which is still a damning indictment of the PACE results and the CBT/GET model.

I've had a decent look at the 6MWD of healthy and medical populations and I am still working on a visual comparison of what I have found. It isn't looking too good for PACE at all. Most healthy cohorts average about 600-700m. Averages of around 300-500m are quite low and are seen in a wide range of serious medical disorders. The GET group average was only 379m at 52 weeks of "exercise".

Yes, "half the distance" is roughly close, as a subgroup from Bautmans et al 2004 of "completely healthy" (strict definition) elderly participants (aged 62±7 yrs) scored on average 696±151m (n=58, 20M/38F).
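In the meantime, a rough back-of-the-envelope comparison in Python, using only figures already quoted in this thread (the "typical healthy" entry is just the midpoint of the 600-700m range above):

```python
# 6MWD averages quoted in this thread (not a systematic extraction).
six_mwd_m = {
    "Healthy elderly, Bautmans 2004 subgroup (n=58)": 696,
    "Typical healthy cohorts (midpoint of 600-700m)": 650,
    "PACE GET arm at 52 weeks": 379,
}

reference = six_mwd_m["Healthy elderly, Bautmans 2004 subgroup (n=58)"]
for group, metres in six_mwd_m.items():
    shortfall = 100 * (1 - metres / reference)
    print(f"{group}: {metres} m ({shortfall:.0f}% below healthy elderly)")
# The GET arm comes out ~46% below, i.e. "roughly half" is fair.
```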
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
Nice job. I liked the animations. The truncated scaling in the Lancet paper was one of the first things I noticed too when examining it for the first time; it is good to see a more honest presentation of the data. However, this applies more to the PF/SF-36 scores than the CFQ, because the majority of the general population score 95-100/100 on the former but more like 11/33 on the latter.

Also, with the first animation, I'm not sure we can say with certainty that [SMC alone] was better than [GET alone], because there were no adequate controls. The second animation raised an interesting point: the spread, or distribution, of the scores. It is indeed possible that a small proportion of patients did much better than the others and are skewing the mean (SD).

Thanks for the kind words, Biophile.

You are entirely right of course. The difficulty is in choosing how to make people who avoid technical discussions appreciate the overall pattern to the findings. Making it simple is such a difficult challenge: all the data is so unreliable that it is impossible to make simple statements that are fully accurate without getting bogged down with detail. The only real question is "Is it fair?". I hope that it is. Our aim was not that it would show any of you anything new - you are all converts to the cause anyway, and understand the complexities - but that it might stimulate some who are not yet converts to take a proper look at the study and reassess it. It would have been harder to simplify the PF/SF-36 situation in the same way.

Actually I did try producing sets of data for the GET/Chalder scores that matched the baseline (mean 28.2, standard deviation 3.8) and the final result (mean 20.6, standard deviation 7.5). With a cut-off of 33, it was a challenge and quite interesting.
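For anyone who wants to try the same exercise, here is a minimal sketch of why it's a challenge (simulated scores only, nothing to do with the real patient data): if you draw normal scores with the target baseline mean/SD and clip them at the CFQ ceiling of 33, the ceiling itself drags the summary statistics away from the targets.

```python
import random
import statistics

random.seed(0)

def clipped_normal(mu, sigma, ceiling, n=10_000):
    """Draw from N(mu, sigma) and clip at the scale ceiling."""
    return [min(random.gauss(mu, sigma), ceiling) for _ in range(n)]

# Target baseline: mean 28.2, SD 3.8, on a scale capped at 33.
sample = clipped_normal(28.2, 3.8, 33)
print("mean:", round(statistics.mean(sample), 2))
print("SD:", round(statistics.stdev(sample), 2))
# Both come out below the targets, so the underlying (pre-clipping)
# parameters have to be inflated to reproduce the published figures.
```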
 

oceanblue

Guest
Messages
1,383
Location
UK
The truncated scaling in the Lancet paper was one of the first things I noticed too when examining it for the first time, it is good to see a more honest presentation of the data.
Truncated scaling is dodgy for a non-specialist audience, e.g. in a newspaper, but is widely accepted in scientific journals to save space, so long as the break in the axis is clearly shown, as it was in this case. So in this particular instance I'm pretty sure the authors were merely following standard practice rather than trying to deceive. The deception they managed in other ways.
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
Truncated scaling is dodgy for a non-specialist audience, e.g. in a newspaper, but is widely accepted in scientific journals to save space, so long as the break in the axis is clearly shown, as it was in this case. So in this particular instance I'm pretty sure the authors were merely following standard practice rather than trying to deceive. The deception they managed in other ways.

Yes, I think you are right. But I am not convinced that many in the medical world are up to that level of statistical perception. I know that sounds a bit arrogant, but I have taught lots of people maths who have gone on to medical careers, and they didn't choose to do that because they were great at maths (thank goodness!).

The "deception" that really bothers me is that the study focused on snapshots of the distribution of scores in the groups rather than the distribution of improvements in scores. I'm finding it hard to put it into words, but it is as if you have invested 4 million in various shares, and at the end your advisor tells you that the average gain was 3% with a variation of plus or minus 7, and left it at that. Anyone investing that kind of money would want to know which ones paid off, and which ones didn't. They haven't shown any interest in doing that. (My analogy is rubbish, but I keep trying to come up with a way of explaining it to non-mathematicians, and keep failing.) It is as though you get several overhead snapshots of the Grand National at different stages, but with no way of telling which horse is which.
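Maybe a toy example makes it plainer than my analogies (made-up numbers, obviously, not the trial data): two follow-ups with identical mean change can hide completely different individual stories.

```python
import statistics

baseline = [30, 30, 30, 30]
uniform_gain = [35, 35, 35, 35]    # everyone improves by 5
one_big_winner = [50, 30, 30, 30]  # one patient improves by 20, rest unchanged

for label, followup in [("uniform", uniform_gain), ("skewed", one_big_winner)]:
    changes = [after - before for before, after in zip(baseline, followup)]
    print(label, "mean change:", statistics.mean(changes),
          "SD of changes:", round(statistics.pstdev(changes), 1))
# Both report a mean change of 5; only the distribution of individual
# improvements tells you which horse is which.
```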

With all that time spent in Specialist Medical Care, they must have pretty thorough information about each patient: I would have spent as long as it takes to try to use that information to determine some sort of way of defining who exactly is helped by CBT/GET and who is harmed (and looking at the distributions, there must have been a very small minority that did benefit to a quite reasonable extent).
 

wdb

Senior Member
Messages
1,392
Location
London
I would have spent as long as it takes to try to use that information to determine some sort of way of defining who exactly is helped by CBT/GET and who is harmed (and looking at the distributions, there must have been a very small minority that did benefit to a quite reasonable extent).

I think the theory that the authors push is that CBT/GET are safe, effective treatments for anyone who suffers fatigue as a primary symptom, regardless of any other group they may fall into. With this in mind, I suspect they would be extremely reluctant to perform any tests that might uncover sub-groups that are not helped; they would then have to go and rethink the theory.
 

Min

Messages
1,387
Location
UK
Prof Michael Sharpe is being interviewed on ABC radio on Monday
http://www.abc.net.au/rn/healthreport/stories/2011/3192571.htm
'Comparison of treatments for chronic fatigue syndrome - the PACE trial - Health Report - 18-April-20'

Let's not forget Prof Sharpe's comment made about us at a lecture hosted by the University of Strathclyde in 1999 entitled "M.E. What do we know (real illness or all in the mind?)":


"Purchasers and Health Care providers with hard pressed budgets are understandably reluctant to spend money on patients who are not going to die and for whom there is controversy about the 'reality' of their condition (and who) are in this sense undeserving of treatment.


"Those who cannot be fitted into a scheme of objective bodily illness yet refuse to be placed into and accept the stigma of mental illness remain the UNDESERVING SICK of our society and our health service"
 

Dolphin

Senior Member
Messages
17,567
The "deception" that really bothers me is that the study focused on snapshots of the distribution of scores in the groups rather than the distribution of improvements in scores. I'm finding it hard to put it into words, but it is as if you have invested 4 million in various shares, and at the end your advisor tells you that the average gain was 3% with a variation of plus or minus 7, and left it at that. Anyone investing that kind of money would want to know which ones paid off, and which ones didn't. They haven't shown any interest in doing that. (My analogy is rubbish, but I keep trying to come up with a way of explaining it to non-mathematicians, and keep failing.) It is as though you get several overhead snapshots of the Grand National at different stages, but with no way of telling which horse is which.

With all that time spent in Specialist Medical Care, they must have pretty thorough information about each patient: I would have spent as long as it takes to try to use that information to determine some sort of way of defining who exactly is helped by CBT/GET and who is harmed (and looking at the distributions, there must have been a very small minority that did benefit to a quite reasonable extent).
People should see the need for it - it's "hardly rocket science" that in a medical setting one might use such information to see who to prescribe a therapy to.

They say they are going to look at the issue, to some extent:
We plan to report relative cost-effectiveness of the treatments, their moderators and mediators, whether subgroups respond differently, and long-term follow-up in future publications.
However, this may be as unsatisfactory as their claim that there is no difference for those with ME (London criteria) or international criteria CFS.

In case anyone missed it, there are the predictors they specified in the published protocol:
Predictors
1. Sex

2. Age

3. Duration of CFS/ME (months)

4. 1 week of actigraphy [18] (as initiated at visit 1 with the research nurse)

5. Body mass index (measure weight in kg and height in metres)

6. The CDC criteria for CFS [1]

7. The London criteria for myalgic encephalomyelitis [40]

8. Presence or absence of "fibromyalgia" [41]

9. Jenkins sleep scale of subjective sleep problems [37]

10. Symptom interpretation questionnaire [34]

11. Preferred treatment group

12. Self-efficacy for managing chronic disease scale [32]

13. Somatisation (from 15 item physical symptoms PHQ sub-scale) [35]

14. Depressive disorder (major and minor depressive disorder, dysthymia by DSMIV) (from SCID) [30]

15. The Hospital Anxiety and Depression Scale (HADS) [38] combined score

16. Receipt of ill-health benefits or pension

17. In dispute/negotiation of benefits or pension

18. Current and specific membership of a self-help group (specific question)
 

Esther12

Senior Member
Messages
13,774
Prof Michael Sharpe is being interviewed on ABC radio on Monday
http://www.abc.net.au/rn/healthreport/stories/2011/3192571.htm
'Comparison of treatments for chronic fatigue syndrome - the PACE trial - Health Report - 18-April-20'

Let's not forget Prof Sharpe's comment made about us at a lecture hosted by the University of Strathclyde in 1999 entitled "M.E. What do we know (real illness or all in the mind?)":

Hi there - those quotes are a bit out of context, and he went on to argue that patients should have access to health care. Sorry - a bit wiped at the moment so can't dig out the details.
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
The "deception" that really bothers me is that the study focused on snapshots of the distribution of scores in the groups rather than the distribution of improvements in scores.

With all that time spent in Specialist Medical Care, they must have pretty thorough information about each patient: I would have spent as long as it takes to try to use that information to determine some sort of way of defining who exactly is helped by CBT/GET and who is harmed (and looking at the distributions, there must have been a very small minority that did benefit to a quite reasonable extent).


This is a very important point, Graham. If you refer to Table 3, primary outcomes at 12, 24 and 52 weeks, it's clear that the degree of variance in each 'treatment' arm increases markedly during the trial period, with the result that what may originally have been thought to be a relatively homogeneous group is seen to become less so.

This is what you would expect to see where individuals respond differently to 'treatment', or where their illnesses take different natural courses, and it is also what you would expect to see in a 'mixed' cohort.

As the variance increases it becomes less likely that the group mean improvement accurately reflects individual scores which also suggests some are showing large improvements and some little or no improvement or indeed a deterioration.

I would have thought there was ample scope for some data mining around the individual scores and their trends.
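A quick simulation of the point (entirely invented parameters, just to illustrate the mechanism): mix two latent subgroups that respond differently to 'treatment' and the variance of change scores balloons, while the group mean describes neither subgroup.

```python
import random
import statistics

random.seed(1)

n = 100
within_sd = 3  # individual/measurement noise, same in both subgroups
responders = [random.gauss(15, within_sd) for _ in range(n // 2)]     # big gains
non_responders = [random.gauss(0, within_sd) for _ in range(n // 2)]  # no change
changes = responders + non_responders

# The group mean sits between the subgroups and describes neither well,
# while the SD of changes is far larger than the within-subgroup SD.
print("group mean change:", round(statistics.mean(changes), 1))  # ~7.5
print("SD of changes:", round(statistics.stdev(changes), 1))     # ~8
```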
 

biophile

Places I'd rather be.
Messages
8,977
Issue of "safety" in PACE, data obscured by a large grey area

In post #490, Dolphin very briefly raised the issue of "non-serious adverse events" (saying that they were very common, that people have not paid much attention to them because of the title, and that the title may be misleading). Other posters may have raised the issue of safety as well, but Angela's concerns helped me realise this issue is as important as the other major flaws discovered so far. I think the key to understanding the relative "safety" of PACE is a synergy between the following two issues:

(1) The "safety net" of caution in the trial. I have already covered this before, but basically, although encouraged to increase activity if possible (and encouraged to maintain activity during exacerbations), patients didn't actually have to do this if they didn't want to. The authors seem to imply there was an increase in activity by nature of the CBT/GET rationale, but then provide no data on changes in overall activity. It is possible that there was no increase in activity on average, which, given that other studies using actigraphy found no objective improvement in overall activity, is hardly surprising. Attempts at increasing activity would have been made as per protocol, and if done very gently and cautiously, most incidents of symptom exacerbation could be "contained" within the questionable classification of adverse events.

(2) I had underestimated the importance of how adverse events were classified in PACE, being fooled initially by the relatively low rates of serious events and drop-outs. There is a high threshold for a "serious adverse event", arbitrary decisions were made about whether any such event was a reaction to the therapy or not, and we are given no data whatsoever on "non-serious adverse reactions" to therapy, despite these being in the original protocol (both the short published one and the long unpublished one).

From the original (published) protocol, page 12:

Adverse outcomes (score of 5-7 of the self-rated CGI) will be monitored by examining the CGI at all follow-up assessment interviews [49]. An adverse outcome will be considered to have occurred if the physical function score of the SF-36 [28] has dropped by 20 points from the previous measurement. This deterioration score has been chosen since it represents approximately one standard deviation from the mean baseline scores (between 18 and 27) from previous trials using this measure [23,25]. Furthermore, the RN will enquire regarding specific adverse events at all follow-up assessment interviews.

Again they use the "one standard deviation from the mean" calculation, which, like the definition of a "normal" PF/SF-36 score, may be misleading (I haven't looked into that yet, but I imagine going from 30 to 10 is worse than going from 60 to 40). From the original (published) protocol, page 13 (which describes the classes of adverse effects, some of which I quote below; I won't quote the serious categories because more detailed information can be gained from the larger unpublished protocol - more on that later):

Adverse events (AE) are any clinical change, disease or disorder experienced by the participant during their participation in the trial, whether or not considered related to the use of treatments being studied in the trial.

Non-serious adverse events or reactions will be assessed by the RN at each follow-up assessment interview. A risk assessment has been undertaken, and we have concluded that the therapies are of low risk to participants. Non-serious adverse events will be reported according to the usual regulatory requirements.

From the Lancet paper:

For safety outcomes, we included non-serious adverse events, serious adverse events, serious adverse reactions to trial treatments, serious deterioration, and active withdrawals from treatment.[10] Adverse events were defined as any clinical change, disease, or disorder reported, whether or not related to treatment. Three scrutinisers (two physicians and one liaison psychiatrist who all specialised in chronic fatigue syndrome) reviewed all adverse events and reactions, independently from the trial team, and were masked to treatment group, to establish whether they were serious adverse events. Scrutinisers were then unmasked to treatment allocation to establish if any serious adverse events were serious adverse reactions. Serious deterioration in health was defined as any of the following outcomes: a short form-36 physical function score decrease of 20 or more between baseline and any two consecutive assessment interviews;[16] scores of much or very much worse on the participant-rated clinical global impression change in overall health scale at two consecutive assessment interviews;[25] withdrawal from treatment after 8 weeks because of a participant feeling worse; or a serious adverse reaction.

A persistent decline of 20 points in PF/SF-36 score would be deemed serious by patients too, but according to White et al anything less than that is tolerated. Note that we have another goalpost shift: an "adverse outcome" was originally a decline of 20 points in PF/SF-36 at the next assessment, but now for the replacement outcome (a "serious deterioration") it must persist for two consecutive assessments (there are other ways to meet criteria for a serious deterioration, but these are strict as well).
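To make the goalpost shift concrete, here is a sketch of the two definitions in Python (my paraphrase of the protocol and paper wording, PF/SF-36 only, ignoring the CGI and withdrawal routes):

```python
def adverse_outcome_protocol(pf_scores):
    """Published protocol: a PF/SF-36 drop of >= 20 from the previous assessment."""
    return any(prev - curr >= 20 for prev, curr in zip(pf_scores, pf_scores[1:]))

def serious_deterioration_paper(pf_scores):
    """Lancet paper: a drop of >= 20 from baseline at two consecutive assessments."""
    baseline = pf_scores[0]
    below = [baseline - score >= 20 for score in pf_scores[1:]]
    return any(a and b for a, b in zip(below, below[1:]))

# A patient who crashes by 25 points at one assessment and then partly
# recovers meets the original definition but not the replacement one:
crash_and_recover = [60, 35, 50, 55]
print(adverse_outcome_protocol(crash_and_recover))     # True
print(serious_deterioration_paper(crash_and_recover))  # False
```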

Also, in [Table 4: Safety outcomes], "non-serious adverse events" are common while "serious adverse events" and "serious deterioration" were relatively rare. SAEs were more common in the GET and APT groups compared to the CBT and SMC groups, but it was decided these were generally not related to the trial.

Also, in [Table 5: Participant-rated clinical global impression of change in overall health], the proportion of participants reporting being "much worse or very much worse" were similar and under 10% in each group, and in each group more than half reported "a little worse, no change, or a little better" in the CGI (however, the CGI was only used to measure outcomes at 52 weeks, not "events").

From the long unpublished protocol (p66):

[14.2 Non-serious adverse events and reactions]

Non-serious adverse events or reactions will be assessed by the RN at each follow-up assessment interview (see 10 & 11). A risk assessment has been undertaken, and we have concluded that the therapies are of low risk to participants.

Examples of expected nonserious adverse events have been identified, and these include:

Development of new mood disorder

Musculoskeletal injuries - e.g. ankle sprains etc.,

Transient exacerbation of fatigue or pain, expected as a normal reaction to CBT or GET in patients with CFS/ME, which does not have significant impact upon function (see 14.1.1 (a))

Development of new sleep disturbance

Falls (e.g. due to tripping, etc.)

Worsening of anxiety - e.g. health anxiety, exacerbated by a transient increase in symptoms

So they actually expected symptom exacerbation due to CBT and GET, and everything in the above list was within the authors' tolerance of "safe". As for their notion of exacerbation "which does not have significant impact upon function", see [14.1.1 (a)], which is under "14.1.1 Serious Adverse Events (SAEs)". Because (a) was "Death" (an absurd threshold for a significant impact upon function), I have to assume we are referred to the rest of the subsection, or at least (d). Again from the long unpublished protocol (p65):

[14.1.1 Serious Adverse Events (SAEs)]

An adverse event (AE) is defined as serious (an SAE) if it results in one of the following outcomes:

a) Death,

b) Life-threatening (i.e., with an immediate, not hypothetical, risk of death at the time of the event),

c) Requires hospitalisation (hospitalisation for elective treatment of a pre-existing condition is not included),

d) Increased severe and persistent disability, defined as: severe = a significant deterioration in the participant's ability to carry out their important activities of daily living (e.g. employed person no longer able to work, caregiver no longer able to give care, ambulant participant becoming bed bound); and persistent = 4 weeks continuous duration

e) Any other important medical condition which, though not included in the above, may jeopardise the participant and may require medical or surgical intervention to prevent one of the outcomes listed.

f) Any episode of deliberate self-harm

In the Lancet paper WebAppendix, the description for serious adverse events and serious adverse reactions are the same, as one of the following:

(a) Death; b) Life-threatening event; c) Hospitalisation (hospitalisation for elective treatment of a pre-existing condition is not included), d) Increased severe and persistent disability, defined as a significant deterioration in the participants ability to carry out their important activities of daily living of at least four weeks continuous duration; e) Any other important medical condition which may require medical or surgical intervention to prevent one of the other categories listed; f) Any episode of deliberate self-harm.

As you can see, the threshold for a "serious" event or reaction is high, and all events which aren't as bad as that are assumed by the authors to be "safe" and have no "significant impact upon function"! Further, we are not told how many "non-serious adverse EVENTS" were deemed to be due to the treatment ie "non-serious adverse REACTIONS".

Assuming it was reported/recorded and as long as the participant did not withdraw, it was possible for a patient to experience substantial post-exertion symptom exacerbation and even become housebound/bedridden due to CBT or GET then gradually recover over the next week or so, and have this classified as a "non-serious adverse event" having "no significant impact upon function" and used as evidence that CBT or GET are safe!

The proportion of patients experiencing "non-serious adverse events" was the same for SMC vs GET (93%). I don't know how to explain that, but it does seem very strange to me that 7% of patients in two moderately affected cohorts (one doing exercise) never experienced a single "non-serious adverse event" over 52 weeks, while the other 93% experienced about 6 such events on average. I also don't know how to explain the relatively low drop-out rates, other than that patients simply didn't have to push themselves if they didn't want to, everything was supposedly done with great caution, and for other reasons they believed in the trial or wanted to stick out its duration.

Also in the long unpublished protocol is form A6.35, on p209 of the PDF, which asks:

[A6.35 Non-serious Adverse Event report log]

Start date of AE (dd/mm/yy);

Stop date of AE (dd/mm/yy);

Description of adverse event (Brief description);

Was the event related to trial treatment? (Definitely related / Probably related / Possibly related / Definitely unrelated / Uncertain);

Has participant withdrawn from trial follow-up (Yes / No);

Please rate the severity of the event, if unsure or concerned, consult with Centre Leader (Mild / Moderate / Severe);

Any medication or therapy taken as a result? (Yes / No).

So again, they clearly did record both qualitative and crude quantitative data on "non-serious adverse events", but information on whether these were deemed a reaction to the trial therapy was omitted from the Lancet paper. It is therefore plausible that a large proportion of these events were "reactions" to CBT or GET; apparently the authors even expected such reactions, but chose not to report them. As Dolphin pointed out, it was possible to have a "severe" non-serious adverse event as well.

Basically, unless CBT or GET caused a severe and persistent decline in health, all adverse events/reactions are deemed insignificant. PACE are claiming and promoting CBT/GET as safe for CFS because a good majority of moderately affected Oxford criteria patients in the early stages of illness didn't end up dead, or hospitalised, or with a severe increase in disability for several weeks at a time, or attempt self-harm/suicide, or report feeling "much worse or very much worse", etc. Very sneaky on the part of the authors: how PACE handled adverse effects from therapy strongly resembles spin doctoring!

Page 13 of the published protocol mentions the "Safety of participants":

There is a discrepancy between patient organisation reports of the safety of CBT and GET and the published evidence of minimal risk from RCTs. Surveys by Action for M.E. of its members suggest that people becoming worse with these treatments is caused by either rigidly applied programmes that are not tailored to the patient's disability, or by improperly supervised programmes [13-15]. PACE treatment manuals minimize this risk by being based on mutually agreed and flexible programmes that vary according to the patient's response. The RN will also carefully monitor for any adverse effects of the treatments.

How the authors interpret adverse effects and safety is an important clue to the "discrepancy" between published RCTs and patient surveys. Considering how little regard the authors show for the grey area between therapy being ineffective and therapy causing a "serious"/"severe" event, I think the people who filled out the patient surveys have a different interpretation, based on their own personal experience, of what a significant adverse effect is, which can in part explain the "discrepancy".
 

anciendaze

Senior Member
Messages
1,841
This is a very important point, Graham. If you refer to Table 3, primary outcomes at 12, 24 and 52 weeks, it's clear that the degree of variance in each 'treatment' arm increases markedly during the trial period, with the result that what may originally have been thought to be a relatively homogeneous group is seen to become less so.

This is what you would expect to see where individuals respond differently to 'treatment', or where their illnesses take different natural courses, and it is also what you would expect to see in a 'mixed' cohort.

As the variance increases it becomes less likely that the group mean improvement accurately reflects individual scores which also suggests some are showing large improvements and some little or no improvement or indeed a deterioration...
While it may not be obvious, this point is at the core of my arguments about the distribution assumed, as well as the idea of random walks for a null hypothesis. I've tried several ideas: various Gaussian distributions with different standard deviations, a combination of a Gaussian distribution with small SD with a uniform distribution (two different subpopulations), random walks starting from a cluster created by the sampling process, Gaussian distributions in two or three dimensions generating one-sided distributions in one dimension (Rayleigh or Maxwell-Boltzmann distributions). The idea that the sampling process created the apparent clusters which then diffused works very well to explain the data available. When contradictory assumptions have equal explanatory power, you should not put much weight on any of them, absent better tests.
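A deliberately oversimplified toy version of the cluster-diffusion idea (made-up parameters, and only one of the candidate models above): entry criteria select a narrow band of scores, and an unbiased random walk then spreads that cluster out with no treatment effect at all.

```python
import random
import statistics

random.seed(2)

def random_walk_cohort(n=200, entry_lo=30, entry_hi=40, steps=52, step_sd=1.5):
    """Sampling creates a tight initial cluster; each score then takes an
    unbiased random walk, one step per week."""
    scores = [random.uniform(entry_lo, entry_hi) for _ in range(n)]
    for _ in range(steps):
        scores = [s + random.gauss(0, step_sd) for s in scores]
    return scores

final = random_walk_cohort()
print("SD at entry (uniform 30-40): ~", round(10 / 12 ** 0.5, 1))  # ~2.9
print("SD after 52 weekly steps:", round(statistics.stdev(final), 1))
# The spread grows several-fold purely from diffusion out of the
# sampling-created cluster.
```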

There is an issue here which we cannot address without a better idea of the raw data used to produce group measures. If a defective definition of the illness caused them to include a few subjects who did not have ME/CFS, these would naturally show large gains over the course of a year, possibly without any treatment at all. Including a small percentage of these, and dropping a small number of subjects who were too ill to complete the trial, could produce all the change seen. Of course, if this happened, it would be easy to tell what was going on if you looked at the raw data. Therefore, I don't expect to see data which addresses such questions.

Does this qualify as a conspiracy theory? Researchers are just as capable of self-deception as patients. Published work on confirmation bias is abundant.
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
The idea that the sampling process created the apparent clusters which then diffused works very well to explain the data available. When contradictory assumptions have equal explanatory power, you should not put much weight on any of them, absent better tests.

There is an issue here which we cannot address without a better idea of the raw data used to produce group measures. If a defective definition of the illness caused them to include a few subjects who did not have ME/CFS, these would naturally show large gains over the course of a year, possibly without any treatment at all. Including a small percentage of these, and dropping a small number of subjects who were too ill to complete the trial, could produce all the change seen. Of course, if this happened, it would be easy to tell what was going on if you looked at the raw data. Therefore, I don't expect to see data which addresses such questions.

Does this qualify as a conspiracy theory? Researchers are just as capable of self-deception as patients. Published work on confirmation bias is abundant.


The sampling process, or more specifically the trial entry criteria, can only have produced an apparently homogeneous cohort at onset, one that would soon 'diffuse' as the different illnesses under the CFS umbrella asserted themselves in response to treatment and the natural course of the illness.


We know that the Oxford criteria are sufficiently vague to allow entry to a wide range of patients who share the symptom of 'fatigue';

The additional entry criteria in relation to fatigue and physical function introduced floor and ceiling effects which functioned to compress the variance of scores at onset;

We don't know, but might suspect, that the cohort consisted of a mix of 'true CFS' (sic); those with fatigue due to mood disorders; potentially those with post-viral fatigue; and possibly those with 'idiopathic fatigue'. We do know that each patient will be at a different stage of their illness and that the prognosis for the various illnesses under the CFS banner will also differ.

Once released from the 'snapshot' that placed them within the entry criteria for the PACE trial, it's hardly surprising that some may have recovered naturally solely as a function of time; some may have responded well to CBT or GET (as is well documented for depression); some may have recovered more if they hadn't been restricted by the constraints imposed by the APT regime; and I strongly suspect that some will have experienced significant setbacks, as Biophile has just suggested.
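The floor/ceiling point above can be sketched with a toy simulation. The cut-off, score distributions, and noise levels below are invented for illustration (nothing here uses actual trial data): selecting people at or below a ceiling on a noisily measured score compresses baseline variance, and re-measuring the same people later lets scores spread out again and drags the group mean upward (regression to the mean), with no treatment effect whatsoever:

```python
import random
import statistics

random.seed(1)

# Hypothetical entry ceiling on a noisy physical-function-style score.
CUTOFF = 65  # assumed ceiling for trial entry (illustrative only)

true_level = [random.gauss(60, 12) for _ in range(20000)]
baseline = [t + random.gauss(0, 10) for t in true_level]   # noisy measurement

# 'Snapshot' at entry: only those scoring at or below the ceiling get in.
entrants = [(b, t) for b, t in zip(baseline, true_level) if b <= CUTOFF]
entry_scores = [b for b, _ in entrants]

# Follow-up: same underlying level, fresh measurement noise, no treatment.
follow_up = [t + random.gauss(0, 10) for _, t in entrants]

print("sd, whole population:", round(statistics.stdev(baseline), 1))
print("sd, entrants at baseline:", round(statistics.stdev(entry_scores), 1))
print("mean change with no treatment:",
      round(statistics.mean(follow_up) - statistics.mean(entry_scores), 1))
```

The truncated entry sample has a smaller standard deviation than the population it came from, and its follow-up mean is several points higher than its entry mean purely because selection was made on a noisy measurement. Any real trial effect sits on top of this artefact, which is why the entry criteria matter so much for interpreting the changes seen.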

They collected sufficient data to shed some light on these issues.

Like Anciendaze, I doubt they will.
 

Dolphin

Senior Member
Messages
17,567
Well done and thanks, Biophile, for your post: http://forums.phoenixrising.me/show...Trial-Protocol&p=172473&viewfull=1#post172473 on safety which seems to encapsulate most (not sure about all) of the points.

One could probably infer this from what you wrote, but just to be explicit: what may happen is that people doing GET (or CBT) could feel worse/have worsened symptoms, do less for a period of days or weeks, and then start more activity again, which isn't what GET is supposed to be about. This trial hasn't shown that if a person instead didn't reduce activity, that would lead to no "serious adverse events or reactions".
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Ah, Mark, I feel we have let you down :)

But you are right and this is important work - 150,000-ish words of this thread is probably a little indigestible. I'm thinking of having a go myself, but it won't be for a good month as I'm still looking at other stuff. One option would be to have a new 'PACE Summaries - no geeky posts here' thread where people could post summaries of either the whole thing or the parts that they felt were most important. It might be hard to get a single consensus document but we could probably agree on most of it, and then people could add summaries of parts they felt were important.

Using the main forum, rather than the wiki, might give the project more attention though perhaps moderators and others would feel this was inappropriate. Comments welcome.

Gosh, no, nobody has let me down! :) The work on this thread has been fantastic! Sorry if my comments earlier were a little harsh; what I meant to do really was just ask what progress there had been on the summaries because I've been a bit out of touch with this thread recently.

I think the point I was getting at was understood though: it's the short and snappy summaries, the easily readable campaign materials for the general public, the simple yet rigorous and well-justified statements of key points - that's what so often doesn't seem to get done. It's in translating what we all know into formats that are suitable for key audiences - friends and family; medical professionals; the media...that's where we tend to fall down: we have to lay the case out on a plate for people if they are going to absorb it; we know we can't expect any of these people to do any work looking into any of it for themselves so we have to go the extra mile to make things easy for them to digest.

The job I'm talking about is one that tends not to get done in all kinds of contexts, and indeed I'm often found complaining about it in a professional context, so it's not at all unique to us. Perhaps it's just a personal bugbear of mine, but what I see so often is an enormous amount of great work going into major projects, a great deal of detail being explored, but that huge amount of work going somewhat to waste because the relatively tedious and trivial tasks that come at the end of the process, like summarising the key points, proof-reading, repurposing for different audiences, etc, so often don't get the attention they deserve. It's just a pattern I see all the time: the last leg of the journey seems to be the most tedious, and the most neglected.

So my earlier comment was aimed at highlighting this problem, and reminding people that this summarising is really important, and suggest that maybe it's time to turn attention to that.

Oceanblue's suggestions look good to me: a thread where people can post their written responses to the PACE trial; and/or a thread where people can post their summaries of its flaws and work together to improve those summaries. I don't actually think we will need to worry much about achieving consensus on this one; it seems like a technical task to me as much as anything, and I don't see any major differences between us over any points of detail.

I'll just suggest a starting point, which might help get things moving: begin with brainstorming a simple list of succinct one-sentence summaries of 'things wrong with PACE', and then group those criticisms into categories, which can then serve as headings for gathering together the more detailed comment. I think I sketched something similar earlier in this thread.

I wouldn't start by trawling through the hundreds of posts on this thread - that would be a huge task - but by jotting down all those simple one-sentence summaries on a new thread and working together towards a comprehensive list. Flesh out more detail after that high-level structure has stabilised. One approach which can work well is to have a thread in which the first post, or the first two or three posts, are constantly being updated with the suggestions made by others further down the thread. That does require one person to take responsibility for managing that first post though, and monitoring the thread in order to add in those points as edits to the first post...

Finally, I must say a huge thankyou to everyone who's working on this issue, and in particular those who've posted links to their summaries, graphs, animations etc. Some fantastic work there! Particular thanks to Graham and Bob, Wdb, and Biophile. Lots for me to follow up on...I have a lot of other work on at the moment, but I'm hoping to manage to contribute a bit to the summarising stage so I'll keep my eye on things and hopefully jump in soon...

Thanks again everyone, and keep up the good work! :)
 

Dolphin

Senior Member
Messages
17,567
Gosh, no, nobody has let me down! :) The work on this thread has been fantastic! Sorry if my comments earlier were a little harsh; what I meant to do really was just ask what progress there had been on the summaries because I've been a bit out of touch with this thread recently.

I think the point I was getting at was understood though: it's the short and snappy summaries, the easily readable campaign materials for the general public, the simple yet rigorous and well-justified statements of key points - that's what so often doesn't seem to get done. It's in translating what we all know into formats that are suitable for key audiences - friends and family; medical professionals; the media...that's where we tend to fall down: we have to lay the case out on a plate for people if they are going to absorb it; we know we can't expect any of these people to do any work looking into any of it for themselves so we have to go the extra mile to make things easy for them to digest.

The job I'm talking about is one that tends not to get done in all kinds of contexts, and indeed I'm often found complaining about it in a professional context, so it's not at all unique to us. Perhaps it's just a personal bugbear of mine, but what I see so often is an enormous amount of great work going into major projects, a great deal of detail being explored, but that huge amount of work going somewhat to waste because the relatively tedious and trivial tasks that come at the end of the process, like summarising the key points, proof-reading, repurposing for different audiences, etc, so often don't get the attention they deserve. It's just a pattern I see all the time: the last leg of the journey seems to be the most tedious, and the most neglected.
It shouldn't be forgotten that quite a few of us submitted letters which summarised at least some of the important points. Hopefully some of them will be published and this will help get across to others the points made.

Letter writing takes a bit of work and commitment; but it can do what you say and summarise main points - especially when the word count isn't too tight (the 250 word count for the Lancet is a bit "mean" - most other journals allow longer letters).

So basically, although wikis etc. are great, what I'd really like to see in the future coming out of discussions are letters. And it is exactly the sort of thing those hyping the efficacy or safety of CBT, GET, etc don't want to see happen - being challenged in front of their peers.
 

oceanblue

Guest
Messages
1,383
Location
UK
biophile: Issue of "safety" in PACE, data obscured by a large grey area
Thanks for the analysis. I agree with both your points: that there was a safety net of caution in the trial's approach to activity (which isn't necessarily a bad thing), and that the incredibly high threshold for serious adverse events makes it impossible to detect anything short of problems requiring hospitalisation. It would also have been very interesting to see how patients attributed the cause of their serious events, as well as the 'independent' assessor (not that patient attributions would have any basis in reality...).
 

oceanblue

Guest
Messages
1,383
Location
UK
Mark
It's in translating what we all know into formats that are suitable for key audiences - friends and family; medical professionals; the media... we have to lay the case out on a plate for people if they are going to absorb it; we know we can't expect any of these people to do any work looking into any of it for themselves so we have to go the extra mile to make things easy for them to digest.
I agree with Dolphin that letters to journals are very important, but I think you're right that we need material for other audiences too, and even for scientists it would be helpful to bring all the points together in one place. I know of at least one person who is working on a summary of the main problems with PACE: hopefully it will get posted here and that might spur further work on user-friendly summaries.
 

Dolphin

Senior Member
Messages
17,567
Well done to Deborah Waroff who criticised the PACE Trial results at around 15:00 in the following piece:

White House Chronicle episode on CFS

http://www.whchronicle.com/2011/04/the-panel-discusses-chronic-fatigue-syndrome-an-orphan-epidemic

The panel discusses Chronic Fatigue Syndrome, an orphan epidemic
By Linda Gasparello
April 12, 2011 - 9:32 am

Guest: Amy Marcus, The Wall Street Journal; Frances Stead Sellers, The
Washington Post; Deborah Waroff, author; and Leonard Jason, DePaul
University

Click to watch-
http://www.whchronicle.com/wp-conte...whchronicle.net/upload/files/flv/WHC_3012.flv