PACE Trial and PACE Trial Protocol

biophile

Places I'd rather be.
Messages
8,977

Here and elsewhere, I really think you're onto something that's more potent than the depression/medication confounder. A staggering 80% of candidates were excluded from the trial. 3 sweeps of exclusions?! Others have already discussed on this thread the discrepancies between clinical cohorts vs research cohorts, so I won't repeat them.

Diagnosis of exclusion has been taken too far by the biopsychosocialists. We would expect common and obvious explanations for CFS-like symptoms to be excluded, but now anything that even remotely suggests organic disease is excluded as well, to a fault. Such a CFS construct changes from an "unexplained physical syndrome" into a circular ideology: exclude any symptoms, signs, or tests which may indicate obscure pathological disease processes as well, (pretend to) wonder why no pathology is found, assume or claim CFS is a functional disorder, then imply blanket application to all forms of ME/CFS. If the DSM-5 continues down its current path, they can always fall back on the new SSSD and CSSD categories whenever organic pathology is found!

Obvious neurological disorders and "organic brain diseases" would have been excluded from PACE. However, I think there is a grey area, or a limbo perhaps, between classical neurological signs and symptoms vs the more subtle manifestations which may not result in a diagnosis of a neurological disease but would exclude a patient from PACE. White (and Reeves) would want to exclude this grey area as much as possible to get a "clean" cohort. We know that White dislikes the CCC for including "neurological-like symptoms".

I enjoyed David Tuller's response to this claim from the principal investigators of the PACE trial: "Patients and their doctors now have robust evidence that there are two safe treatments that can improve both symptoms and quality of life, however the illness is defined." Tuller's answer was basically: your 3 different CFS cohorts were all subgroups of the same Oxford criteria, not a gold standard of truly different cohorts. I'd like to see White et al justify (with a straight face) the use of 60 as the threshold for "normal" levels on the physical function subscale of the SF-36, and then watch Bleijenberg & Knoop repeat their editorial with a straight face. Have any articles about PACE from outside the ME/CFS community picked up on that absurdly low threshold for normal?

wdb wrote on another thread: "I wish they'd stop calling it robust evidence, I'm pretty sure in other branches of medicine an unblinded trial with patient reported subjective measures, no comparable placebo control, and a modest 15% response over no treatment, would be considered extremely flimsy evidence."

And: "There are definitely double standards, imagine if someone did the same unblinded, subjectively measured, no placebo control trial for crystal healing or powdered goat hoof, and called it robust evidence, they would be a laughing stock."

anciendaze wrote: Don't expect experienced psychobabblers to print any falsifiable hypotheses. Somehow an entire branch of medicine has lost sight of a central feature of science.

When I first started reviewing the research on CBT and GET for ME/CFS and was open to whatever the "science" suggested, even if it contradicted my own anecdotal experiences, I also came across polite but strong wording in patient advocate commentary such as "smoke and mirrors", which at the time I thought was probably an exaggeration of valid criticisms, or at least a colourful description of them. However it seems this is accurate after all! Much of it seems to be "ideo-psychological" research.
 

biophile

Places I'd rather be.
Messages
8,977
[Dolphin and oceanblue on the number of patients who opted out of the 6MWT and how this could have inflated the average figures for walking distance]

PACE Table 6 (Secondary measures) - those who did the 6MWT: SMC n=118 (74%), APT n=111 (70%), CBT n=123 (76%), GET n=110 (69%)

I missed that! Most of the other measurements have 90%+ of patients giving data.


Interesting.

[oceanblue on natural recovery]

Good point. The average participant was only ill for a few years and seemed to have relatively rapid improvements in the first 12 weeks. Placebo responses are also plausible.

urbantravels wrote: Another question - about the gap between efficacy and effectiveness. I think there is a name for this gap (i.e. the "something" effect) but I can't call it to mind. Is there a standard or average value for this? Or even a guesstimate at what it might be for a given treatment - any given treatment? Would that gap alone be enough to wipe out the "moderate" benefits of CBT/GET that PACE allegedly showed, when the treatments are actually applied in non-clinical-trial contexts?

This was a very controlled and clean-cut trial which is unlikely to reflect what happens in the real world. So yes, I would imagine the efficacy-effectiveness gap is significant for CBT and GET in PACE. The same could be said for SMC: how many patients in the NHS receive STFU "therapy" and GTFO "therapy" rather than "standardised specialist medical care"? On the other hand, SMC probably didn't involve any exotic treatments some patients may try and find effective. White et al have argued that the poor results and negative reports from patient surveys are the result of badly applied therapy. I think Dolphin(?) has pointed out somewhere that many of these patients went to professional services. This may support the existence of an efficacy-effectiveness gap, and is very concerning if CBT and GET will now be further "rolled out" into the NHS.

oceanblue wrote: I'm sure the authors wouldn't withhold data unless there was a very good reason and I've no doubt it's for our own good.

This is an interesting point from a general perspective, beyond how data is cherry-picked to boost the apparent "success" of an RCT. Here we are dealing with researchers who believe "abnormal illness beliefs" are of fundamental importance to ME/CFS and who are concerned about patients joining support groups and going on the internet to read about their condition. I would not put it past such people to carefully phrase information or even lie to patients "for their own benefit", including assfacts about how effective CBT/GET is.

Dolphin wrote: And to repeat something that has been said at least once, if not more, the changes figures of 8 (SF-36 PF) and 2 (CFQ) used for clinically useful difference are artificially small because they are based on the SD which was artificially small because the same items were used for entry criteria.

Do you mean that the SD is low because the cut-off points skew the distribution?
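Dolphin's point can be checked with a quick simulation. The numbers below are purely illustrative (not the actual PACE distributions or cut-offs): truncating a roughly normal distribution at an entry cut-off shrinks its SD, so a "clinically useful difference" defined as 0.5 × SD of the entry sample shrinks with it.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustrative figures, not PACE's: a fatigue-type score
# in the wider clinic population, roughly normal, mean 20, SD 8.
population = [random.gauss(20, 8) for _ in range(100_000)]

# Suppose trial entry requires a score of at least 18 (illustrative
# cut-off). This truncates the distribution and shrinks its spread.
eligible = [x for x in population if x >= 18]

sd_full = statistics.stdev(population)
sd_truncated = statistics.stdev(eligible)

print(f"SD of full population:        {sd_full:.2f}")
print(f"SD of truncated entry sample: {sd_truncated:.2f}")

# A 'clinically useful difference' of 0.5 * SD is therefore
# automatically smaller when computed from the entry sample.
print(f"0.5 * SD, full:      {0.5 * sd_full:.2f}")
print(f"0.5 * SD, truncated: {0.5 * sd_truncated:.2f}")
```

The effect is mechanical: whichever side of the distribution the entry criteria lop off, the surviving sample has less spread, and any threshold defined as a fraction of that SD becomes easier to clear.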

Dolphin wrote: Given the model for GET (i.e. symptoms/deconditioning are temporary and reversible), I think there should be an obligation on them to report the figures or otherwise one doesn't know if the model has been tested:

oceanblue wrote: I think it would be a good idea if someone - ideally one of the ME charities - formally wrote to the authors asking for publication of data promised in the protocol but curiously absent in the paper, e.g. recovery rates. If that fails, there would then be the option of going to the MRC, who funded the trial to a massive extent and whose Trial Steering Group approved the protocol. It might be tricky for the MRC to turn down such a request.

Yes, shouldn't a publicly funded trial give open access to its raw data?!

urbantravels wrote: We've talked about the placebo effect, what about the nocebo effect in the "[evil version of] pacing" arm? If I recall correctly, there was discussion about how the "adaptive pacing" arm participants, as part of the protocol, were told their condition would not improve/could not improve, and that all they could do was to stay within the "envelope" (which was then defined in a very rigid way that most of us would find difficult to live with.) I do not know what, specifically, they were told, or if it's even true that they were told this.

This is a real possibility, reactivity bias is also a problem. The whole thing was unblinded too: "As with any therapy trial, participants, therapists, and doctors could not be masked to treatment allocation and it was also impractical to mask research assessors. The primary outcomes were rated by participants themselves. The statistician undertaking the analysis of primary outcomes was masked to treatment allocation."

[Marco on clinically useful improvement vs lack of objective validation for subjective measures]

Good point, especially for GET. It is unlikely the 41% of the GET group reporting feeling "much better or very much better" (vs 25% for SMC) had decent 6MWT distances.

Marco wrote: I can't recall where I saw this mentioned but someone suggested that any improvements seen in CBT and GET are due to boosting the feeling of 'self-efficacy' rather than anything specific to either therapy. On reflection it must have been a psychologist.

Dolphin wrote: [Stouten & Goudsmit 2010 re-evaulation of Prins et al 2001]: "The cognitive behavior therapy (CBT) program studied by Prins et al. is based on a model of chronic fatigue syndrome that posits that fatigue and functional impairment are perpetuated by physical inactivity, somatic attributions, focusing on bodily symptoms and a low sense of control. [...] The only variable in the model showing an effect of CBT was sense of control."

Which of course makes the 75%-debunked model even more generic than it already was. Very interesting when considering talk about how CBT is "used in other medical conditions". There is CFS-specific CBT, but it would be ironic for its proponents if the mild successes with it were generic and had nothing to do with CFS per se.

oceanblue wrote: I went back to that graph of the distribution of SF-36 scores for the general UK adult population (30% of whom are 65 or over) - and where the PACE results fit on it - and tried to calculate/estimate some numbers. For example, the baseline SF-36 scores of 38 correspond to about the bottom 10% of the UK population. The control SMC group at 52 weeks scored around 50, corresponding to the bottom 13% of the population. The CBT/GET groups at 52 weeks scored about 58, corresponding to the bottom 15% of the population. So the net effect of CBT or GET, after 1 year, was to move participants from around the bottom 13% of SF-36 scores to around the bottom 15%. Let's party. 22% of the UK population used in this study reported a long-term illness, though the authors of the relevant study (Bowling) say the face-to-face interview method used probably leads to under-reporting of ill-health. This probably isn't an entirely fair way of presenting things, but it's at least as fair as the 'within the normal range' stunt pulled by the PACE authors. This underlying graph is taken from page 9 of the open access article (notations added by me): Bowling SF-36 normative data. Although the article and picture are freely available there may be copyright issues so please don't reproduce this pic.

This bar graph from Bowling et al 1999 really was a wonderful find. If I recreated it from scratch using the same data (and your modifications), would that evade copyright issues? Hopefully most of the other graphs I've been working on will be ready on the weekend.
 

anciendaze

Senior Member
Messages
1,841
At the risk of irritating some who have been following this discussion, I will try to make my point again, after that I will give up.

A normal distribution is defined by two parameters: mean and variance. (SD is tied to variance.) These two numbers tell you everything you can possibly know about that distribution. No matter how long you study it you can't extract a scrap of meaningful information beyond these parameters. Correlations and p-values depend on these parameters.

The whole point of this exercise was ostensibly to learn something about the population from which these 600 or so patients were drawn. If the purpose was to entertain 600 people and provide harmless employment for researchers, the British taxpayer should hear about this expenditure of 5M pounds. If the mathematical model of that population, a subpopulation of the general population, was so thoroughly flawed you can't even assign consistent meaningful values to those two parameters, any statistical inferences drawn concerning the population outside those patients actually in the study are invalid.

I don't think I am being some kind of purist to insist that numbers on which the whole subsequent argument depends should have some meaning. The alternatives I've tried to bring up show that there is nothing particular about the numbers chosen. They could have had other values. This would change the bounds used in the study, or at least the meaning of those bounds.

In fact the choice of bounds dominates virtually every aspect of the data. If this is an arbitrary choice by researchers, they can pretty well make the numbers say whatever they want.
 

urbantravels

disjecta membra
Messages
1,333
Location
Los Angeles, CA
Apologies for repeating the same long post in two topics, but I just realized I posted this, meant for this thread, in one of the NYT threads about Tuller instead:

OK, since I've brought up questions about the placebo effect and the nocebo effect, I wanted to take a look at the APT arm protocol to see what was really in there. I had, inexcusably, been relying on some chance comment someone had made about how rigid the PACE version of "pacing" was, and I think I may have even been the one to coin the term "the evil version of pacing."

On the face of it, the APT used in PACE doesn't really look *that* bad or that radically unlike what we would understand as "pacing." But even on my first quick read I definitely saw some poison pills in there.

This is the one arm of the trial which claims to use a "pathological" model of ME, i.e. that it is a physical disease. The manual is sprinkled with familiar-sounding quotes from people with ME about how they learn not to do two major errands in one day, for instance, and how they learn to stop *before* they feel really exhausted, stuff like that.

To be noted, though: there is constant emphasis on how this approach should be communicated to patients as "not a cure", and that the best it can do for you is "create the conditions for natural recovery to occur." (So even this model doesn't contemplate ME as a disease that might be incurable, or that an individual might never recover "naturally.") The quotes from patients, though *we* know them all to be accurate representations of how pacing works - I gotta tell you those quotes sound like a lot of doom and gloom to the uninitiated. When I was first ill and would read statements like "don't do the laundry and get the groceries on the same day," I was NOT ready to hear that; in fact it would set off a fury of grief that I had to accept such awful limitations on my life, when I used to do dozens of things each day. The grief process involved in accepting that is a major, major undertaking - and these poor people weren't actually getting any emotional support about it, or any real reason to hope (say, by being told that research is ongoing and someday there might be better treatments available. Because as far as the authors of the PACE trial are concerned, all the necessary research has been done & *they* know the cure already - they're deliberately giving people in this arm a treatment that they themselves think is ineffective.)

On the other hand, the GET and CBT arms are filled with positive messages about self-empowerment, encouraging you not to think of yourself as really (or permanently) limited, "helping" you to "identify" your bad habits that are perpetuating this "vicious cycle" of fear of activity/deconditioning, etc., telling you over and over again that you can overcome this vicious cycle and improve your condition.

And then measure outcomes subjectively, after a good year of inculcating the proper attitude in each group of patients about what they can expect from their therapy.

Now think about the cohort issues: we've got a majority of patients in the trial who would never meet CCC, an unknown number of whom have never even experienced true PEM, a large number of whom probably have primary depression or some other fatiguing condition. Would being trained in "pacing" do these people any good at all? When the therapy is delivered with such a strong underlying message that "this probably won't help you improve at all"?

Even if you somehow accidentally got into this trial with real M.E. (it would have to be a mild case), the expectation that there is some "natural recovery" that might *possibly* occur would certainly lead to disappointment with the APT treatment. As far as I understand pacing, it's not going to be a cure or even make me feel dramatically *better* in any way; what it does is cut down on the worst of the suffering. Not an effect you'd feel if you weren't acutely suffering going in; and those folks were pretty well screened to eliminate anyone who was really suffering physically. And if you actually *were* fatigued because you were depressed and deconditioned, of *course* you wouldn't feel better after your 52 weeks with Eeyore being told to lie down and think of England. And you'd be pretty mad that you didn't even get any "natural recovery."

The folks hanging out with Tigger in the other two arms, where everything is wonderful and the power of positive thinking rules all, are being encouraged to believe they feel better. And, of course, if they really had been deconditioned and depressed, they might feel a bit better, especially in the GET arm - and they'd have that nice "sense of control" that they accomplished it through their own good efforts.

OK, guesses as to which group(s) get the placebo effect and which group gets the nocebo effect?

This is based on a very quick read and I'll have to delve deeper to flesh out these thoughts some more - some things still strike me as odd, such as the pacing group being *forbidden* to use heart rate monitors (?) and rely only on their "perception" of how fatigued they felt ... and the fact that some positive aspects of real pacing seem to have snuck their way into the CBT arm rather than being put in the APT arm.
 

Dolphin

Senior Member
Messages
17,567
Possibly still worthwhile submitting a letter

The Lancet's rule is that letters must be submitted within two weeks following publication.

One e-mail suggested this was two weeks after the article was first published online, but that would deny people who get the print edition the opportunity to write.

The print edition came out on the 4th/5th of March (the 5th is a Saturday, but perhaps that is possible with Saturday post).

So I think one could still send in a letter if you have something to say.
250 words max, 5 refs max (really 4 once you cite the original article), so one can't write forever.

Register at http://ees.elsevier.com/thelancet/ and then go back there and press "Author Login" and follow the steps.

Submissions will hopefully be collated. I'm not sure how many people will read this thread in the future, but published letters will have an impact, and hopefully even the collated unpublished ones will too.
 

anciendaze

Senior Member
Messages
1,841
random walks and patient foresight

A second point that has bothered me is how to incorporate patient beliefs into a random-walk model which might serve as a null hypothesis.

Actually modeling foresight in a computer is artificial intelligence. What I can do instead is assume patients have some such beliefs which sometimes work, and check the effect these have on a random walk. This is not some off-the-wall suggestion: the idea of a self-perpetuating belief system is at the core of the psychosocial model.

My first run of a model would have patients, grouped according to selection at the time they entered the study, moving up or down the scales randomly. When this run is complete, I would then go back and examine those walks which showed a negative trend toward the end of the study. If I assume that patients who acquire negative beliefs about whether the treatment they are receiving will improve their health drop out as soon as these beliefs become firm, I will eliminate those who show a downward trend later in the study, regardless of their position on the scale at the time they drop out. A better model might use a weighted probability of dropping out at each step.

You don't need to eliminate patients at the bottom of the scale, at the time they drop out, to bias results. They leave based on their own perception of benefits they are receiving. The more effective a treatment is in eliminating only those with negative foresight, the more effective it will appear. You can run this on a computer and play with various weighting schemes to see the effect it has. What we see in this study is within the range of such effects even for modest numbers of dropouts.
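anciendaze's thought experiment can be sketched in a few lines (my own toy version, not his actual model, with made-up parameters): give every simulated patient a pure random walk with zero true treatment effect, let negative-trending walkers sometimes drop out, and the completer-only mean drifts upward.

```python
import random
import statistics

random.seed(1)

STEPS = 52        # weekly score changes over a year
N = 1000          # simulated patients
DROP_PROB = 0.05  # weekly chance of dropping out while trending down

def walk_with_dropout():
    """One patient: a symmetric random walk (no true treatment effect).
    When the recent trend is negative, i.e. the patient has acquired a
    negative belief about the treatment, there is a chance they leave."""
    score = 0.0
    recent = []
    for _ in range(STEPS):
        step = random.gauss(0, 1)
        score += step
        recent.append(step)
        if len(recent) > 4:
            recent.pop(0)
        if sum(recent) < 0 and random.random() < DROP_PROB:
            return None  # dropped out; contributes no final score
    return score

finals = [walk_with_dropout() for _ in range(N)]
completers = [s for s in finals if s is not None]

print(f"completers: {len(completers)} / {N}")
print(f"mean final change among completers: "
      f"{statistics.mean(completers):+.2f}")
```

Even though every walk has expectation zero, the completer-only mean comes out positive, because dropout selectively removes negative-trending walks. Different weighting schemes just change the size of the artefact, not its direction.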
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
anciendaze wrote: At the risk of irritating some who have been following this discussion, I will try to make my point again, after that I will give up.

A normal distribution is defined by two parameters: mean and variance. (SD is tied to variance.) These two numbers tell you everything you can possibly know about that distribution. No matter how long you study it you can't extract a scrap of meaningful information beyond these parameters. Correlations and p-values depend on these parameters.

The whole point of this exercise was ostensibly to learn something about the population from which these 600 or so patients were drawn. If the purpose was to entertain 600 people and provide harmless employment for researchers, the British taxpayer should hear about this expenditure of 5M pounds. If the mathematical model of that population, a subpopulation of the general population, was so thoroughly flawed you can't even assign consistent meaningful values to those two parameters, any statistical inferences drawn concerning the population outside those patients actually in the study are invalid.

I don't think I am being some kind of purist to insist that numbers on which the whole subsequent argument depends should have some meaning. The alternatives I've tried to bring up show that there is nothing particular about the numbers chosen. They could have had other values. This would change the bounds used in the study, or at least the meaning of those bounds.

In fact the choice of bounds dominates virtually every aspect of the data. If this is an arbitrary choice by researchers, they can pretty well make the numbers say whatever they want.


Seeing as I was the one who raised the issue of statistical purity, I'll reply.

There is no argument or disagreement here.

The fact that the PACE authors have used parametric statistics inappropriately is undeniable and can and should be highlighted. It logically follows that any analysis derived from them has no validity whatsoever. However, this is a point best made by a professional statistician. I believe one of the PACE team is a medical statistician and would be the one to respond to such suggestions. I suspect the ensuing argument would be in the nature of 'oh no we didn't - oh yes you did', etc., and they could probably pull some 'we performed a log transformation on raw scores to approximate a normal distribution' argument or similar waffle (leaving aside that in doing so they would obliterate the very nature of the underlying distribution - but that's another matter). Whether or not highlighting this failing will convince anyone of the underlying 'bad science' is anyone's guess.

The other approach, which we seem to be taking here, is to set aside this legitimate criticism and to work from the data provided and the underlying statistical assumptions as they appear in the published paper. Accepting that they are rubbish, they are nevertheless the basis on which the PACE authors are claiming their limited success. Even accepting their erroneous assumptions, it's relatively easy to point out startling deficiencies in their analyses, not least the pathetically low baselines set for 'normal ranges'.

So my last word on this is: there is a very valid point to be made on the ropey stats, which invalidates all their results. But this point may be lost on many (particularly policy makers). There are also many points to be made on the data as presented which are likely to be more meaningful to the average observer. Equating 'normal' functioning with that of a 65+ year old is a fairly damning thing to highlight.

If the PACE authors choose to talk about means and SDs, then so be it.
 

Dolphin

Senior Member
Messages
17,567
One person's response to Lancet article

Here's one person's response which they said I could circulate.
(Ideally I would recommend responses that are referenced)

Improved CBT/GET sought: All we need is a better whip to flog a dead horse with, so it will run. Response to The Lancet editorial of 18 February 2011.
Susanna Agardy


The modest and oversold results of the PACE study demand that the physical abnormalities of ME/CFS be urgently confronted by the medical establishment. Even the loosely defined, highly selected, less debilitated sample of participants reached their limits following the ‘star’ CBT and GET interventions.

People with properly defined ME/CFS with characteristic post-exertional malaise and lower tolerance of exercise would have sent improvement scores crashing and drop-out rates soaring. Such people were excluded and would have in any case refused to take part in PACE.

For me, a severely disabled sufferer, a few extra metres of walking triggers a bewildering array of symptoms, the third day of such exertion being the worst and needing about six days for any improvement. I wish I was just fatigued! GET treatment is not safe for us and must not be recklessly forced on us.

The word ‘recovery’ has been given an abstract statistical definition and gives the media licence to make further exaggerations. There is no report of how many people who could not previously work returned to work, for example.

Dr Bleijenberg casts a euphoric glow over the results. His speculation about supposed CBT-related mechanisms of change sound to sufferers as appropriate as saying: ‘All we need is a better whip to flog a dead horse with, so it will run.’

It is time The Lancet regularly published and encouraged scientific research on the many serious physical abnormalities in this maligned illness. That would help patients.
 

oceanblue

Guest
Messages
1,383
Location
UK
Since I haven't been able to tempt anyone into making a fool of themselves, I guess I will have to risk exposing my cognitively-impaired dyscalculia to ridicule.

My arguments about outliers were meant to suggest a major problem with the mathematical models in use. One problem is that those beyond about 3 SD below the mean of the assumed distribution would have negative scores for physical activity. Tabloid headlines like "Study Proves UK has 80,600 Zombies!" could result. (I got this by a quick look at a table for the complementary cumulative distribution function, plus a guess at the current UK population. My remembered values for both gave results within an order of magnitude. Detailed calculations on meaningless data are a waste of time.)

On the more rational side of responses, we see that the model must break down before reaching 3 SD below the mean. Since the study groups were between 1 SD and a little over 2 SD, the question of exactly when, where and how the model breaks down is germane to questions about interpretation of published results.
Like most people, I'm probably out of my depth here, but while trying to get to grips with biostatistics (studied with a textbook chosen expressly because it promised jokes) I came across this, which might be relevant:
If the population of all subscribers to the magazine were normal, you would expect its sampling distribution of means to be normal as well. But what if the population were non-normal? The Central Limit Theorem states that even if a population distribution is strongly non-normal, its sampling distribution of means will be approximately normal for large sample sizes (over 30). The Central Limit Theorem makes it possible to use probabilities associated with the normal curve to answer questions about the means of sufficiently large samples.
But since I'm out of my depth, it might not be relevant either. However, I thought your contributions deserved some reply; I do try to read your posts but, apart from the good gags, I don't really grasp them.
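For what it's worth, the back-of-envelope "zombie" estimate above can be reproduced in a few lines. This is my own sketch using illustrative round figures (an SF-36 physical function mean of 84 and SD of 24 for the general population, and a rough UK population number; none of these are taken from the trial paper): a normal distribution fitted to a scale bounded at 0 necessarily assigns real probability to impossible negative scores.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Illustrative assumed values, not figures from the paper:
mu, sigma = 84, 24          # SF-36 PF, general population
uk_population = 62_000_000  # rough 2011 figure

# The SF-36 PF scale stops at 0, but the fitted normal does not:
p_negative = normal_cdf(0, mu, sigma)

print(f"P(score < 0) under the normal model: {p_negative:.6f}")
print(f"Implied 'impossible' individuals in the UK: "
      f"{p_negative * uk_population:,.0f}")
```

The tail probability is tiny, but multiplied across a national population it yields tens of thousands of people with impossible scores, which is the point: the model has to break down somewhere between the mean and that tail, and the study groups sit in exactly that region.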
 

oceanblue

Guest
Messages
1,383
Location
UK
"oceanblue wrote: When Bleijenberg & Knoop said PACE had used a 'strict definition of recovery' it was because they incorrectly thought that PACE has used a healthy population to define 'within the norm'. Which is pretty unimpressive in an editorial. "

Indeed. Bad peer review? The editorial even used the word "recovery", not "normal" as PACE does. A misunderstanding or typo on their part? Or a lack of fact-checking after a naive assumption that the PACE authors would stick to the protocol? Wishful thinking? Public relations or spin from comrades?
Apparently comment pieces are not normally peer reviewed. I think B&K made a genuine mistake, which goes to show how deceptive PACE were.

In fact, Knoop did a study on recovery which appears to be the origin of the 'within 1 SD of the mean' formula. However, that study explicitly applied the formula to a healthy population, defined as the general population excluding those who reported a long-term health issue. This gave a reasonable SF-36 threshold of 80. B&K assumed PACE did the same thing, but PACE just used the general population, including the sick, giving a threshold of 60. Sneaky, eh?

In case you think it was a simple 'mistake' by PACE to use the wrong population, it wasn't: Peter White co-authored that Knoop study which had explicitly used a healthy population.
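The arithmetic behind the two thresholds is as simple as it looks. The means and SDs below are illustrative round figures consistent with the 80 vs 60 cut-offs quoted in this thread, not numbers quoted from either paper:

```python
# 'Within normal range' defined as scoring at least mean - 1 SD
# on the SF-36 physical function subscale (0-100).

def recovery_threshold(mean, sd):
    """Knoop-style 'within 1 SD of the mean' cut-off."""
    return mean - sd

# Illustrative assumed figures:
healthy = recovery_threshold(93, 13)  # healthy subsample  -> 80
general = recovery_threshold(84, 24)  # whole population   -> 60

print(f"threshold from healthy population: {healthy}")
print(f"threshold from general population (sick included): {general}")
```

Including the sick both drags the mean down and widens the SD, so the cut-off drops twice over. That is the whole difference between a "strict definition of recovery" and a threshold below the trial's own entry criterion.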
 

oceanblue

Guest
Messages
1,383
Location
UK
Problem with self-reports acknowledged in a CBT study on MS

An additional limitation was that outcome assessment in this study depended on self-rated outcome measures. No objective measures exist for subjectively experienced fatigue, so we chose reproducible measures that are sensitive to change. However, self-reports are amenable to response bias and social desirability effects. Future studies could also assess more objective measures of change such as increases in activity levels and sleep/wake patterns using actigraphs or mental fatigue using reaction time tasks.

How refreshingly honest.

from A Randomized Controlled Trial of Cognitive Behavior Therapy for Multiple Sclerosis Fatigue
 

Dolphin

Senior Member
Messages
17,567
Letters sought in reply to PACE Trial article in a free newspaper for Irish doctors

If anyone wants to send in a reply to this, it'd be appreciated.
It was included in a free newspaper for Irish doctors.


Last year, after they published a piece on the Santhouse et al. editorial in the British Medical Journal, they published not one but five letters over a series of weeks (John Greensmith, Tom Kindlon, Gerwyn Morris, Orla Ni Chomhrai & Vance Spence; only two with Irish addresses) - that was most of the people who wrote in, as I recall.
They may be glad to fill up space in their newspaper.


People can also put comments online but letters would be preferred. You can always post your letter as a comment if you prefer.


If you sent in a letter to the Lancet, you could get a chance to re-use it (ordinary newspapers might find it too technical). Probably best to not put the references underneath - just put the name of the first author + et al. + year in brackets e.g. (White et al., 2011) to refer to Lancet paper. If you want me to look at it, feel free.

References aren't essential of course.

Even if your point doesn't relate to what is in the Irish Medical Times article, you can still criticise the study.


Probably best to keep letters under 400 words and ideally less than that again.
Address is: editor@imt.ie that's editor @ imt.ie


Don't forget to put your address in the letter and also a telephone number (which won't be published).


Thanks



http://bit.ly/hAvLon
i.e.
http://www.imt.ie/clinical/2011/03/cognitive-behavioural-therapy-not-harmful-in-chronic-fatigue.html

You are here: Home / Clinical times / Cognitive behavioural therapy not harmful in chronic fatigue

Cognitive behavioural therapy not harmful in chronic fatigue

March 18, 2011 By admin 1 Comment

Patient groups’ concerns that cognitive behavioural therapy (CBT) and graded exercise therapy could be harmful for the treatment of chronic fatigue syndrome can be allayed, according to a large study showing that both are effective and safe.


But the randomised PACE trial of nearly 650 patients did find that adaptive pacing therapy (APT) – a therapy sometimes favoured by patient groups – was no more helpful in reducing fatigue or improving physical function than specialist medical care (SMC) alone, contrary to the researchers’ initial hypothesis.

The British researchers randomised 160 people to each of the four treatment groups: CBT, GET or APT combined with specialist medical care, and a final group with specialist medical care only.

GET was based on “deconditioning and exercise intolerance theories of chronic fatigue” and consisted of negotiated, gradual increases in exercise intensity over the period of intervention. APT was based on the “envelope theory of chronic fatigue” and consisted of identifying links between activity and fatigue followed by a plan to avoid exacerbations.

Before treatment began, patient expectations were high for both APT and GET but lower for CBT and SMC, the researchers reported.

Those treated with CBT or GET in combination with SMC did better with respect to both primary outcomes — fatigue, measured on the Chalder fatigue questionnaire, and physical function, measured on the short form-36 physical function subscale.

The researchers concluded that both treatments were effective for chronic fatigue with “moderate” effect sizes. They suggested that the lack of benefit for APT combined with SMC could have been a result of the greater than expected improvement with SMC alone.

There were no more adverse reactions to the behavioural interventions than to specialist care alone, a finding that was important according to two researchers from the Expert Centre for Chronic Fatigue in the Netherlands.

“This finding is important and should be communicated to patients to dispel unnecessary concerns about the possible detrimental effects of cognitive behaviour therapy and graded exercise therapy, which will hopefully be a useful reminder of the potential positive effects of both interventions,”
they wrote in an accompanying editorial.

Lancet 2011; Online. doi:10.1016/S0140-6736(11)60096-2
 

anciendaze

Senior Member
Messages
1,841
central limit theorem

With the invocation of the central limit theorem, we are now officially in deep water.

Generally, the CLT is used in the opposite direction from the way they are going. If you have large numbers of IID (Independent, Identically Distributed) processes, these may result in an overall process with a normal distribution even if the individual process distributions are very different. The individual distributions need a few properties, such as a finite mean and finite variance. They must also be truly identical and statistically independent. The behavior of large numbers of identical electrons or molecules can give rise to normal distributions in this way. Here, though, we are talking about a single score distribution for the entire UK population, which is demonstrably non-normal.

The universe of samplings talked about here would have to be quite large. I'm sure researchers would appreciate a chance to run 10^9 5M pound studies. :D
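To make the direction of the CLT concrete, here is a minimal sketch with invented numbers (nothing to do with PACE data): draw from a deliberately skewed score population, and note that while the population itself stays non-normal, the means of repeated independent samples settle into a narrow bell-shaped cluster - which is the claim the CLT actually licenses.

```python
import random
import statistics

random.seed(0)

# A deliberately skewed, hypothetical 0-100 "score" population:
# an exponential tail reflected so that values pile up near 100.
population = [100 - min(100, int(random.expovariate(1 / 15)))
              for _ in range(100_000)]

# The population itself is non-normal: the mean sits below the
# median, the classic signature of a left-skewed distribution.
print("population mean:", statistics.mean(population))
print("population median:", statistics.median(population))

# What the CLT does describe: means of many independent samples.
# These cluster tightly around the population mean, even though
# the underlying distribution is badly skewed.
sample_means = [statistics.mean(random.sample(population, 50))
                for _ in range(2000)]
print("mean of sample means:", statistics.mean(sample_means))
print("spread of sample means:", statistics.stdev(sample_means))
```

The point: the CLT says something about averages of many independent draws, not that any single population's scores are normally distributed - so it cannot rescue a normal-theory analysis of one visibly skewed population.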
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
SF-36 : Norms, skewness and 'top box' scores in a surgical population.

I'd completely forgotten about this paper and apologies if it has already been posted.

It might be some help with letters :

"A review [2] of surgical QoL studies has found that there were several deficiencies in the conduct of these studies. One of the most common problems was inappropriate statistical analysis. The proper statistical analysis of data is essential in interpreting the results of any study. [3] Commonly, data from the SF-36 have been presented as means with standard deviations or standard errors of the mean. The basic assumption of these studies is that the data follow a normal (gaussian) distribution, having a bell-shaped curve. However, many of these studies did not perform the statistical tests [4] needed to determine if, indeed, the data follow the normal distribution necessary to use this type of statistical analysis."



"Conclusions: The SF-36 data did not follow a normal distribution in any of the domains. Data were always skewed to the left, with means, medians, and modes different. These data need to be statistically analyzed using nonparametric techniques. Of the 8 domains, 5 had a significant frequency of top-box scores, which also were the domains in which the mode was at 100, implying that change in top-box score may be an informative method of presenting change in SF-36 data"

http://archsurg.ama-assn.org/cgi/reprint/142/5/473.pdf
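The paper's point translates into a few lines of code. A minimal sketch with made-up, hypothetical scores (not the paper's data): when a 0-100 scale has a ceiling of "top box" scores at 100, the mean, median and mode disagree, and median/IQR plus the top-box proportion summarise the data far more honestly than mean ± SD.

```python
import statistics

# Hypothetical SF-36-style scores (0-100) with a ceiling effect:
# 40 of 100 respondents score the "top box" of 100.
scores = ([100] * 40 + [95] * 15 + [90] * 12 + [80] * 10
          + [70] * 8 + [50] * 7 + [30] * 5 + [10] * 3)

# Normal-theory summary vs. what the data actually look like:
print("mean:", statistics.mean(scores))      # pulled down by the left tail
print("median:", statistics.median(scores))  # higher than the mean
print("mode:", statistics.mode(scores))      # pinned at 100

# Nonparametric summaries of the kind the paper recommends:
q1, q2, q3 = statistics.quantiles(scores, n=4)
top_box = sum(s == 100 for s in scores) / len(scores)
print(f"median {q2}, IQR {q1}-{q3}, top-box {top_box:.0%}")
```

Mean 83.95 vs median 95 vs mode 100 is exactly the "means, medians, and modes different" pattern the conclusions describe, and the top-box proportion (here 40%) is their suggested way of presenting change.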
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
There's now a 'web appendix' published on the Lancet which I haven't seen before...

It lists the nature of all the 'serious adverse events', many of which, as the authors state, don't seem to be related to ME or the treatments...

See Page 5:
http://download.thelancet.com/mmcs/...b72946c:606a418:12ecddf56bf:20861300538790422
http://download.thelancet.com/mmcs/...72946c:606a418:12ecddf56bf:-35b81300536269641
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60096-2/fulltext#sec1
(One of these links might work, but I think you have to be logged into the Lancet.)

Some of the serious adverse events listed under the categories 'Inpatient investigation' and 'Increase in severe and persistent significant disability/incapacity' could be ME related, but otherwise they all seem unrelated.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
There's now a 'web appendix' published on the Lancet which I haven't seen before...

It lists the nature of all the 'serious adverse events', many of which, as the authors state, don't seem to be related to ME or the treatments...

See Page 5:
http://download.thelancet.com/mmcs/...72946c:606a418:12ecddf56bf:-35b81300536269641

Some of the serious adverse events listed under the categories 'Inpatient investigation' and 'Increase in severe and persistent significant disability/incapacity' could be ME related, but otherwise they all seem unrelated.

Problem is, the devil is in the detail (or the lack thereof).

If we take it as given (as I think we should, at least when considering this issue - think even of Peter White's dept denying that bowel problems are part of 'CFS/ME' in the NICE guidelines comments, for example), then these authors are ignoring neurological ME-related problems (they don't believe in ME with neurological features, Canadian-defined ME, etc.).

The caveat about the efficient exclusion of neurological ME patients notwithstanding, if ANY actual ME (or other misdiagnosed 'fatigue') patients were in the mix, an increase in disability might well be a result of 'treatment' upon an abnormal response to increasing exertion etc.

This in fact goes to the crux of the matter. They are blanket-claiming CBT/GET as safe for people with (even neurological) ME, against the evidence that it is contraindicated (which they don't adequately address, of course).

Adverse outcome details should have been made explicit. Just writing vague descriptions like these, and then saying something like "independent investigators thought they were nothing to do with our treatment" - which is basically what they've done - is really not good enough, and should have been picked up at peer review.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Problem is, the devil is in the detail (or the lack thereof).

If we take it as given (as I think we should, at least when considering this issue - think even of Peter White's dept denying that bowel problems are part of 'CFS/ME' in the NICE guidelines comments, for example), then these authors are ignoring neurological ME-related problems (they don't believe in ME with neurological features, Canadian-defined ME, etc.).

The caveat about the efficient exclusion of neurological ME patients notwithstanding, if ANY actual ME (or other misdiagnosed 'fatigue') patients were in the mix, an increase in disability might well be a result of 'treatment' upon an abnormal response to increasing exertion etc.

This in fact goes to the crux of the matter. They are blanket-claiming CBT/GET as safe for people with (even neurological) ME, against the evidence that it is contraindicated (which they don't adequately address, of course).

Adverse outcome details should have been made explicit. Just writing vague descriptions like these, and then saying something like "independent investigators thought they were nothing to do with our treatment" - which is basically what they've done - is really not good enough, and should have been picked up at peer review.

Yes, you have a good point Angela...

But it would be hard for us to categorise many of the events, such as surgery, hip replacements, accidental head injury and allergic reaction to bites, as being directly related to the treatment, although they could easily be ME related.

Events like the 'head injury' and 'hip replacement' could be due to a patient falling over as a direct result of a weakened body overdoing the GET. And pregnancy complications could be ME-related due to a flare-up after GET. So, yes, the devil is in the detail, and we won't ever know what the exact details are.

Some of the events listed could obviously be directly due to treatment-related flare-ups (e.g. blackouts, chest pain, "acutely unwell", epileptic seizure, "investigation of headache", chest infection, etc). And none (?) of these have been acknowledged as related to the treatments.
 

anciendaze

Senior Member
Messages
1,841
SF-36 : Norms, skewness and 'top box' scores in a surgical population.

I'd completely forgotten about this paper and apologies if it has already been posted.

It might be some help with letters :

"A review [2] of surgical QoL studies has found that there were several deficiencies in the conduct of these studies. One of the most common problems was inappropriate statistical analysis. The proper statistical analysis of data is essential in interpreting the results of any study. [3] Commonly, data from the SF-36 have been presented as means with standard deviations or standard errors of the mean. The basic assumption of these studies is that the data follow a normal (gaussian) distribution, having a bell-shaped curve. However, many of these studies did not perform the statistical tests [4] needed to determine if, indeed, the data follow the normal distribution necessary to use this type of statistical analysis."...
This fellow does know what he is talking about, but lacks the collection of merit badges required for weight of authority. Top box score analyses are not accepted standards in any field I'm aware of, however. As a comment suggested, there are many other non-parametric alternatives.

Unfortunately, the number of studies with the same fundamental flaw is large enough for incompetent researchers to outvote objectors. Also, consider that these studies were based on surgery, associated with presumption of organic causation. Had they reviewed psychological literature the state of the art would have been considerably worse.
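For the curious, one of those nonparametric alternatives can be sketched in a few lines (a toy illustration with invented numbers, not PACE or Arch Surg data): a permutation test on the difference of group medians, which assumes nothing about normality.

```python
import random
import statistics

random.seed(1)

def median_permutation_test(a, b, n_perm=5000):
    """Two-sided permutation test on the difference of group medians.
    No normality assumption: the null distribution is built by
    repeatedly re-shuffling the group labels."""
    observed = abs(statistics.median(a) - statistics.median(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.median(pooled[:len(a)])
                   - statistics.median(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm  # approximate p-value

# Invented skewed 0-100 scores for two hypothetical groups:
group1 = [100, 100, 100, 95, 95, 90, 80, 70, 50, 30]
group2 = [100, 95, 90, 85, 80, 75, 70, 60, 40, 20]
print("approximate p-value:", median_permutation_test(group1, group2))
```

Unlike a t-test, nothing here is distorted by the ceiling of scores at 100 or by the skewed tail: only ordering and shuffling are used.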

Even after his presentation, we have one response to the talk indicating that someone (McCarthy) fully intends to keep doing what he has been doing, with no apparent awareness that the behavior he claims to see in the data could be the result of the sampling process rather than of the population being sampled.

Meanwhile, someone in parliament should ask what will the UK do about 80,000 predicted zombies.:angel:
 
Back