
A new one from Nijmegen

Mithriel

Senior Member
Messages
690
Location
Scotland
In the abstract they say
Data from 3 randomised controlled trials on CBT for CFS were pooled and reanalysed.

but in the press release they say
To answer this question, the Nijmegen Centre Chronic Fatigue of the UMC St Radboud in Nijmegen, did a large survey of more than five hundred patients with CFS.
which gives the impression they did a study specifically designed to answer the question of whether deterioration happened.

We know that some of the fatigue scales used are not sensitive enough to detect deterioration. Meta-analysis is not good enough for such an important question. This is more publishing on the cheap.

And using the Fukuda definition but leaving out the symptom criteria would be laughed at rather than published in any other disease.

Mithriel
 

_Kim_

Guest
:Sign giggle:
Dorktor Kim
 

Doogle

Senior Member
Messages
200
Every time I see this group I discount their conclusions, because they have previously redefined CFS as simple fatigue in their CBT studies.

Cognitive behaviour therapy for chronic fatigue syndrome: a multicentre randomised controlled trial, Judith B Prins, Gijs Bleijenberg, Ellen Bazelmans, Philip Spinhoven, Jos W M van der Meer, Lancet 2001; 357: 841–47

"Patients were eligible for the study if they met the US Centers for Disease Control and Prevention criteria for CFS,(Fukada 1994) with the exception of the criterion requiring four of eight additional symptoms to be present."

Pure manipulation and dishonesty.

Let's put this in perspective for the sake of the civility discussion going on in another thread. When Mikovits or Peterson gets criticized for what they claim or do, people get bent out of shape. Yet no one makes a fuss when someone calls a valid study pure manipulation and dishonesty. This goes to the heart of this forum's credibility. Is this a neutral place where criticism of any study, claim or public figure is allowed? Or does the civility only apply when the criticism is directed against the viralist agenda?

PoetInSF, do you think their assertion that their study is about CFS (Fukuda definition), when patients were eligible to enroll with only chronic fatigue, is valid, honest and not manipulative? The Fukuda definition specifically states that such a study is not about CFS but about idiopathic chronic fatigue.

"A case of idiopathic chronic fatigue is defined as clinically evaluated, unexplained chronic fatigue that fails to meet criteria for the chronic fatigue syndrome. The reasons for failing to meet the criteria should be specified."

A study that uses ICF patients and then calls itself a CFS study, contrary to the definition they claim to be using, is not valid, and is manipulative and dishonest IMO.
 

Dolphin

Senior Member
Messages
17,567
I've sent in a letter on this. Fingers crossed they publish it. It'd be great if other people wrote in too, but it's not easy to write a letter that will be published - probably the easiest way to work up to that is by sending in e-letters where journals allow them, as the standard is lower there.

-------
I was corresponding with somebody who shared their thoughts on the paper. They say I could share them.

I just skimmed the article so some preliminary thoughts:

1. When combining three different trials with different populations and different interventions, one question would be whether it is even appropriate to combine them.

2. The drop-out rate is high. It's about 28-29% across the board based on Table 2. Although the researchers might have appropriately defaulted these subjects statistically to "unimproved" in their analyses of the scales, not providing the reasons for the dropouts means we might be missing problems not picked up by these scales, so I wonder if the original studies listed reasons. This is considered standard reporting data for trials in many journals.

and
Since your letter is about a paper on harms, I just found this article from 2004 from the well-respected CONSORT group, which helps standardize clinical trial reporting in major journals, about the poor reporting of harms in trials. If not used for the current letter, it can be saved for later.

http://www.consort-statement.org/extensions/data/harms/
I quoted from this.

1. I think your pointing out adherence is a good tack to take. It's funny how none of the papers bring up the actigraphy results after they talk about it in the methods!

2. The issue of combining the three groups is problematic in that:

- the Prins paper says that they used the CDC criteria but without the "4/8" symptoms; they explain this away in one of the letters by saying 18 were diagnosed with "idiopathic chronic fatigue" -- why didn't they just throw out this small group? It messes up the data.

Patients were eligible for the study if they met the US Centers for Disease Control and Prevention criteria for CFS,1 with the exception of the criterion requiring four of eight additional symptoms to be present.

- the interventions are different; the Stulemeijer paper has parents participating and states:
Both protocols differed from the treatment of adults.

- while the individual papers show a "baseline" chart to show that randomization succeeded, combining the three groups should mean that they need to re-do a "baseline" chart (to be really picky, the hours worked in Prins (16 in the CBT group vs. 12-13 in the other groups) and the duration of illness between the groups in Knoop (72 months vs. 96 months) suggest randomization might not have succeeded; values should be similar).
http://www.consort-statement.org/consort-statement/13-19---results/item15_baseline-data/

3. My main issue is that, for a paper about detrimental effects, the authors need to elicit from the subjects WHY they withdrew, especially with a drop-out rate of about 20%. Accounting for missing values statistically in the analysis by defaulting them to "deterioration" status is reasonable, but it doesn't cover reasons for withdrawal beyond what the authors hypothesized. For example, if people withdrew because of worsening cognitive impairment, I am not sure the scales they used would pick this up.
(I haven't had a chance to review the different scales though.)

Also, no additional information is given about why people decided not to start CBT. Perhaps, despite the scales saying the people are similar on the baseline data, these folks, after hearing more about the trial after randomization, were concerned they could not tolerate it.

(From Consort)

Discontinuations and withdrawals due to adverse events are especially important because they reflect the ultimate decision of the participant and/or physician to discontinue treatment.

It is important to report participants who are nonadherent or lost to follow-up because their actions may reflect their inability to tolerate the intervention.

Passive surveillance of harms leads to fewer recorded adverse events than active surveillance.

4. In terms of using data from surveys, here's a supportive statement from CONSORT about that:

Authors should contrast the trial results on harms with other sources of information on harms, including observational data from spontaneous reporting, automated databases, case–control studies, and case reports.
(which is what surveys and patient anecdotes are! So don't let others discount the value of those surveys. They have their limits but they have a value too.)

5. They use the DOF and DOP to monitor people, but how good are these scales? They sound like something they've used before, but how well tested are they (validity, reliability, etc.)? I have to do some more reading on scales.

6. Also, in future papers, watch for how they handle missing data (people who drop out). In the Stulemeijer and Knoop papers, they "carried forward the last observation", which means they assumed the person did not change in their rating for fatigue, etc.
This could be challenged, since the person might have deteriorated instead.
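
To make that assumption concrete, here is a rough sketch (my own toy numbers in Python, not the trials' data or code) of what "last observation carried forward" does to a drop-out's score compared with a worst-case "deterioration" assumption:

```python
# Rough illustration (hypothetical numbers, not trial data) of how
# "last observation carried forward" (LOCF) treats a drop-out compared
# with a worst-case ("assume deterioration") rule.
import pandas as pd

# Fatigue score at baseline and post-treatment (higher = more fatigued).
# NaN = the participant dropped out before the post-treatment measurement.
scores = pd.DataFrame({
    "baseline": [42, 38, 45],
    "post":     [30, None, None],
})

# LOCF: assume the drop-out's score did not change from baseline.
locf = scores["post"].fillna(scores["baseline"])

# Worst case: assume the drop-out deteriorated (here, arbitrarily +10 points).
worst_case = scores["post"].fillna(scores["baseline"] + 10)

print(pd.DataFrame({"LOCF": locf, "worst_case": worst_case}))
# The same missing patients look "unchanged" under LOCF and "worse" under
# the worst-case rule, which is why the reason for withdrawal matters.
```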

In the Heins paper they supposedly corrected for this with a fancier statistical analysis, but in the end they did not show their data, which could also be challenged.

The data of the 3 RCT were pooled to increase statistical power. Numbers were calculated on an intention-to-treat basis, and missing values on the postmeasurement were replaced with estimates derived from single imputation (missing variable analysis, regression with baseline value as predictor) [32]. Missing data in categorical variables were not replaced.
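
For what it's worth, here is a minimal sketch (invented numbers, not their code or data) of the kind of single imputation described there: fit a regression of the post-treatment score on the baseline score using the completers, then fill each missing post-treatment value with the regression prediction:

```python
# Toy sketch of regression-based single imputation: fit post ~ baseline on
# the completers, then replace each missing post-treatment value with the
# regression prediction. All numbers are invented for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
baseline = rng.normal(40, 5, size=50)
post = baseline - 8 + rng.normal(0, 4, size=50)   # fake "improvement"
post[:10] = np.nan                                # pretend 10 people dropped out

df = pd.DataFrame({"baseline": baseline, "post": post})
completers = df.dropna()

# Ordinary least squares: post = intercept + slope * baseline
slope, intercept = np.polyfit(completers["baseline"], completers["post"], 1)

missing = df["post"].isna()
df.loc[missing, "post"] = intercept + slope * df.loc[missing, "baseline"]

# Note: every drop-out gets the "average" outcome predicted from baseline,
# so this cannot reflect drop-outs who actually deteriorated.
print(df.head(12))
```

One thing this makes obvious is the point above: single imputation fills the gaps, but it says nothing about why those post-treatment values are missing.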
 

oceanblue

Guest
Messages
1,383
Location
UK
4. In terms of using data from surveys, here's a supportive statement from CONSORT about that:

Authors should contrast the trial results on harms with other sources of information on harms, including observational data from spontaneous reporting, automated databases, case–control studies, and case reports.
(which is what surveys and patient anecdotes are! So don't let others discount the value of those surveys. They have their limits but they have a value too.)

I'd strongly endorse this statement from your correspondent. While there are clear limitations to such non-random samples, the patient numbers in these surveys are enormous compared with those in published trials, so they are a valuable source of information. CONSORT's endorsement of this type of information means that researchers can no longer credibly dismiss such surveys out of hand.

Good luck with your letter.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
See also:
"How does cognitive behaviour therapy reduce fatigue in patients with chronic fatigue syndrome? The role of physical activity". http://www.ncbi.nlm.nih.gov/pubmed/20047707

Discussion

The data did not support a treatment model in which the effect of CBT on fatigue is mediated by an increase in physical activity. CBT did neither cause an increase in physical activity at the end of treatment (path a) nor was an increase in physical activity associated with a reduction in fatigue (path b). A formal test of the mediation effect confirmed that CBT yielded its effect independent of a persistent change in physical activity.

These results are in line with the study of Moss-Morris et al. (2005) in which it was demonstrated that not an increase in fitness but a change in preoccupation with symptoms mediated the effect of GET on fatigue. The results are also consistent with earlier research on CBT for CFS in which a reduction in fatigue was associated with a change in illness beliefs (Deale et al. 1998). In the light of these findings, changing illness-related cognitions seems to play a more crucial role in CBT for CFS than an increase in physical activity.

There are several potential alternative explanations for the fact that we did not find support for our mediation hypothesis. A substantial amount of patients did not complete actigraphy at second assessment and had to be excluded from our mediation analyses. It is possible that we introduced a bias through exclusion which might account for our findings. However, analysis of the baseline characteristics revealed that a selection bias is no likely explanation for our findings.

Our patients were not required to stick to their physical activity programme until the end of therapy. As treatment proceeded, they were allowed to substitute physical activities for other activities such as social ones. Consistently, treatment could have resulted in a temporary increase in physical activity which was no longer existent when second assessment took place. This temporary increase in physical activity during treatment might have been sufficient to facilitate a persistent change in illness-related cognitions. When patients learned that they were able to increase their level of physical activity despite their symptoms, their belief of having little control over their condition should have changed and with it also the perception of fatigue as an inherently aversive state. To examine these mechanisms of change in CBT for CFS, patients' physical activity and illness-related cognitions need to be monitored repeatedly during treatment.

Patients with a pervasively passive activity pattern have extremely low levels of physical activity. These patients do not respond to common CBT for CFS (Prins et al. 2001). A specifically tailored approach in which the physical activity programme is delivered earlier showed better effects for these patients (e.g. Stulemeijer et al. 2005). They might thus profit from a persistent increase in physical activity after all. Unfortunately, the number of patients was too small to properly examine whether a change in physical activity does mediate the effect of treatment in pervasively passive patients.

In contrast to pervasively passive patients, the majority of CFS patients is not only characterized by a low level of physical activity, but has also a deregulated pattern of physical activity in which short periods of high activity are alternated with longer periods of rest (van der Werf et al. 2000). These patients were taught to spread their activities evenly across day and week (Bleijenberg et al. 2003). Perhaps a change in activity regulation is more important to facilitate improvement in relatively active CFS patients than an increase in physical activity.

Taking these considerations into account, the exact role of physical activity in CBT for CFS remains to be determined. Besides physical activity, future investigations should also examine the role of changes in social, mental and work-related activities in CBT for CFS, preferably based on the time patients actually spend on these activities to limit perception bias. For the time being, our study was the first one to show that the severity of fatigue in patients with CFS is not reduced by CBT because patients have become more physically active at the end of their treatment. Based on these findings, physical activity programmes can better be understood as a way to facilitate change in other mechanisms which are more directly related to a change in fatigue. Among these mechanisms, a change in illness-related cognitions is likely to play a crucial role in CBT for CFS and should therefore be monitored closely during treatment.

These guys sound a little confused in their conclusions, since they actually demonstrated that CBT doesn't work in terms of QUANTITATIVE results, as opposed to subjective questionnaires that are subject to the placebo response. Also, maybe patients with a 'pervasively passive activity pattern' are like that because they have a severe organic illness, rather than because of 'perception bias'. Or is that the perception bias of the researchers? Now even I'm confused.
 

Dolphin

Senior Member
Messages
17,567
A reply has been published:

Psychother Psychosom. 2011 Jan 4;80(2):110-111. [Epub ahead of print]
Harms of Cognitive Behaviour Therapy Designed to Increase Activity Levels in Chronic Fatigue Syndrome: Questions Remain.
Kindlon T.

Irish ME/CFS Association, Dublin, Ireland.
Abstract
No abstract available.
Copyright 2010 S. Karger AG, Basel.

PMID: 21212715 [PubMed - as supplied by publisher]