PACE Trial and PACE Trial Protocol

anciendaze

Senior Member
Messages
1,841
According to Figure 1, only 33 people were excluded (5% of total recruits) on entry criteria of bimodal CFQ<6, as opposed to over 300 for SF36>65 (50% of total recruits). So potentially that's a very big effect on SD for SF36. On top of this is the exclusion of those too ill to participate which probably has a huge effect (though maybe not for CFQ as the ceiling effect means most of those excluded as too ill would have scored max of 33 anyway).
I think I've said before that it is entirely possible all results are due to selection effects, not treatment. We have internal evidence GET was more strongly selective in terms of adverse events and completing both 6MWTs. My assumed uniform distribution, drawn from the left tail of actual population data, was an example in which the bounds, not the population, determine the mean and "standard deviation".
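The point that bounds, not the population, can determine the mean and SD is easy to illustrate numerically. A minimal sketch with invented numbers (not PACE data): truncating a sample at an entry criterion shrinks the SD toward a value set by the cut, whatever the underlying population looked like.

```python
import random
import statistics

random.seed(0)

# Invented illustration: a normal-ish population of SF-36 physical
# function scores, then an entry criterion of "65 or below" applied.
population = [random.gauss(70, 20) for _ in range(100_000)]
eligible = [x for x in population if 0 <= x <= 65]

print(round(statistics.pstdev(population), 1))  # close to the full SD of 20
print(round(statistics.pstdev(eligible), 1))    # much smaller: set by the cut

# For comparison, a uniform distribution on [0, 65] has
# SD = 65 / sqrt(12) ~ 18.8, fixed entirely by the bounds.
```

The truncated sample's SD is roughly half the population's here, so any analysis quoting "SDs from the trial sample" is describing the selection window as much as the patients.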
 

Dolphin

Senior Member
Messages
17,567
Nice quote.

Re Top Box, I think 100/100 might be a bit too high. I've tried to estimate the data from the graph in the Bowling study - only 57% scored in the top box, and it looks like the top box is scores of 95 or 100.
View attachment 5239
Asking to see the scores of 95-100/100 following GET or CBT would be interesting.
 

anciendaze

Senior Member
Messages
1,841
A physical example of statistical reasoning and a non-normal distribution

After some frustration in trying to explain the issues in PACE to yet another person, I suddenly realized they did not have experience with any distributions except Gaussian and binomial, or any applications which used more than vague and woolly reasoning of the "maybe it is, and maybe it ain't" variety. The example below was dredged up from ancient memory. I hope it shows how you can make valid statistical arguments with good predictions. There is far more available on-line, or in textbooks. I hope this will have the concrete feel that gets lost before page 50 of most textbooks.

Assume you have done experiments which tell you air is composed of molecules, and you suspect air pressure is simply due to random impacts of many molecules. You can do a simple experiment to gain some insight into the distribution of velocities.

You inflate a small balloon, and hang it in the middle of a sealed room. After you exclude drafts and convection currents of air, the motion of the balloon becomes imperceptible. You also note that it takes a definite minimum pressure to inflate the balloon. Once it is inflated it approaches a spherical shape. At the very least, wrinkled balloons become smoother as inflated.

Together these things tell you a good bit. First, the mean of all molecular velocities striking the balloon must be close to zero, otherwise it would move. Second, there must be a definite, non-zero absolute (root-mean-squared) value for the deviation of velocities from zero. Third, the pressure of molecules inside and outside the balloon is the same in all directions (isotropic). One thing you didn't even think to question is that both room and balloon are three dimensional.

With this starting point, you can derive a distribution of velocities. The simplest possible assumption is that the component velocities along three axes at right angles will be independent, each Gaussian with the same mean (zero) and standard deviation. The assumption the 3-dimensional distribution is isotropic leads directly to derivation of a Maxwell-Boltzmann distribution for velocity magnitudes. This can be confirmed experimentally.
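For anyone who prefers to see the crank turned numerically rather than algebraically, here is a small simulation in arbitrary units: three independent zero-mean Gaussian components give speed magnitudes whose mean matches the Maxwell-Boltzmann prediction sigma * sqrt(8/pi), even though each component averages zero.

```python
import math
import random

random.seed(1)
sigma = 1.0  # common SD of each velocity component (arbitrary units)

# Speed magnitude from three independent zero-mean Gaussian components.
speeds = []
for _ in range(200_000):
    vx, vy, vz = (random.gauss(0, sigma) for _ in range(3))
    speeds.append(math.sqrt(vx * vx + vy * vy + vz * vz))

mean_speed = sum(speeds) / len(speeds)

# Maxwell-Boltzmann theory: mean speed = sigma * sqrt(8 / pi) ~ 1.596,
# and the most probable speed is sigma * sqrt(2) -- both nonzero,
# unlike the means of the individual components.
print(mean_speed, sigma * math.sqrt(8 / math.pi))
```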

(Many people see "turning the crank" to get from those assumptions to the distribution as impossibly forbidding. It is less difficult than turning the crank to predict the motion of the Moon from first principles to an accuracy appropriate for observation by the mark 1 human eyeball. Many millions of people took the recent prediction of an exceptional full moon for granted.)

What can we learn from this? First, you started with very good reason to believe the distribution could be completely described with two parameters. You even had a value for one of them. Second, you did not assume the normal distribution applied directly to any possible measurement you might make. You might have trouble making separate independent measurements of component velocities of individual molecules. The magnitude of the vector velocity leads to directly observable effects like ionization or chemical reactions.

While the Maxwell-Boltzmann distribution may look kinda, sorta Gaussian for large values of standard deviation, it is not. Not every smooth, one-hump distribution is Gaussian. The influence of three dimensions is important even when component velocities are Gaussian and statistically independent. The number of dimensions matters even when they are not immediately visible. The space in which this takes place has both a particular (Euclidean) geometric structure and dynamics.

My personal prejudice about reasoning in medical and psychological research is that somebody should have learned something about the space, geometry and dynamics describing health and illness after about 200 years of measuring various things in isolation. My own visits to doctors are less than reassuring.

My doctor looks at a printed list of laboratory measurements describing my physical condition. This might refer to a space with some 20 dimensions, or a number of different spaces with smaller dimensions. Instead, each measurement seems to exist in isolation. Even when a measurement falls outside of published norms, it is often ignored. It could be laboratory error, or a meaningless variation. In some cases, doctors will send me back to the lab repeatedly to get numbers they can ignore. I suspect the mark 1 human eyeball is saying "he don't look sick".

Just as a mathematical exercise, compare the volume of a bounding box with a bounding sphere as the number of dimensions goes up. In three dimensions, the volume outside the sphere is almost equal to the volume inside. In four dimensions and above, volume outside dominates. I don't know what shape describes health (apparently nobody does). I do know nature seems averse to boxes with square corners. It is entirely possible the bounding box approach misses the vast majority of health trends before irreversible damage takes place. Increasing the number of lines on a page for doctors to ignore is a great way to increase costs without corresponding benefits.
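The bounding-box comparison is easy to check with the standard n-ball volume formula:

```python
import math

def ball_fraction(n):
    """Fraction of the volume of a side-2 cube occupied by the
    inscribed unit ball in n dimensions."""
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # unit n-ball volume
    cube = 2.0 ** n                                    # [-1, 1]^n cube
    return ball / cube

for n in range(1, 11):
    print(n, round(ball_fraction(n), 4))
# n=3: ~0.52, so inside and outside are roughly equal;
# from n=4 on, the corners dominate and the fraction collapses fast.
```

By ten dimensions the inscribed ball fills less than a quarter of one percent of the box, which is the sense in which "inside the box on every measurement" says almost nothing in high dimensions.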

If you believe I am talking up purely hypothetical concerns take a look at trends in health-care costs and admissions through emergency rooms. Is the profession getting better or worse at prevention of expensive interventions? Are realized cost savings the result of prevention, shifting burdens or denial of claims?
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
Doesn't this tell you everything you need to know about how CBT 'cures' fatigue?


"Interestingly, at the end of treatment, the CBT group reported significantly lower levels of fatigue compared with the healthy comparison group normative score (t(107) = 6.67; p < .001). This trend for lower fatigue than healthy participants was maintained at 3 (t(107) = 4.48; p < .001) and 6 months follow-up (t(107) = 2.51; p < .01).

Fatigue levels for the RT group at the end of treatment were equivalent to those of the matched healthy comparison group (t(109) = 1.32; p = 0.19). At 3 months follow-up, their fatigue was significantly less than the healthy participants' fatigue levels (t(109) = 2.08; p < .05), while the two groups had similar fatigue severity scores at the last follow-up point (t(109) = 0.14; p = .89)."


Unless CBT miraculously produced physiological changes making participants healthier than healthy controls?
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Unless CBT miraculously produced physiological changes making participants healthier than healthy controls?

Haven't kept up with everything, but it was suggested earlier in this thread that their definition of "healthy controls" was the most fatigued 15% of a sample of visitors to GPs' clinics. Has there been any progress in pinning that down to a precise and definitive statement of who the "healthy controls" really were?
 

Dolphin

Senior Member
Messages
17,567
After some frustration in trying to explain the issues in PACE to yet another person, I suddenly realized they did not have experience with any distributions except Gaussian and binomial, or any applications which used more than vague and woolly reasoning of the "maybe it is, and maybe it ain't" variety. The example below was dredged up from ancient memory. I hope it shows how you can make valid statistical arguments with good predictions. There is far more available on-line, or in textbooks. I hope this will have the concrete feel that gets lost before page 50 of most textbooks.

Assume you have done experiments which tell you air is composed of molecules, and you suspect air pressure is simply due to random impacts of many molecules. You can do a simple experiment to gain some insight into the distribution of velocities.

You inflate a small balloon, and hang it in the middle of a sealed room. After you exclude drafts and convection currents of air, the motion of the balloon becomes imperceptible. You also note that it takes a definite minimum pressure to inflate the balloon. Once it is inflated it approaches a spherical shape. At the very least, wrinkled balloons become smoother as inflated.

Together these things tell you a good bit. First, the mean of all molecular velocities striking the balloon must be close to zero, otherwise it would move. Second, there must be a definite, non-zero absolute (root-mean-squared) value for the deviation of velocities from zero. Third, the pressure of molecules inside and outside the balloon is the same in all directions (isotropic). One thing you didn't even think to question is that both room and balloon are three dimensional.

With this starting point, you can derive a distribution of velocities. The simplest possible assumption is that the component velocities along three axes at right angles will be independent, each Gaussian with the same mean (zero) and standard deviation. The assumption the 3-dimensional distribution is isotropic leads directly to derivation of a Maxwell-Boltzmann distribution for velocity magnitudes. This can be confirmed experimentally.

(Many people see "turning the crank" to get from those assumptions to the distribution as impossibly forbidding. It is less difficult than turning the crank to predict the motion of the Moon from first principles to an accuracy appropriate for observation by the mark 1 human eyeball. Many millions of people took the recent prediction of an exceptional full moon for granted.)

What can we learn from this? First, you started with very good reason to believe the distribution could be completely described with two parameters. You even had a value for one of them. Second, you did not assume the normal distribution applied directly to any possible measurement you might make. You might have trouble making separate independent measurements of component velocities of individual molecules. The magnitude of the vector velocity leads to directly observable effects like ionization or chemical reactions.

While the Maxwell-Boltzmann distribution may look kinda, sorta Gaussian for large values of standard deviation, it is not. Not every smooth, one-hump distribution is Gaussian. The influence of three dimensions is important even when component velocities are Gaussian and statistically independent. The number of dimensions matters even when they are not immediately visible. The space in which this takes place has both a particular (Euclidean) geometric structure and dynamics.

My personal prejudice about reasoning in medical and psychological research is that somebody should have learned something about the space, geometry and dynamics describing health and illness after about 200 years of measuring various things in isolation. My own visits to doctors are less than reassuring.

My doctor looks at a printed list of laboratory measurements describing my physical condition. This might refer to a space with some 20 dimensions, or a number of different spaces with smaller dimensions. Instead, each measurement seems to exist in isolation. Even when a measurement falls outside of published norms, it is often ignored. It could be laboratory error, or a meaningless variation. In some cases, doctors will send me back to the lab repeatedly to get numbers they can ignore. I suspect the mark 1 human eyeball is saying "he don't look sick".

Just as a mathematical exercise, compare the volume of a bounding box with a bounding sphere as the number of dimensions goes up. In three dimensions, the volume outside the sphere is almost equal to the volume inside. In four dimensions and above, volume outside dominates. I don't know what shape describes health (apparently nobody does). I do know nature seems averse to boxes with square corners. It is entirely possible the bounding box approach misses the vast majority of health trends before irreversible damage takes place. Increasing the number of lines on a page for doctors to ignore is a great way to increase costs without corresponding benefits.

If you believe I am talking up purely hypothetical concerns take a look at trends in health-care costs and admissions through emergency rooms. Is the profession getting better or worse at prevention of expensive interventions? Are realized cost savings the result of prevention, shifting burdens or denial of claims?
Thanks for taking the time to write that.

Like a lot of people with ME/CFS, long pieces tire me so I didn't read it closely.

I think there may be easier ways to say that non-Gaussian distributions exist. I studied mathematics in college for 2 years before having to drop out due to severe ME/CFS [the equivalent of doing a joint degree of mathematics and another subject - I did two full years of Mathematics (really Mathematical Science), where people doing Mathematics and another subject do half a year at a time. The system is not like in the US, where people do subjects outside their area]. I came across neither "comparisons between the volume of a bounding box with a bounding sphere as the number of dimensions goes up" nor the "Maxwell-Boltzmann distribution".
 

Dolphin

Senior Member
Messages
17,567
Doesn't this tell you everything you need to know about how CBT 'cures' fatigue?


"Interestingly, at the end of treatment, the CBT group reported significantly lower levels of fatigue compared with the healthy comparison group normative score (t(107) = 6.67; p < .001). This trend for lower fatigue than healthy participants was maintained at 3 (t(107) = 4.48; p < .001) and 6 months follow-up (t(107) = 2.51; p < .01).

Fatigue levels for the RT group at the end of treatment were equivalent to those of the matched healthy comparison group (t(109) = 1.32; p = 0.19). At 3 months follow-up, their fatigue was significantly less than the healthy participants' fatigue levels (t(109) = 2.08; p < .05), while the two groups had similar fatigue severity scores at the last follow-up point (t(109) = 0.14; p = .89)."


Unless CBT miraculously produced physiological changes making participants healthier than healthy controls?
In case people are confused, Marco is not referring to the PACE Trial but to this trial:
van Kessel K, Moss-Morris R, Willoughby E, Chalder T, Johnson MH, Robinson E.
A randomized controlled trial of cognitive behavior therapy for multiple sclerosis fatigue. Psychosom Med. 2008 Feb;70(2):205-13. Epub 2008 Feb 6.

Thus the point is that people who have done CBT may artificially rate their fatigue levels as lower than they really are (because it's unlikely that people with MS actually have lower fatigue levels than a comparison group with a similar age and gender profile).
 

Dolphin

Senior Member
Messages
17,567
Haven't kept up with everything, but it was suggested earlier in this thread that their definition of "healthy controls" was the most fatigued 15% of a sample of visitors to GPs' clinics. Has there been any progress in pinning that down to a precise and definitive statement of who the "healthy controls" really were?
As I pointed out above, Marco wasn't referring to comments from the PACE Trial.

In the PACE Trial paper what they said was:
This range was defined as less than the mean plus 1 SD scores of adult attendees to UK general practice of 14.2 (+4.6) for fatigue (score of 18 or less) and equal to or above the mean minus 1 SD scores of the UK working age population of 84 (-24) for physical function (score of 60 or more). [32,33]
so they didn't use the word healthy.

With regard to the physical function scale, those figures are not the figures for those of working age but rather for the general population, which includes plenty of sick people of working age and also older people.

With regard to the fatigue scale, the questionnaire wasn't answered on attendance at the GP - they answered it at home. However, it wasn't restricted to a "healthy" sample, and indeed if you hadn't attended your GP in the previous 12 months, you weren't included. The mean was higher than the Norwegian figures for the general population, and particularly higher than those for healthy groups.
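For reference, the arithmetic behind the quoted normal-range thresholds is just mean plus or minus 1 SD:

```python
# Checking the normal-range arithmetic quoted from the PACE paper.
# Fatigue (Likert CFQ): mean 14.2, SD 4.6 -> "score of 18 or less"
fatigue_cutoff = 14.2 + 4.6          # mean + 1 SD = 18.8
print(int(fatigue_cutoff))           # integer scores of 18 or less qualify

# Physical function (SF-36 PF): mean 84, SD 24 -> "score of 60 or more"
pf_cutoff = 84 - 24                  # mean - 1 SD = 60
print(pf_cutoff)
```

Note that how representative those thresholds are depends entirely on which population supplied the mean and SD, which is the point at issue above.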
 

anciendaze

Senior Member
Messages
1,841
...I think there may be easier ways to say that non-Gaussian distributions exist. I studied mathematics in college for 2 years before having to drop out due to severe ME/CFS [the equivalent of doing a joint degree of mathematics and another subject - I did two full years of Mathematics (really Mathematical Science), where people doing Mathematics and another subject do half a year at a time. The system is not like in the US, where people do subjects outside their area]. I came across neither "comparisons between the volume of a bounding box with a bounding sphere as the number of dimensions goes up" nor the "Maxwell-Boltzmann distribution".
In one of my areas of specialization these things are routine. I'm sorry that tired you. I worked to avoid equations and stick to fairly concrete observations.

My point was not simply that non-normal distributions exist, but that they are as ubiquitous as the molecules of air you breathe. A more subtle point is that assumptions of normal distributions depend on the dimension, geometry and dynamics of the space in which your measurements take place. Many biological processes behave as if they have fractal (non-integral) dimension. When you are doing research you should be alert to the possibility you are not "in Kansas anymore". (To quote Dorothy from the Wizard of Oz.) Overlooking a major departure from the assumed distribution is not good science.

You can go from data points determined by a one-dimensional random walk to a common Gaussian distribution. Two and three dimensional random walks also exist, with corresponding Gaussian distributions. These can have surprising implications.

Photons released by fusion near the center of the Sun go through a three-dimensional random walk until they are about half-way to the surface. (After that energy moves up via convection at roughly a normal walking pace of a meter per second or more.) Each random step is short, because the plasma there has about the optical density of tar. However, each step moves at the speed of light. How long does it take energy released at the center of the Sun to reach the surface we see? On the order of one million years.

This should tell you just how badly your intuition about random processes can mislead you when you haven't any idea what is going on behind measurements.
 

oceanblue

Guest
Messages
1,383
Location
UK
Marco Dolphin
"Interestingly, at the end of treatment, the CBT group reported significantly lower levels of fatigue compared with the healthy comparison group normative score (t(107) = 6.67; p < .001). This trend for lower fatigue than healthy participants was maintained at 3 (t(107) = 4.48; p < .001) and 6 months follow-up (t(107) = 2.51; p < .01).
Bit off-topic as this refers to the Chalder/Moss-Morris study of MS but I'm pretty sure the reason for the CBT group scoring less fatigued than healthy is that they were confused by the Chalder questionnaire: it normally asks people how they are relative to 4 weeks ago but with a chronic illness (as in the PACE trial) participants are supposed to score relative to their health before they got ill. It looks to me that the scores below 11 indicate participants are saying they have less fatigue than 4 weeks earlier, not less fatigue than before they got ill.

If that's the case it's a bit of a blunder by the researchers and reviewers. Surely not?
 

Dolphin

Senior Member
Messages
17,567
In one of my areas of specialization these things are routine. I'm sorry that tired you. I worked to avoid equations and stick to fairly concrete observations.
No need to apologise to me. I have found some of the things you have said of interest e.g. random walks, and if time allows, I will read back on what you said in that post.

However, my point is simply that people can learn about distributions with simpler examples in more straightforward ways. For example, I did a year of mathematical physics but didn't come across the term "isotropic" in either that course or the probability & statistics course. I'm trying to tell other people not to give up even if they don't understand this example.
 

Dolphin

Senior Member
Messages
17,567
Bit off-topic as this refers to the Chalder/Moss-Morris study of MS but I'm pretty sure the reason for the CBT group scoring less fatigued than healthy is that they were confused by the Chalder questionnaire: it normally asks people how they are relative to 4 weeks ago but with a chronic illness (as in the PACE trial) participants are supposed to score relative to their health before they got ill. It looks to me that the scores below 11 indicate participants are saying they have less fatigue than 4 weeks earlier, not less fatigue than before they got ill.

If that's the case it's a bit of a blunder by the researchers and reviewers. Surely not?
I'm not sure if we can assume that. The scores were under 11 at follow-up of 3 months and 6 months i.e. they would then be looking back at how they were at 2 months and 5 months following treatment as their comparison (see Table 3 or Figure 2).
 

oceanblue

Guest
Messages
1,383
Location
UK
I'm not sure if we can assume that. The scores were under 11 at follow-up of 3 months and 6 months i.e. they would then be looking back at how they were at 2 months and 5 months following treatment as their comparison (see Table 3 or Figure 2).
Or it could be that they were still improving, but still have more fatigue than before they got ill. Either way, it's a very surprising finding.
 

Dolphin

Senior Member
Messages
17,567
Or it could be that they were still improving, but still have more fatigue than before they got ill. Either way, it's a very surprising finding.
Initially I thought you hadn't re-read the data showing that the scores were worse at 3 and 6 months than immediately after treatment (but still less than 11). However, I see now that they could be doing it, in which case the Chalder scale is a disaster - i.e. people's scores could appear to be getting worse even though they are improving. (Of course, theoretically it could be going up and down in between the measurement points.)
 

oceanblue

Guest
Messages
1,383
Location
UK
Successful treatment of fatigue in CFS should not be directed only at encouraging patients to increase activity levels but, in addition, particular attention should be paid to the cognitive processes that lie at the root of their physical inactivity: attributing complaints to a physical cause - and believing that activity is harmful and leads to fatigue.

This came from a 1997 paper by Vercoulen (and Bleijenberg) that articulates the biopsychosocial view of CFS rather well. It seems very appropriate to PACE, which draws on the same theory for its CBT model, yet PACE doesn't deliver the results, even though it delivered honed therapy for tackling exactly these 'flawed' cognitions.

Incidentally, this paper has interesting evidence that self-report questionnaires are not an accurate way of measuring patient activity levels. Full text (pdf)
 

oceanblue

Guest
Messages
1,383
Location
UK
Apologies, I wasn't clear - the scores were worse at 3 and 6 months than immediately after treatment (but still less than 11). (Of course theoretically it could be going up and down).
Just looked at that graph again: these results would fit with patients initially seeing an improvement and then, as they become more stable, moving back towards the 'no change' score of 11, i.e. still scoring relative to a month earlier, not pre-illness. I think that's probably a more likely interpretation than people saying their fatigue was in fact less than before they got ill (not sure even the most ardent CBTers would expect that, but who knows).
 

Dolphin

Senior Member
Messages
17,567
Just looked at that graph again: these results would fit with patients initially seeing an improvement and then, as they become more stable, moving back towards the 'no change' score of 11, i.e. still scoring relative to a month earlier, not pre-illness. I think that's probably a more likely interpretation than people saying their fatigue was in fact less than before they got ill (not sure even the most ardent CBTers would expect that, but who knows).
I edited that post when you were writing - it is actually possible that your initial point is correct and they were improving at each step! Doesn't look like that from the figure.

However, I'm afraid I remain to be convinced by your suggestion that they answered based on the last month rather than on their impression of pre-illness fatigue.
 

Dolphin

Senior Member
Messages
17,567
Successful treatment of fatigue in CFS should not be directed only at encouraging patients to increase activity levels but, in addition, particular attention should be paid to the cognitive processes that lie at the root of their physical inactivity: attributing complaints to a physical cause - and believing that activity is harmful and leads to fatigue.

This came from a 1997 paper by Vercoulen (and Bleijenberg) that articulates the biopsychosocial view of CFS rather well. It seems very appropriate to PACE, which draws on the same theory for its CBT model, yet PACE doesn't deliver the results, even though it delivered honed therapy for tackling exactly these 'flawed' cognitions.

Incidentally, this paper has interesting evidence that self-report questionnaires are not an accurate way of measuring patient activity levels. Full text (pdf)
That finding was mentioned in this comment on the PACE Trial (6th para on):
http://www.biomedcentral.com/1471-2377/7/6/comments#333618
Further evidence showing why objective measures are preferable in CFS trials particularly where cognitions could be changed following the intervention

Since writing my previous posts, further data on the subject has come to my attention.

Friedberg and Sohl [1] have just published the results of a study on an intervention involving Cognitive Behavior Therapy (CBT) which included encouraging patients to go for longer walks. It found that on the SF-36 Physical Functioning (PF) scale, patients improved from a pre-treatment mean (SD) of 49.44 (25.19) to 58.18 (26.48) post-treatment, equivalent to a Cohen's d value of 0.35. On the Fatigue Severity Scale (FSS), the improvement as measured by the Cohen's d value was even greater (0.78), from an initial pre-treatment mean (SD) of 5.93 (0.93) to 5.20 (0.95) post-treatment.

However on actigraphy there was actually a numerical decrease from a pre-treatment mean (SD) of 224696.90 (158389.64) to 203916.67 (122585.92) post-treatment (Cohen's d: -0.13). So just because patients report lower fatigue and better scores on the SF-36 PF scale doesn't mean they're doing more, which is what GET and CBT based on GET claim to bring about. These results seem particularly pertinent for this study given that the primary outcome measures are the SF-36 PF scale and a fatigue scale.
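Those effect sizes can be approximately reproduced from the quoted means and SDs using the pooled-SD form of Cohen's d (the paper may have used a slightly different variant, e.g. a paired-design formula, which would explain the small discrepancies):

```python
import math

def cohens_d(mean_pre, sd_pre, mean_post, sd_post):
    """Pooled-SD effect size; the source paper's exact formula may differ,
    so small discrepancies from the reported values are expected."""
    pooled = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (mean_post - mean_pre) / pooled

# SF-36 PF (higher = better); reported d = 0.35
d_pf = cohens_d(49.44, 25.19, 58.18, 26.48)
# FSS (lower = less fatigue), sign flipped so improvement is positive;
# reported d = 0.78
d_fss = -cohens_d(5.93, 0.93, 5.20, 0.95)
# Actigraphy counts (higher = more active); reported d = -0.13
d_act = cohens_d(224696.90, 158389.64, 203916.67, 122585.92)

print(round(d_pf, 2), round(d_fss, 2), round(d_act, 2))
```

The sign pattern is the point: self-report effect sizes are positive and moderate-to-large, while the objective actigraphy effect size is negative.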

Further reading shows that another study[2], published over a decade ago, showed the problem of using self-report data in CFS patients. The authors' rationale for the study was: "It is not clear whether subjective accounts of physical activity level adequately reflect the actual level of physical activity. Therefore the primary aims of the present study were to assess actual activity level in patients with CFS to validate claims of lower levels of physical activity and to validate the reported relationship between fatigue and activity level that was found on self-report questionnaires. In addition, we evaluated whether physical activity level adequately can be assessed by self-report measures. An Accelerometer was used as a reference for actual level of physical activity.". The authors reported on the correlations on 7 outcome measures in relation to the actometer readings: "none of the self-report questionnaires had strong correlations with the Actometer. Thus, self-report questionnaires are no perfect parallel tests for the Actometer."

Prof. White seems to be aware of the findings of this study as he has co-authored at least two papers [3,4] which quoted the findings. One of the places this paper was referenced even shows the problem I'm highlighting e.g. "support for this explanation comes from investigations that have described discrepancies between subjectively reported impairments and objective measures of activity" [4].

The authors of the 1997 study[2] pointed out that "The subjective instruments do not measure actual behaviour. Responses on these instruments appear to be an expression of the patients' views about activity and may be biased by cognitions concerning illness and disability." This was re-iterated in another paper[5]: "In earlier studies of our research group, actual motor activity has been recorded with an ankle-worn motion-sensing device (actometer) in conjunction with self-report measures of physical activity. The data of these studies suggest that self-report measures of activity reflect the patients' view about their physical activity and may have been biased by cognitions concerning illness and disability."

A corollary of the last statement is that reports of improvement in self-report measures in interventions which change "cognitions concerning illness and disability" may not be reliable. "Improvements" in self-report measures may simply show that patients have changed their cognitions with regard to how they view their illness, disability, symptoms, etc rather than actually representing improvements in activity levels and functional capacity.

Thus, I would suggest that actometers should be used whenever possible in CFS trials where one is investigating whether an intervention has brought about increased activity.

It is also interesting to note that in the large Van der Werf (2000) study[5], which involved 277 CFS patients (and 47 healthy controls), the authors divided the patients into "pervasively passive" (representing 24% of the patients), "moderately active" and "pervasively active" groups. They found that "levels of daily experienced fatigue and psychological distress were equal for the three types of activity patterns". So one can't necessarily tell how active a patient is from the fatigue levels they report.

Incidentally they also found that "there were no significant group, gender or interaction effects for the number of absolute large or relatively large day-to-day fluctuations (Table 2 and Table 3)." "The day-to-day fluctuation measures were based on somewhat arbitrary criteria (1 S.D. and 33% activity change). However, when we post hoc tested alternative criteria (50% or 66% activity change), again no significant group differences between controls and CFS patients emerged." Part of the rationale of many behavioural interventions in CFS patients is said to be to reduce "boom and bust" (sample reference,[6]). However, it may be the case that the frequency of this activity pattern in CFS has been exaggerated.
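As an illustration of how fluctuation criteria of that kind work in practice, here is a sketch applying the two thresholds described (1 SD of the day-to-day differences, or a 33% relative change). The daily activity counts are invented, not study data:

```python
import statistics

# Invented daily activity counts for one person over ten days.
daily_counts = [220, 150, 310, 290, 120, 260, 240, 180, 300, 140]

# Day-to-day differences and their (population) SD.
diffs = [b - a for a, b in zip(daily_counts, daily_counts[1:])]
sd_diff = statistics.pstdev(diffs)

# Criterion 1: absolute change larger than 1 SD of the differences.
absolute_large = sum(1 for d in diffs if abs(d) > sd_diff)

# Criterion 2: relative change of more than 33% from the previous day.
relative_large = sum(
    1 for a, b in zip(daily_counts, daily_counts[1:])
    if abs(b - a) / a > 0.33
)
print(absolute_large, relative_large)
```

Both criteria flag "large" swings, but each depends on an arbitrary cutoff, which is presumably why the authors re-tested with 50% and 66% thresholds as well.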

Tom Kindlon



[1] Friedberg F, Sohl S. Cognitive-behavior therapy in chronic fatigue syndrome: is improvement related to increased physical activity? J Clin Psychol. 2009 Feb 11.

[2] Vercoulen JH, Bazelmans E, Swanink CM, Fennis JF, Galama JM, Jongen PJ, Hommes O, Van der Meer JW, Bleijenberg G. Physical activity in chronic fatigue syndrome: assessment and its role in fatigue. J Psychiatr Res. 1997 Nov-Dec;31(6):661-73.

[3] Fulcher KY, White PD. Strength and physiological response to exercise in patients with chronic fatigue syndrome. J Neurol Neurosurg Psychiatry. 2000 September; 69(3): 302-307.

[4] Smith WR, White PD, Buchwald D. A case control study of premorbid and currently reported physical activity levels in chronic fatigue syndrome. BMC Psychiatry. 2006 Nov 13;6:53.

[5] van der Werf SP, Prins JB, Vercoulen JH, van der Meer JW, Bleijenberg G. Identifying physical activity patterns in chronic fatigue syndrome using actigraphic assessment. J Psychosom Res. 2000 Nov;49(5):373-9.

[6] Deary V, Chalder T. Chapter 11, "Conceptualisation in Chronic Fatigue Syndrome", in Formulation and Treatment in Clinical Health Psychology. Edited by Ana V. Nikcevic, Andrzej R. Kuczmierczyk, Michael Bruch.

Competing interests

No Competing Interests
 

Dolphin

Senior Member
Messages
17,567
Successful treatment of fatigue in CFS should not be directed only at encouraging patients to increase activity levels but, in addition, particular attention should be paid to the cognitive processes that lie at the root of their physical inactivity: attributing complaints to a physical cause, and believing that activity is harmful and leads to fatigue.
This came from a 1997 paper by Vercoulen (and Bleijenberg) that articulates the biopsychosocial view of CFS rather well. It seems very relevant to PACE, which draws on the same theory for its CBT model, yet PACE failed to deliver the expected results even though it provided honed therapy for tackling exactly these 'flawed' cognitions.

Incidentally, this paper has interesting evidence that self-report questionnaires are not an accurate way of measuring patient activity levels. Full text (pdf)
Yes, the Vercoulen paper is often mentioned to reference the model.

People might be interested in this paper:
Song, S., & Jason, L.A. (2005). A population-based study of chronic fatigue syndrome (CFS) experienced in differing patient groups: An effort to replicate Vercoulen et al.'s model of CFS. Journal of Mental Health, 14, 277-289. Retrieved from http://www.cfids-cab.org/cfs-inform/Subgroups/song.jason05.pdf - free full text there

A population-based study of chronic fatigue syndrome (CFS) experienced in differing patient groups: An effort to replicate Vercoulen et al.'s model of CFS

Abstract

Background: Vercoulen et al.'s (1998) model characterizes patients with chronic fatigue syndrome (CFS) as having insufficient motivation for physical activity or recovery, lacking an internal locus of control, and maintaining a self-defeating preoccupation with symptoms. However, this model has only been tested in a poorly specified group using a single comparison sample.

Aims: To investigate whether Vercoulen et al.'s model provides an adequate description of CFS in a community-based sample.

Method: A community-based sample recruited through telephone interviewing (N = 28,763) produced five groups (CFS, CF-psychiatrically-explained symptoms, CF-medically-unexplained symptoms, CF-substance misuse, and idiopathic CF). The data were analysed using path analysis. The endogenous (dependent) variables, fatigue severity, physical activity, and impairment, were ratio-level measurements and consisted of at least four values. The exogenous (independent) variables, except for causal attribution of fatigue, were also ratio-level measurements.

Results: The current investigation found that the Vercoulen et al. model adequately represented chronic fatigue secondary to psychiatric conditions but not CFS.

Conclusions: This finding points to important differences between CFS and psychiatrically-explained chronic fatigue which may have an impact on the development of therapy as well as explanatory models.
 

oceanblue

Guest
Messages
1,383
Location
UK
That finding was mentioned in this comment on the PACE Trial (6th para on):
http://www.biomedcentral.com/1471-2377/7/6/comments#333618
Ho hum, as ever you were there before me!
Thus, self-report questionnaires are no perfect parallel tests for the Actometer
I think they meant they were shite by comparison, since correlations were only 0.39 at best. Though of course there is a separate debate about how accurately actometers themselves measure physical activity levels in free-range individuals, as opposed to in the lab (interesting paper here about the actometer used in the recent Newton study, which appears better than previous figures for actometers).
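To put that verdict on a quick arithmetic footing: a correlation of r = 0.39 means the questionnaire shares only about r² ≈ 15% of its variance with the actometer, leaving roughly 85% of measured activity unexplained by self-report. A one-line check:

```python
# Shared variance (coefficient of determination) for the best
# reported questionnaire-actometer correlation, r = 0.39.
r = 0.39
shared_variance = r ** 2  # 0.1521, i.e. about 15%
print(f"r = {r}: self-report explains {shared_variance:.0%} "
      "of the variance in measured activity")
```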
 