PACE Trial and PACE Trial Protocol

oceanblue

Guest
Messages
1,383
Location
UK
Clinical significance - The Minimum Important Difference

Single-anchor methods generally aim to establish differences in score on the target instrument that constitute trivial, small but important, moderate, and large changes in QOL. However, they generally put great emphasis on a threshold that demarcates trivial from small but important differences: the minimum important difference (MID). One popular definition of the MID is the smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient's (health care) management.[31]

...Third, it emphasizes the primacy of the patient's perspective...

Very interesting point about the usefulness of the 6-min walking test gain to patients.

It also suggests we should pay attention - in the absence of reliable objective data - to the patients' self-rated Clinical Global Impressions, i.e. better/same/worse after treatment. These gave a 41% positive change (much better or very much better) for CBT & GET, vs 25% for the medical care control. That's a net gain of 16% for the 'active' therapies, with 6 out of 10 patients not helped at all (and about 6% worse, slightly more in the control group).
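For what it's worth, here's the arithmetic in a few lines of Python (the percentages are just the ones quoted above):

```python
# Rough arithmetic behind the figures quoted above (my sketch, not from the paper)
cbt_get_better = 0.41   # much/very much better on the CGI, CBT & GET
control_better = 0.25   # same rating, medical-care control
print(f"net gain: {cbt_get_better - control_better:.0%}")   # 16%
print(f"not helped by CBT/GET: {1 - cbt_get_better:.0%}")   # 59%, i.e. about 6 in 10
```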
 

Dolphin

Senior Member
Messages
17,567
A GET therapist who seemed more enthusiastic than an APT therapist

People can decide for themselves how important or not this is. I'm clearing out my inbox (apologies to the people I owe replies to) and want to post somewhere some bits and pieces I've picked up.

As I think I said, I'd be interested in any other "eye witness" accounts people have picked up anywhere. I think the GET program may have been like a pacing program for some (e.g. the participant who was counting 5-10 mins of housework as her exercise for the day), which could get good results: the people with just chronic fatigue, or who had recovered from the EBV, could exercise more, while others didn't overdo it, so on average one could get an increase in step counts, etc.

http://www.guardian.co.uk/society/2...e-treatment?commentpage=all#start-of-comments

othersideofvenus
18 February 2011 11:27AM

"I took part in this study, and was randomised to the GET group, and I'd be very sceptical about its results.

My initial blood tests showed some signs of infection and inflammation so I was sent for another set which apparently didn't, so I could be accepted into the trial. The assessment/criteria forms which had to be filled out before and during the trial did not mention symptoms after exertion or delayed-onset fatigue, there was very little attention paid to pain, and cognitive/mental issues were very blurred.

At the start of the trial, I had to wear an accelerometer thing for a week, presumably to measure activity levels. But at the end of the trial, this wasn't repeated. The fitness tests measured the number of steps I could do in a set amount of time, but paid no attention to the fact that I usually couldn't walk for 2 days after these assessments.

The 'handbook' I was given contained an incredibly flawed model, which GET is based on, which basically goes 'felt a bit ill - led to resting too much - led to deconditioning - led to the ME/CFS symptoms'. This completely ignores the fact that the vast majority of people don't rest early on and carry on pushing themselves despite severe pain and fatigue.

I would suggest that the criteria were so vague and the assessment so poor that a majority of the people who recovered using GET never had ME/CFS in the first place."

othersideofvenus (again)
18 February 2011 12:42PM

"Another possible issue with the trial is the differing qualities of the therapists. The person I saw for GET was actually very positive about the benefits of it, and claimed to have seen enormous improvements in everyone he had treated. I can imagine him getting brilliant results with people suffering more from depression or other problems. I got to see someone for Pacing after the PACE trial year as there was a occupational therapist needing some 'practice' patients before she could work in the trial. While I personally found the pacing to be really helpful (although genuinely frustrating at first as I kind of had to reduce what I was doing for a while), the therapist was completely different, and far less inspiring.

There's also an enormous problem in misunderstanding these 3 therapies, by doctors too. GET doesn't mean pushing yourself further each day, as one of my doctors told me, leading to my first enormous crash. Neither is Pacing just accepting your limits and never pushing yourself at all. It's not prescribed laziness as some people seem to think!"
 

Dolphin

Senior Member
Messages
17,567
Cost of PACE Trial

Reference for the UK £5.0m cost

http://tinyurl.com/ydsv857
i.e.
http://www.rae.ac.uk/submissions/ra5a.aspx?id=176&type=hei&subid=3181
You are in: Submissions > Select unit of assessment > UOA 9 Psychiatry,
Neuroscience and Clinical Psychology > University of Edinburgh > RA5a
UOA 9 - Psychiatry, Neuroscience and Clinical Psychology
University of Edinburgh

[..]

"the PACE trial (7 UK centres) of chronic fatigue syndrome (CFS) treatments
(MRC; 5.0M);"

Here's another reference - it doesn't give the full figure, as one can see:

From the figures below:
£2,076,363 (MRC)
£1,800,600 (DH)
£702,975 (MRC)
£250,000 (Chief Scientist Office, Scotland)
------
£4,829,938 + DWP money (unknown)
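A quick check of that sum (the DWP figure remains unknown, so it isn't included):

```python
# Quick check of the component figures listed above (DWP contribution unknown)
components = [2_076_363, 1_800_600, 702_975, 250_000]  # MRC, DH, MRC extension, CSO
print(f"£{sum(components):,}")  # £4,829,938
```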

(Yes this is the same web page but it is a summary of a different entry)

http://tinyurl.com/ydsv857
i.e.
http://www.rae.ac.uk/submissions/ra5a.aspx?id=176&type=hei&subid=3181

You are in: Submissions > Select institution > Queen Mary, University of London > UOA 9 Psychiatry, Neuroscience and Clinical Psychology > RA5a
UOA 9 - Psychiatry, Neuroscience and Clinical Psychology
Queen Mary, University of London
RA5a: Research environment and esteem

[..]

White showed that recovery from CFS is possible following CBT (Knoop et al, 2007). The MRC-funded PACE trial, led by White, evaluates CBT, graded exercise, adaptive pacing and usual medical care in the treatment of CFS, and is over half-way completed (http://www.pacetrial.org/) (PACE trial MRC 04-09 £2,076,363, DH Central Subvention 04-09 £1,800,600; MRC PACE trial extension 09-10 £702,975).
=========
SCOTTISH PARLIAMENT - WRITTEN ANSWER

2 December 2005

Health Department

Janis Hughes (Glasgow Rutherglen) (Lab): To ask the Scottish Executive what funding it has awarded for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) services or research since the CFS/ME short-life working group reported in 2002.

(S2W-20924)
Lewis Macdonald:

NHS Boards are given unified budgets, increased by an average of 7.6% in the current financial year, from which they are expected to meet the costs of services for people with CFS/ME and all other chronic conditions. It is for NHS Boards to decide how their unified budgets should be distributed, based on their assessments of local needs.

The Chief Scientist Office (CSO), within the Scottish Executive Health Department, has responsibility for encouraging and supporting research into health and health care needs in Scotland. CSO is currently contributing £250,000 to the Medical Research Council project 'Pacing, Activity and Cognitive behaviour therapy: a randomised Evaluation (PACE)' which compares different approaches to the clinical management of patients with CFS/ME.
 

Dolphin

Senior Member
Messages
17,567
I have been perplexed as to how the London criteria patients could essentially score the same on Fatigue and Physical Function as all participants and the Reeves criteria patients in Figure 2 of the PACE study paper, if they weeded out the primary depressive illness and anxiety disorder/neurosis patients. Well, they may not have. At the top of the London criteria form it states: "Criteria 1 to 4 must be met for a diagnosis of ME to be made."


[Attachment: London criteria.jpg]
Great find, Doogle.
 

Dolphin

Senior Member
Messages
17,567
They use this paper to justify changes to their primary measures: http://www.ncbi.nlm.nih.gov/pubmed/19455540

It prompted three replies and then a rejoinder. Maybe they cited a controversial statistics (!) paper in order to justify dubious manoeuvres? Is anyone here likely to have a good enough understanding of statistics to comment?
Well my knowledge of statistics is incomplete.

But I might be interested enough to actually read it and the correspondence if anyone can get it.

There might, for example, be a "killer argument" in the correspondence.

Measurement in clinical trials: a neglected issue for statisticians?

Stat Med. 2009 Nov 20;28(26):3189-209.

Senn S, Julious S.

Department of Statistics, University of Glasgow, Glasgow G12 9LL, U.K. stephen@stats.gla.ac.uk

Comment in:

Stat Med. 2009 Nov 20;28(26):3210-2; discussion 3223-5.
Stat Med. 2009 Nov 20;28(26):3213-4; discussion 3223-5.
Stat Med. 2009 Nov 20;28(26):3218-9; discussion 3223-5.
Stat Med. 2009 Nov 20;28(26):3215-7; discussion 3223-5.
Stat Med. 2009 Nov 20;28(26):3220-2; discussion 3223-5.

Abstract
Biostatisticians have frequently uncritically accepted the measurements provided by their medical colleagues engaged in clinical research. Such measures often involve considerable loss of information. Particularly unfortunate is the widespread use of the so-called 'responder analysis', which may involve not only a loss of information through dichotomization, but also extravagant and unjustified causal inference regarding individual treatment effects at the patient level, and, increasingly, the use of the so-called number needed to treat scale of measurement. Other problems involve inefficient use of baseline measurements, the use of covariates measured after the start of treatment, the interpretation of titrations and composite response measures. Many of these bad practices are becoming enshrined in the regulatory guidance to the pharmaceutical industry. We consider the losses involved in inappropriate measures and suggest that statisticians should pay more attention to this aspect of their work.

PMID: 19455540 [PubMed - indexed for MEDLINE]
 

Dolphin

Senior Member
Messages
17,567
Change in outcome measures

(Pretty obvious in one way)
We know that the authors moved the goalposts from when they published the trial protocol. This can be criticised as having been done to suit the results.

But another point about this is that when a trial is first approved, it has to go through peer review to be accepted for funding by the MRC. Would those peer reviewers have been happy, for example, that a Physical Functioning (questionnaire) score of 60 would be seen as acceptable for normal functioning, when participants could enter with that score? Similarly for the fatigue criteria? And for lots of the other measures used? i.e. would it have been approved for taxpayers' money!? Maybe that's a different angle to bring up in advocacy, for anybody so inclined.

I also think there is a reasonable chance that, when it was first approved by peer review for funding, actometers were outcome measures.

i.e. at the start, would it have been considered a good use of £5m of taxpayers' money to get the data everyone has been presented with, including goalposts that were moved?
 

Dolphin

Senior Member
Messages
17,567
This is not really new, just a restatement of defects in rating scales found by oceanblue. You don't need any advanced mathematics at all to understand the potential problem. This is how I understand it.

The protocol has always stated that improvement should mean a 50% reduction in fatigue. The original scale went from 0 to 33 points. The scale used at the end of the study had four bins (0,1,2,3). Patients with fatigue ratings above 30 are frequently bedbound or housebound, thus unlikely to participate. A group mean of 28 suggests a large number of patients entering the study were in the range 25-30 on the first scale.

If a patient entered the study with a score of 28 and this dropped to 14, this would be counted as improvement on either scale, because 28->bin 3 and 14->bin 1, a drop of two points on the new scale. However, if I understand the relation between old and new scales correctly, a score of 15->bin 1 and 26->bin 3. This means patients who, on the old scale, dropped from 26, 27, 28 to 15, would now go from bin 3 to bin 1 on the new scale. None of these would have been counted as improvement before; all would count as improvement after the change.
I think you've picked up the scoring incorrectly.
With bimodal scoring, you can score 0 or 1 on any question, so the total possible scores for 11 questions are 0-11. For each question, there are four options, scored 0, 0, 1, 1.

With Likert scoring, you can score 0, 1, 2 or 3 on any question, so the total possible scores for 11 questions are 0-33. For each question, there are four options, scored 0, 1, 2, 3.
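A little sketch in Python of the two scoring schemes, in case it helps (illustrative only; the four response options per item are as just described):

```python
# Chalder Fatigue Questionnaire: 11 items, four response options each.
# Likert scoring maps the options to 0, 1, 2, 3 (totals 0-33);
# bimodal scoring maps the same options to 0, 0, 1, 1 (totals 0-11).

def likert_score(option_indices):
    return sum(option_indices)                       # option index 0-3 IS the Likert score

def bimodal_score(option_indices):
    return sum(1 for i in option_indices if i >= 2)  # options 2 and 3 each score 1

answers = [2] * 11                # e.g. "more than usual" on every item
print(likert_score(answers))      # 22
print(bimodal_score(answers))     # 11
```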

Another problem shows up with the scale for physical activity. Originally, the study required scores of 30-60 for entry. This was expanded to 30-65. Quantization of this scale is limited to multiples of 5, so the number of bins is much smaller than it looks. At present it looks like a patient could enter with a score of 65, drop to 60, and be counted as improved, which certainly seems wrong. There was no need to exclude patients whose activity score dropped to 25 (the first score below 30) because they would be unlikely to show up for meetings. Interventions to protect patients from "adverse outcomes" would be likely in this range.
I don't recall 30 ever being specified as a boundary. We might have mentioned it in the discussion that it's unlikely people with lower scores would enter, but I don't believe it's official.

Technically, if they dropped from 65 to 60 they wouldn't qualify for the improved category (which requires an increase of 8 on the scale). However, the "functioning in normal range" category has been presented (outside the paper) as a subset of this, and they would fit that.
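In case it's useful, here's a quick sketch of the quantisation point from the post above (assuming, as stated there, that SF-36 PF scores can only move in steps of 5):

```python
# SF-36 physical function: 0-100 in steps of 5 (assumption from the post above).
# An "increase of 8" criterion can therefore only be met in practice by a jump of 10+.
baseline = 65
possible = range(0, 101, 5)
qualifying = [s for s in possible if s - baseline >= 8]
print(qualifying[0] - baseline)   # 10 -> smallest achievable qualifying increase
```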

The major absurdity is that these are subjective ratings in a study where a major effort went into changing subjective values. Remembering or forgetting a single activity could change an activity score by 10, and we all know about problems with cognitive deficits. There are also plenty of labeled terms in the literature for bias introduced by either peers or authority figures. You could hardly exclude these from a study of the effectiveness of changing "illness beliefs".

Yes.
There are maybe two aspects to this:
(i) they believe participants have faulty illness beliefs and behaviours - so why would one use their perceptions as the outcome measures? It's perhaps like having a trial of people with anorexia nervosa and simply asking the participants whether they think they are a healthy weight before and at the end, but not taking any measures of their weight/BMI or similar;
(ii) some of the treatments themselves may alter how patients may answer questions.
 

Dolphin

Senior Member
Messages
17,567
Here's the abstract from the first link above - part of the justification for moving the goalposts:

It does seem very technical and it isn't obvious how it applies to the changing of the measures they used, but part of what I read there is an argument against using "inappropriate measures"... and maybe I'm reading too much into it, but here's an argument I could paraphrase which may or may not be relevant...

So...if you do a study to examine some major effect, and get the right numbers of people to examine that effect - say, 640 people in 4 groups - then you've got your overall, large-scale study design, designed in advance to have the right statistical properties to give meaningful results (allegedly).

Then: suppose you collect more detailed data as well, along the way: lots of other dimensions of information. That more detailed data wasn't the core of the original experiment, so its statistical properties aren't part of the design. Then, after the study, you find correlations between variables that you weren't explicitly looking for. Perhaps within one of the 4 groups of 160 you find a further breakdown that suggests - say - that the people who responded well to the CBT were all also people who were on antidepressants at the start of the therapy. I guess such analysis would be a 'responder analysis'?

But analysing that correlation between the antidepressants and the success of the CBT wasn't part of the original design, and you didn't therefore design the numbers and everything else in order to make those results statistically valid. Then, if you find something that looks significant - like only the people on antidepressants got better from CBT - you might falsely think that was significant, due to misunderstanding of randomness and statistics.

Therefore, the "inappropriate measures" shouldn't be taken in the first place, otherwise you'll end up drawing unjustified conclusions.

That's a very rough paraphrase of the sense I get from the abstract...and that this argument has been used to remove measures that would be 'inappropriate'. And of course it's just another convenient coincidence that the hypotheses we would form to explain these results are precisely the details of responder analysis that must not be measured...
There's a phrase "post-hoc analysis" that would seem to describe what you're suggesting, which can be problematic as you astutely point out.
However, I'm not sure that's what is being discussed in the abstract referred to.
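To illustrate the general worry with a toy example (mine, not taken from the Senn paper): if you run enough unplanned subgroup comparisons on data where the treatment does nothing, some will look "significant" purely by chance.

```python
import math, random

random.seed(1)

n = 40           # a small, unplanned subgroup per arm
tests = 20       # twenty post-hoc comparisons, all on null data
spurious = 0

for _ in range(tests):
    a = [random.gauss(0, 1) for _ in range(n)]   # both arms drawn identically:
    b = [random.gauss(0, 1) for _ in range(n)]   # the "treatment" does nothing
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)                        # known unit variance -> z-test
    if abs(diff / se) > 1.96:                    # "significant" at the 5% level
        spurious += 1

print(f"{spurious} of {tests} null comparisons looked significant")
# expect about 1 in 20 purely by chance
```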
 

Dolphin

Senior Member
Messages
17,567
Clinical significance

The Minimum Important Difference

Single-anchor methods generally aim to establish differences in score on the target instrument that constitute trivial, small but important, moderate, and large changes in QOL. However, they generally put great emphasis on a threshold that demarcates trivial from small but important differences: the minimum important difference (MID). One popular definition of the MID is the smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient's (health care) management.[31]

Several factors have made the concept of MID useful. First, it ties the magnitude of change to treatment decisions in clinical practice. Second, the smallest important difference one wishes to detect helps with the study design and choice of sample size; this definition also links to a crucial decision in trial design. Third, it emphasizes the primacy of the patient's perspective and implicitly links that perspective to that of the physician. Since discussions of the ethics of clinical care increasingly emphasize shared decision making, this link is useful. Finally, the concept appears easily understood by clinicians and investigators (although there is little experience with patients).

A limitation of this definition of MID is that it does not explicitly address deterioration. One way to address this problem would be to modify the definition as follows: the MID is the smallest difference in score in the domain of interest that patients perceive as important, either beneficial or harmful, and which would lead the clinician to consider a change in the patient's management.

An alternative to labeling a change as being of minimum importance is to think of it as subjectively significant.[32] This latter term emphasizes that one can have an important deterioration and an important improvement. It also makes explicit that the meaningfulness of change over time is based entirely on the patient's self-assessment of the magnitude of change. Thus, the term subjectively significant is congruent with the concept that QOL is a subjective construct and that the prime assessor of QOL status and change in that status is not an observer, but the patient.
The last bit of this makes me think that if they're going to report who went up by 8 points on the SF-36 PF scale and 2 points on the Chalder Fatigue Scale (Likert scoring), they should also give information on who went down by the smallest amount that would be considered significant, or simply by the same amounts (8 and 2), rather than only those who went down by 20 points or more on the SF-36 PF subscale on two separate occasions.
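To make that concrete, here's a rough sketch (my own, not the trial's method) of applying the same thresholds in both directions; the 8-point and 2-point figures are the ones above, and scores are treated here as recoded so that higher = better:

```python
SF36_MID = 8   # SF-36 PF (higher = better)
CFQ_MID = 2    # Chalder Fatigue Scale, Likert scoring (recoded so higher = better)

def classify(baseline, final, mid):
    change = final - baseline
    if change >= mid:
        return "improved"
    if change <= -mid:
        return "deteriorated"          # judged by the SAME threshold
    return "no important change"

print(classify(40, 50, SF36_MID))      # improved
print(classify(40, 32, SF36_MID))      # deteriorated
```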
 

oceanblue

Guest
Messages
1,383
Location
UK
The last bit of this makes me think that if they're going to report who went up by 8 points on the SF-36 PF scale and 2 points on the Chalder Fatigue Scale (Likert scoring), they should also give information on who went down by the smallest amount that would be considered significant, or simply by the same amounts (8 and 2), rather than only those who went down by 20 points or more on the SF-36 PF subscale on two separate occasions.

Excellent point! 'significantly worse' should be measured in exactly the same way as 'significantly better' - otherwise the authors are just cherry-picking what suits their arguments.
 

oceanblue

Guest
Messages
1,383
Location
UK
So...if you do a study to examine some major effect, and get the right numbers of people to examine that effect - say, 640 people in 4 groups - then you've got your overall, large-scale study design, designed in advance to have the right statistical properties to give meaningful results (allegedly).

Therefore, the "inappropriate measures" shouldn't be taken in the first place, otherwise you'll end up drawing unjustified conclusions.

I can see what you're saying, but I don't think that's actually the case.

You're right that the trial is designed so that it will give statistically significant results for the primary outcomes (at least the original ones, not the new ones they switched to!).

However, there are separate statistical tests that will be applied to each of the additional measures, and it's implicitly accepted that the sample size may turn out not to be big enough for some of these measures to reach significance - those measures will then have to be dropped. Also, they explicitly made the CBT/GET groups bigger than statistically required, to help with the significance of the additional measures.

It's also not quite the same as post-hoc testing, in that the researchers specified in advance which measures/predictors they were looking at and how they would analyse them. Mind you, they did that for the primary measures too, and still changed their minds afterwards.
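For what it's worth, a back-of-envelope version of the sample-size logic (the standard two-arm formula; the effect sizes are made up purely for illustration):

```python
import math

def n_per_arm(effect_size, z_alpha=1.96, z_beta=0.84):
    # ~5% two-sided significance, ~80% power, standardised effect size
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_arm(0.40))   # 98 per arm: an outcome with a decent-sized effect
print(n_per_arm(0.25))   # 251 per arm: smaller effects on secondary measures
                         # need bigger groups, hence over-recruiting
```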
 

oceanblue

Guest
Messages
1,383
Location
UK
But another point about this is that when a trial is first approved, it has to go through peer review to be accepted for funding by the MRC. Would those peer reviewers have been happy, for example, that a Physical Functioning (questionnaire) score of 60 would be seen as acceptable for normal functioning, when participants could enter with that score? Similarly for the fatigue criteria? And for lots of the other measures used? i.e. would it have been approved for taxpayers' money!? Maybe that's a different angle to bring up in advocacy, for anybody so inclined.

I also think there is a reasonable chance that, when it was first approved by peer review for funding, actometers were outcome measures.

i.e. at the start, would it have been considered a good use of £5m of taxpayers' money to get the data everyone has been presented with, including goalposts that were moved?

Agreed, but perhaps just as important is the role of the Trial Steering Committee, who are supposed to be independent of the trial and whose job it is to keep the trial honest. I'm sure they are supposed to approve any significant changes to the protocol. I think this committee has important questions to answer:
  1. which changes to protocol did it approve?
  2. for each change, what was the rationale for approving them?
  3. when did it approve them? Before data analysis started?
  4. why didn't it publish the approved changes and rationale for them?
Without this, it looks like the Trial authors have just been doing post-hoc surgery on the results, possibly with the blessing of the Trial Steering Committee.
 

oceanblue

Guest
Messages
1,383
Location
UK
"At the top of the London criteria form it states: Criteria 1 to 4 must be met for a diagnosis of ME to be made." Great find, Doogle.

To be fair, under point 5 it does say "nb this means if any depressive or anxiety disorder is present, the London criteria are not met".
 

anciendaze

Senior Member
Messages
1,841
mea culpa

I think you've picked up the scoring incorrectly.
Thank you for correcting that. I did indeed. At that time I did not have the searchable version of the documents, and lacked the stamina and memory to swallow the whole protocol plus questionnaires. I will modify my original post to warn readers.

I've mentioned the problem I have reading leaden prose where every word has to be checked for peculiar definition. The numbers reflect another problem. It is very hard to discern what, if anything, they mean without digging unusually deep, and I have a special distaste for meaningless numbers. The characteristics which bother me are typical signs of researchers who are trying to obscure results. In my case they succeeded in creating confusion about relating scores at the end of the trial to those at the beginning. Had there been detailed data to follow I would not have blundered so badly, but we don't have much in the way of examples to show how they treated individual scores.

I stand by my statements about natural bounds and intervention to avoid crossing bounds biasing results. It is possible the better results of GET were due to the greater concern for adverse outcomes.

A change in scoring between the beginning and end of the study is problematic in any case. The lack of objective data from actometers at the end of the study seems an unusually blatant example. It is true these were not named as essential to the protocol. The money, however, was spent, and patients inconvenienced, at the beginning. Criteria for measuring "improvement" were officially stated in terms of reduction of measures of fatigue and increased measures of activity. Once you realize the objective measures show patients in their 20s, 30s and 40s, moving like spavined octogenarians you have to wonder about those other marvelous numbers.

What strikes me is that the big gain for CBT appears to be entirely subjective. This makes sense when you understand that CBT concentrates on changing beliefs. By the only objective measure presented, CBT was a little less effective than specialist care without CBT. Please correct me if this is wrong.
 

Dolphin

Senior Member
Messages
17,567
A change in scoring between the beginning and end of the study is problematic in any case. The lack of objective data from actometers at the end of the study seems an unusually blatant example. It is true these were not named as essential to the protocol. The money, however, was spent, and patients inconvenienced, at the beginning. Criteria for measuring "improvement" were officially stated in terms of reduction of measures of fatigue and increased measures of activity. Once you realize the objective measures show patients in their 20s, 30s and 40s, moving like spavined octogenarians you have to wonder about those other marvelous numbers
I like that phrasing/word. You've a good way with words.

What strikes me is that the big gain for CBT appears to be entirely subjective. This makes sense when you understand that CBT concentrates on changing beliefs. By the only objective measure presented, CBT was a little less effective than specialist care without CBT. Please correct me if this is wrong.
Yes, no statistically significant difference on the 6-minute walking test, with the SMC group doing a tiny bit better (although the difference is so tiny it's probably best not to say they did better, unless one adds a qualifier to show there really wasn't much difference).
 

anciendaze

Senior Member
Messages
1,841
This is really part of the previous post I forgot. I'm posting it separately because of the delay.

The significance of the missing data from actometers goes beyond possible changing criteria. There is a real question of effort going into participation in the study being offset by reductions in activity elsewhere. You need something more than subjective measures based on suspect memory when beliefs are officially being manipulated as the primary purpose of therapy.

My blunder above illustrates a constant problem for people with illness dealing with psychologists. You can confidently move ahead, making serious mistakes which will be used as proof of illness, or you can carefully check for errors before saying anything. I will freely admit to having cognitive deficits. When I attempt to compensate for them, I run into other problems seized on by psychologists: "lacks self-confidence", "obsessive concern over errors". If you play by those rules, you can't win. There seems to be far more concern for protecting therapist egos from patients than vice versa.

This does not mean there is a conspiracy aimed at control of the world, just that people in professions remain people. Organizations are vulnerable to precisely the sort of defects apparent in this case, even if they have nothing to do with medicine.

I am in the unusual position of having letters on file from three psychiatrists saying I am coherent, well-informed, rational and responsible, but pessimistic. They admit some of my pessimism has been justified by experience. They recommended me for unusual responsibilities where erring on the side of caution was desirable.
 

anciendaze

Senior Member
Messages
1,841
Once you realize the objective measures show patients in their 20s, 30s and 40s, moving like spavined octogenarians you have to wonder about those other marvelous numbers.
I like that phrasing/word. You've a good way with words...
Originally used by P.G. Wodehouse to describe his appearance on television. I only steal from the best.
 

Dolphin

Senior Member
Messages
17,567
Norwegian population data

Just following up on the point below: I requested the SDs for the four subgroups and, in particular, "No disease/current health problem".
Unfortunately, this is the response:
Dear <name>. This work was part of my PhD-thesis and published several years ago. In order to respond to your request would take some time which I do not have at the moment.
Best regards
Jon Håvard Loge

Anyway, I still think people can use this data.

=========Below is a repeat from before=========

J Psychosom Res. 1998 Jul;45(1 Spec No):53-65.

Fatigue in the general Norwegian population: normative data and associations.
Loge JH, Ekeberg O, Kaasa S.

Department of Behavioural Sciences in Medicine, University of Oslo, Norway. j.h.loge@medisin.uio.no

Abstract
Population norms for interpretation of fatigue measurements have been lacking, and the sociodemographic associations of fatigue are poorly documented. A random sample of 3500 Norwegians, aged 19-80 years, was therefore investigated. A mailed questionnaire included the fatigue questionnaire (11 items) in which the sum score of the responses (each scored 0, 1, 2, 3) is designated as total fatigue (TF). Sixty-seven percent of those receiving the questionnaire responded. Women (TF mean=12.6) were more fatigued than men (TF mean=11.9), and 11.4% reported substantial fatigue lasting 6 months or longer. TF and age were weakly correlated (men: r=0.17; women: r=0.09). No firm associations between fatigue and social variables were found. Disabled and subjects reporting health problems were more fatigued than subjects at work or in good health. Fatigue is highly prevalent in somatic and psychiatric disorders, but is often neglected. This national representative sample provides age- and gender-specific norms that will allow for comparisons and interpretations of fatigue scores in future studies.

PMID: 9720855 [PubMed - indexed for MEDLINE]
Somebody sent me the full text (actually more than one person did. :thumbsup: ).

The average for the whole group was 12.2 (SD: 4.0). So that would give a threshold of 16, not 18, if the same mean + 1 SD approach were bluntly applied (12.2 + 4.0 = 16.2, rounded down).
However, the group included people over the age of 60, people off sick due to illness, etc.

If one breaks it down, the figures are:

Health condition                      TF (mean)
No disease/current health problem     11.2
Past or current disease               12.1
Current health problem                12.5
Disease and current health problem    14.2

(Unfortunately they don't give SDs - I plan to write.)

Remember that a neutral value is 11 - that's if one answered “same as usual” to each of the 11 questions.
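As a sanity check on that arithmetic (mean + 1 SD is my assumption about how a cut-off like 18 would be derived):

```python
# Sanity check on the numbers above; mean + 1 SD is an assumed derivation rule.
mean_tf, sd_tf = 12.2, 4.0     # whole Norwegian sample (Loge et al. 1998)
print(mean_tf + sd_tf)         # 16.2 -> a blunt cut-off of 16, not 18

neutral = 1 * 11               # "same as usual" on all 11 items scores 1 each
print(neutral)                 # 11
```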

Explanation:

The questionnaire further included items about past or current diseases (five items) and current health problems (nine items). Past or current diseases included hypertension, myocardial infarction, heart failure, cancer, and diabetes. Current health problems included chronic allergy, low back pain, visual impairments, chronic skin problems, chronic lung problems, deafness or hearing problems, functional impairment in leg or arm, and other chronic health problems. Based on the responses to these 14 items, the sample was divided into four groups (1: no disease or current health problem; 2: disease but no current health problem; 3: no disease but current health problem; 4: disease and current health problem).

The highest scores were for the following group:

Disablement benefit    Raw score    Adjusted for age
Women                  14.6         14.7
Men                    15.7         15.5

They use the same definition of caseness as the protocol paper (but not the final paper): >3 on the bimodal scale (a "neutral" score on this scale is 0). (They make an error in one place and say >4, but say >3 in two places, which is the usual formulation.)

They explain where the caseness definition came from:
The cut-off (4 or higher) is based on a validation study in which the FQ was compared with the question on fatigue in the Revised Clinical Interview Schedule (CIS-R) [20].
20. Chalder T, Berelowitz G, Pawlikowska T, Watts L, Wessely S, Wright D, Wallace EP. Development of a fatigue scale. J Psychosom Res 1993;37:147–153.
So Trudie Chalder, one of the principal investigators of the PACE Trial, decided in the final paper not to use the definition of caseness she herself had "validated", and which they had also set in the protocol paper.
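In code terms, the caseness definition they validated (and which the protocol set) is simply this (my sketch):

```python
# Bimodal Chalder score runs 0-11; "caseness" = a score of 4 or more (i.e. > 3).
def fatigue_case(bimodal_total):
    return bimodal_total > 3

print(fatigue_case(3), fatigue_case(4))  # False True
```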
 

Dolphin

Senior Member
Messages
17,567
Transcript of Lancet TV interview with Trudie Chalder

Lancet TV have an interview with Trudie Chalder at http://bit.ly/f29zEX i.e.
http://download.thelancet.com/flatc...-up-pages/popup_video_S0140673611600962a.html

A transcript, made by a family member of a pwME, is below; they said it could be circulated. They decided to leave in the "erm"s.

----------

Ciaran Flannery: I'm here with Trudie Chalder from King's College London to discuss some of the implications of the PACE trial. Thanks for joining us Trudie. Now why has this area been so controversial in the past?



Chalder: I think it's probably been controversial for a number of reasons I think erm quite a few patients have had errm 'difficulties' in that they may have may gone to their GP for example and asked for some specific advice about what to do about their symptoms or they may have gone to see any health professional and been given the advice to take up exercise and of course just given that kind of bland advice without any encouragement or help or support over a period of time it can actually make people feel worse. But I think when graded exercise therapy and CBT are carried out in a very measured way with the support of a therapist the effect is quite different.



Ciaran Flannery: And how important is the opinion of patient groups and how confident are you that you can bring them on board with this?



Chalder: Erm I think that we've been working very closely with the patient organisations over a number of years, both in terms of this trial and in our clinic and erm my experience, my personal experience and I think the experience of people on the team, is that people erm in the patient organisations have an open mind and that they're willing to work with us in developing treatments they've been involved in erm the treatments that we offered in the context of this trial.



Ciaran Flannery: So what now, how easy is it going to be now to roll out CBT and GET on a wider scale?



Chalder: I think there's definitely erm people around who are delivering these treatments already, there's a clinical group who get together to discuss new treatments that are coming out and they make sure that clinicians working in the field are in fact delivering those potentially effective interventions. So I think the will is there.



Ciaran Flannery: Well Trudie thanks very much for joining us.



Chalder: Thank you!
 
Back