2003 - A randomised controlled trial of a psycho-educational intervention to aid recovery in infectious mononucleosis

Esther12

Senior Member
Messages
13,774
Likes
28,359
I've now edited this first post to include some of the points made by others too.

http://www.kcl.ac.uk/content/1/c6/01/47/68/EBVRCT.pdf

edit: seems offline there, try here: https://docs.google.com/viewer?a=v&...7jV2DO&sig=AHIEtbQH6DoJAJZL7zayQ5eotg0DJoc_Kw

A randomised controlled trial of a psycho-educational
intervention to aid recovery in infectious mononucleosis

Bridget Candy, Trudie Chalder, Anthony J Cleare,
Simon Wessely, Matthew Hotopf*

I've seen this study often cited as evidence that it's how patients respond to their illness, rather than the infection itself, that is most important in determining levels of long-term disability.

It's a bit of a rubbish design, with only the intervention group getting therapist time while the control group just got a leaflet. I've seen these results being promoted as if they were really dramatic, but if you look at the differences, much of it could be explained by those whose fatigue improved in the 'intervention group' being more willing to fill in questionnaires at six months than those in the 'control group'. At 12 months, when both groups have more similar rates of return, the levels of fatigue reported are pretty similar:

At 12 months, differences between groups were more modest, and not statistically significant. This partly reflects reduced statistical power due to incomplete follow up. It also might reflect the natural history of IM related fatigue.

Considering this was not well controlled, and at 12 months there was no statistically significant difference between the levels of fatigue the two groups reported, I think it would be fair to laugh at anyone trying to present this study as really compelling evidence for anything.

They actually mentioned this problem in the paper:

This was a small study, and the estimate of treatment effect was imprecise. The follow-up rates were acceptable, but there were more incomplete data for the control group at 6 months. Unequal follow-up rates may explain the more modest differences between the intervention and control groups when methods are used to take account of missing data. Those who failed to complete questionnaires at follow up had fewer symptoms at baseline, and dropped out of the trial.
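The response-bias concern above can be illustrated with a toy calculation (all numbers here are hypothetical, chosen for illustration, not taken from the paper): if recovered patients in one arm are keener to return questionnaires than still-fatigued ones, the fatigue rate observed among responders will understate that arm's true rate, even when both arms have identical outcomes.

```python
# Toy illustration of differential response bias.
# All probabilities below are hypothetical, not from Candy et al.
def observed_fatigue_rate(true_rate, resp_fatigued, resp_recovered):
    """Fatigue rate seen among questionnaire responders when the
    probability of responding depends on the patient's outcome."""
    responders_fatigued = true_rate * resp_fatigued
    responders_recovered = (1 - true_rate) * resp_recovered
    return responders_fatigued / (responders_fatigued + responders_recovered)

TRUE_RATE = 0.40  # assume both arms truly have 40% prolonged fatigue

# Control arm: everyone equally likely to respond -> unbiased estimate.
control = observed_fatigue_rate(TRUE_RATE, 0.75, 0.75)

# Intervention arm: recovered patients a bit keener to return forms.
intervention = observed_fatigue_rate(TRUE_RATE, 0.60, 0.95)

print(f"control observed:      {control:.2f}")       # 0.40
print(f"intervention observed: {intervention:.2f}")  # ~0.30
```

With identical true outcomes, the intervention arm appears roughly ten percentage points better purely because of who bothered to send the forms back, which is the shape of the problem the authors themselves flag.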
I know Peter White strangely forgot to mention those problems when he discussed this study from around fourteen minutes in here: http://www.scivee.tv/node/6895

[Transcript] So let me just move on to an important question about prevention. Just one slide on this. This was a study done by a group of people at Kings. Where they just, a small study, looked at 69 patients with acute IM in a RCT comparing a brief rehabilitation of a nurse going in and giving them advice about getting back to normal activity in a safe and gradual way, compared with being given a leaflet about what mono does to you. And they found that those who had the brief intervention were half as likely, half as likely, a huge effect size, of having prolonged fatigue six months later.
White also cites this study in his presentation for the Gibson Parliamentary Group, which was looking into the research around ME/CFS, and which reported concerns about the links between the insurance industry and researchers (White being a prominent example of this). I wonder if their report would have been harsher had they not been misled about the value of psychosocial interventions:



www.erythos.com/gibsonenquiry/Docs/White.ppt

Seems a bit misleading to claim "Educational intervention, based on graded return to activity, halved the incidence of prolonged fatigue" considering that there was no statistically significant difference between the two groups at twelve months.

Chalder also cites this paper in this presentation here: http://www.mental-health-forum.co.uk/assets/files/11.20 Trudie Chalder FINAL 169FORMAT.pdf



Strangely her graph does not include the data from 12 months in. Purple did a more complete graph:



More useful, but rather less impressive.

That last Chalder presentation was from 2012. That this trial, with just 36 people in the therapy group and no statistically significant difference at 12 months between the group receiving therapist time and the group who just got a leaflet, is still being used by them to sell their expertise a decade after it was completed is indicative of the quality of evidence they have to support their claims.

It is even possible for the control group to be viewed as a nocebo:

The control group received a standardised fact-sheet about infectious mononucleosis, which gave no advice on rehabilitation

If they theorise that fear related to viral infection is a significant factor in CFS, they could have expected such a leaflet to have a negative effect (depending upon what exactly the leaflet said).

So many of their results just look like homeopathy to me - act nice to patients and get slightly better questionnaire results because i) those who are feeling better are more likely to feel grateful and so complete their forms and ii) people tend to try to be positive about those who they think have tried to help them.

And people wonder why patients don't trust White and Chalder to present the data from the PACE trial in a fair and reasonable manner.

PS: The more stuff I read from around the time I got ill, the more pissed off I get at the poor evidence base for the advice I was given. If they were that ignorant as to what I should be doing, they should have just been honest about it and left me to do whatever I thought was best, instead of incompetently managing the psychosocial setting of my illness and promoting 'positive' cognitions - bloody bastards.

PPS: In post #7 I post a link to a Chalder presentation where she has a graph for the data from this study, but has removed the data for after 6 months! It's so annoying to think that people are just going to trust her without looking things up.
 

Esther12

They do seem to have a lot of papers which show very little, but can be easily spun a certain way. I hate CFS.
 

Esther12

This presentation from White isn't referenced, but the talk of an 'educational intervention' for post-IM fatigue makes me think that he could be referencing this study (which showed no statistically significant difference between groups at 12 months).

Inquiry by the Parliamentary Group on scientific research into M.E.


https://docs.google.com/viewer?a=v&q=cache:LXG8qjonVxMJ:www.erythos.com/gibsonenquiry/Docs/White.ppt cartesian dualism cfs&hl=en&gl=uk&pid=bl&srcid=ADGEESjsNormgA-egflbV8BrhzzOR3RD7LbtXd8Zrx6AdSx3YMOZnZL1ayQCmYkk_jOHPvFdLYtivdbSwJnmJh1gtR-6jxCJDwvOulkofaD5v_qwNgjVBDv-zEkFWScBCvRic3QRK6hd&sig=AHIEtbR_l1Q0ZHLXM8DsoCdPJoRFo6IQvA



Slide 10

Post-IM fatigue

70% of GPs' only advice is to rest
Inactivity most replicated predictor of prolonged fatigue
Educational intervention, based on graded return to activity, halved the incidence of prolonged fatigue
I'd love to have the reference for this confirmed.

Edit: along with the Chalder presentation below, and more knowledge of the research from around this time, I'm really pretty confident that this was White misrepresenting the evidence to MPs investigating the problems around CFS.
 

Enid

Senior Member
Messages
3,309
Likes
868
Location
UK
But this was 2003, when the psychos were desperately trying to hold onto their place in ME/CFS (it was then). Should we be bothered about it now?
 

Esther12

But this was 2003, when the psychos were desperately trying to hold onto their place in ME/CFS (it was then). Should we be bothered about it now?
The timing of the paper fits in pretty well with the presentation.... and I think that all examples of results being spun are important, particularly if it was in order to mislead a parliamentary committee.

Also, I've only recently been reading CFS stuff, so have a lot to go back over.

And I'm more interested in the politics of CFS than the science, so this could be more interesting to me than many others.

I get your point though - it's not really 'latest research'.
 

Esther12

Elsewhere I've seen this paper being cited as from 2004:

Candy B, Chalder T, Cleare A, Wessely S, Hotopf M. A randomised controlled trial of a psycho-educational intervention to aid recovery in infectious mononucleosis. Journal of Psychosomatic Research 2004; 57: 89-94.

And I think it must be the paper Chalder refers to in this presentation in slide 18 here:

http://www.mental-health-forum.co.uk/assets/files/11.20 Trudie Chalder FINAL 169FORMAT.pdf
 

Esther12

She cuts the data past six months, when the differences found become non-significant! The graph is only half there! Anyone want to bet on whether she explained to the audience that the response rates differed dramatically between the therapy and control groups at six months?

Chalder is such a wretched spin-meister.
 

Purple

Bundle of purpliness
Messages
489
Likes
742
Is it possible to see the paper so that we know how the graph continued?
 

Esther12

Hi Purple. Sorry, hadn't realised that the link in my first post has gone dead.

The paper is available here, but only includes the data, not a graph:

https://docs.google.com/viewer?a=v&...7jV2DO&sig=AHIEtbQH6DoJAJZL7zayQ5eotg0DJoc_Kw

Table 2 shows that at 12 months 8/25 patients in the intervention group who filled in their forms reported fatigue problems, and 10/24 in the control group did. A rather less impressive result than when you cut the data off at six months, when intervention had 9/34 and control had 14/26.
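Those counts can be sanity-checked with a quick two-proportion comparison. This is a rough normal-approximation sketch using only the standard library, not the analysis the paper itself used (and at these sample sizes an exact test would be more appropriate):

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

# Fatigue cases / questionnaires returned, intervention vs control.
p_6mo = two_proportion_p(9, 34, 14, 26)   # six-month data
p_12mo = two_proportion_p(8, 25, 10, 24)  # twelve-month data

print(f"6 months:  p ~ {p_6mo:.3f}")   # just under 0.05
print(f"12 months: p ~ {p_12mo:.3f}")  # nowhere near significance
```

The six-month difference scrapes under the conventional 0.05 threshold while the twelve-month one is nowhere near it, which is consistent with the paper's own caveat about unequal follow-up driving the apparent effect.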

Add in the fact that the control group got no therapist time - and a lack of therapist contact tends to lead to worse subjective questionnaire scores even for worthless treatments like homeopathy - and this looks like evidence of how worthless their psycho-educational intervention was. Yet it keeps being cited in presentations as compelling evidence of their expertise and the validity of their theories... That presentation was from 2012, and it's still the best she has!
 

Valentijn

Senior Member
Messages
15,786
Likes
45,664
Elsewhere I've seen this paper being cited as from 2004:

Candy B, Chalder T, Cleare A, Wessely S, Hotopf M. A randomised controlled trial of a psycho-educational intervention to aid recovery in infectious mononucleosis. Journal of Psychosomatic Research 2004; 57: 89-94.
They produced multiple papers from the same study. Other papers from that study, such as http://www.simonwessely.com/Downloads/Publications/CFS/175.pdf are in 2005. So she probably got her dates mixed up on the slide, because I think you cited the proper one.
 

Esther12

I wonder why Chalder failed to include the data from 12 months?

So many of their results just look like homeopathy to me - act nice to patients and get slightly better questionnaire results because i) those who are feeling better are more likely to feel grateful and so complete their forms and ii) people tend to try to be positive about those who they think have tried to help them.

That this trial with 36 people in the therapy group is still being used by them to sell their expertise a decade after it was completed is indicative of the quality of evidence they have to support their claims.
 

Valentijn

Purple sent me a graph of the data from Chalder's paper (thank you Purple):
The difference between the groups is much less impressive at 12 months, and it additionally calls into question whether the control and treated groups are at the beginning of trends that would see them end up at the same level.

You'd think by now the BPSers would know to avoid doing multiple followups of their victims :p
 

Esther12

at the beginning of trends that would result in them ending up at the same level.
With such small numbers involved, it could just be random which group did better.

Thanks again to Purple for doing that graph. Having the graph with the full data just above the one that Chalder chose to use in her presentation does a really good job of illustrating the problems CFS patients have with these sorts of biopsychosocial researchers.
 

Esther12

The Chalder graph is pretty annoying. It looks like such a strong result when the response-rate data is not provided (and of course the 12 month data). I can understand why people who do not check up on her think that she knows what she's doing. And White has been using this data to support his claims of expertise too.

And then people are baffled that CFS patients do not seem to trust the claims made about PACE.