
Long-term follow-up of multi-disciplinary outpatient treatment for CFS/ME (Derby, UK)

Valentijn

Here are the main results:
[Attachment 10460: table of the main results, including the per-time-point sample sizes]
Very weird that the size of the "sample" of patients dips so much at post-treatment, then rises even higher than the pre-treatment sample at follow-up.

There are a couple of pretty obvious problems with that: they aren't showing results from the full group of patients in the study, and we know that at least some of the patients included at one time point weren't part of the "sample" at all three time points. This seems pretty screwed up, since the entire group of patients in a study is expected to be the "sample" representing patients with the same disease.

And this leads to another issue: did they cherry-pick the members of the sample groups used to get the results at the various time points? That can be done randomly, yet still deliberately, by repeatedly drawing a random subset of patients until the authors get the collection of individual participant data they want. So you run the random selection 1000 times, and eventually you get your ideal outcome, even if it doesn't reflect the typical outcome for participants.
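To make that concrete, here's a toy Python sketch (every number below is invented; none of this is from the study): even when a treatment does nothing on average, drawing random subsets over and over and keeping the best-looking one manufactures an apparent improvement.

```python
# Purely illustrative: repeatedly draw random "samples" and keep the best.
# All numbers are invented; no data here comes from the actual study.
import random
import statistics

random.seed(42)

# Hypothetical cohort of 300 patients whose true mean change is zero,
# i.e. the treatment does nothing on average.
population = [random.gauss(0, 10) for _ in range(300)]

best_mean = float("-inf")
for _ in range(1000):                       # run the random selection 1000 times
    subset = random.sample(population, 66)  # a "sample" of 66 patients
    best_mean = max(best_mean, statistics.mean(subset))

print(f"true mean change over all 300:     {statistics.mean(population):+.2f}")
print(f"best subset mean after 1000 draws: {best_mean:+.2f}")
```

With a spread of 10 points and subsets of 66, the best of 1000 draws will typically show a spurious "gain" of 3-4 points out of literally nothing.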

Or is there another explanation for that weirdness?
 

Simon

Nothing much doing - maybe they should be worried?

The sample size at follow-up was 98 for a response rate of 32.6%.
Basically, this kills the study: with most of the data missing, there's not much you can conclude. As @Valentijn said, there's something odd going on, as they had even more data missing at post-treatment (66 responses, just 22%, for fatigue/function).

In any case, the gains from baseline (no control group) are very similar to those seen in the PACE SMC control group, indicating that the therapy probably isn't doing a huge amount (actually, even if all the gains are down to the therapy, they're hardly setting the world on fire).
Chalder Fatigue Scale
Baseline = 25.2
Post-assessment = 19.7 [-5.5 vs baseline]
Follow-up = 19.9 [-5.3]
(PACE SMC: baseline 28.3 → 23.8 at 12-month follow-up [-4.5])

SF-36 Physical Functioning
Baseline = 43.85
Post-assessment = 52.06 [+8.2]
Follow-up = 56.0 [+12.2]
(PACE SMC: baseline 39.2 → 50.8 at 12-month follow-up [+11.6])
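Just to show my working, the bracketed changes above are plain subtraction (scores copied from the tables, nothing else assumed):

```python
# Check the bracketed changes: post and follow-up scores minus baseline.
scores = {
    "Chalder fatigue": {"baseline": 25.2,  "post": 19.7,  "followup": 19.9},
    "SF-36 physical":  {"baseline": 43.85, "post": 52.06, "followup": 56.0},
}
for scale, s in scores.items():
    print(f"{scale}: post {s['post'] - s['baseline']:+.2f}, "
          f"follow-up {s['followup'] - s['baseline']:+.2f} vs baseline")
# Chalder fatigue: post -5.50, follow-up -5.30 vs baseline
# SF-36 physical: post +8.21, follow-up +12.15 vs baseline
```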

Whereas the abstract says:
"Linear mixed modelling showed that fatigue, physical functioning, and depression significantly improved, although the improvement was reduced for fatigue, physical functioning, and pain at follow-up."
My guess is that the linear mixed model made the difference (rather than any change to the data) - but that suggests these modest gains weren't actually maintained at follow-up.
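For anyone curious what that kind of analysis looks like, here's a rough sketch on fabricated data (this is not the authors' code or data, just the general technique the abstract names). A linear mixed model uses every available score, so a patient missing one time point still contributes their other measurements - which is how modelled estimates can end up telling a different story from the raw per-time-point means.

```python
# Sketch of a linear mixed model on fabricated longitudinal fatigue data.
# Assumed effect sizes and missingness; nothing here is from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for patient in range(60):
    intercept = rng.normal(25, 3)  # per-patient baseline fatigue
    for timepoint, effect in [("baseline", 0.0), ("post", -5.0), ("followup", -5.0)]:
        if timepoint != "baseline" and rng.random() < 0.3:
            continue  # mimic missing post/follow-up responses
        rows.append({"patient": patient, "timepoint": timepoint,
                     "fatigue": intercept + effect + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Fixed effect of time point, random intercept per patient; the model
# uses all available rows, including patients with a missing time point.
model = smf.mixedlm("fatigue ~ C(timepoint, Treatment('baseline'))",
                    data=df, groups=df["patient"])
print(model.fit().summary())
```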

The abstract said:
Conclusions: The targeted multi-disciplinary service appeared to be at least somewhat effective long-term
That's about the very best you can say, not the least, and I'd like to think that those who run the clinic are concerned their therapy doesn't appear to be delivering a whole lot for patients. Though maybe with a 30% response rate they should just accept that they can conclude nothing at all, other than that they need to collect better data.
 

Valentijn

Basically, this kills the study: with most of the data missing, there's not much you can conclude.
I'm not sure the data is genuinely missing; rather, it sounds like it's just not being used. Using the word "sample" to describe the groups of patients makes it sound more deliberate. Samples are chosen, randomly or otherwise, and I'd expect different terminology to be used if they meant "these are the only responses we got for these time points".
 

Simon

I'm not sure the data is genuinely missing; rather, it sounds like it's just not being used. Using the word "sample" to describe the groups of patients makes it sound more deliberate. Samples are chosen, randomly or otherwise, and I'd expect different terminology to be used if they meant "these are the only responses we got for these time points".

The sample size at follow-up was 98 for a response rate of 32.6%.
The key phrase is 'response rate': I don't think their use of 'sample' is untoward here - they sent a questionnaire to 300 people and wound up with a sample of 98 responders. Unfortunately, such a sample is unlikely to be representative of the whole group.
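To see why, here's a toy non-response simulation (every assumption is mine, not the paper's): if patients who improved are more likely to return the questionnaire, the mean among the minority who respond overstates the true mean for the whole group.

```python
# Toy non-response bias simulation; all numbers are assumptions.
import random
import statistics

random.seed(1)

# Hypothetical true change scores for 300 patients, averaging ~2 points.
changes = [random.gauss(2, 8) for _ in range(300)]

# Assume the chance of returning the questionnaire rises with improvement.
def responds(change):
    p = min(max(0.20 + 0.03 * change, 0.0), 1.0)
    return random.random() < p

responders = [c for c in changes if responds(c)]

print(f"responders: {len(responders)} of 300")
print(f"true mean change (everyone):  {statistics.mean(changes):+.2f}")
print(f"mean change among responders: {statistics.mean(responders):+.2f}")
```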

And this leads to another issue: did they cherry-pick the members of the sample groups used to get the results at the various time points? ...

Or is there another explanation for that weirdness?
Looking at the results, in each case the highest figure is for follow-up - the responders to the questionnaire. Presumably they mailed everyone on a database. The figures for baseline are a bit lower, i.e. pre-existing baseline data is missing for a few patients, and lower still for post-treatment - but it's pretty normal for a study to have more data missing at post-treatment than at baseline. So it doesn't look to me like there's any cherry-picking going on.