• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

PACE raw data available

Justin30

Senior Member
Messages
1,065
These lies should not go without repercussions. This study influenced the whole world: it harmed patients, created false hope, traumatized many, delayed research, stifled funding, and ruined lives and families. It led people to experiment and throw away what I'd bet amounts to billions of dollars, wasted millions in research money, and cost economies, I'd bet, trillions of dollars, not just through PACE itself but through all the other studies that built on its findings.

It is genuinely horrific when you think about the modern-day farce they created around these therapies.

An awful act against humanity, when so much pointed to physical disease right from the get-go.
 

Sam Carter

Guest
Messages
435
This is interesting; can anyone verify?
SF-36 mean at 52 weeks for those who completed the 52-week walk = 55.17318
SF-36 mean at 52 weeks for those who did not complete the 52-week walk = 46.59766

If true, this suggests that those with missing data for the walk follow-up were significantly less fit, and so would likely have brought the mean down had they completed the walk.

also

Doctor global impression mean
(completed walk) 2.668142
(missed walk) 3.388158

Patient global impression mean
(completed walk) 2.947939
(missed walk) 3.622378
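For anyone re-running these numbers from the released spreadsheet, the group-mean comparison can be sketched like this (the column and flag names below are hypothetical placeholders, not the actual PACE variable names):

```python
import statistics

def group_means(rows, value_key, completed_key):
    """Split rows by whether the 52-week walk was completed and
    return the mean of `value_key` for each group, skipping missing values."""
    done = [r[value_key] for r in rows if r[completed_key] and r[value_key] is not None]
    missed = [r[value_key] for r in rows if not r[completed_key] and r[value_key] is not None]
    return statistics.mean(done), statistics.mean(missed)

# Tiny illustrative rows, not real trial data.
rows = [
    {"sf36_52wk": 60, "walk_52wk_done": True},
    {"sf36_52wk": 50, "walk_52wk_done": True},
    {"sf36_52wk": 45, "walk_52wk_done": False},
]
done_mean, missed_mean = group_means(rows, "sf36_52wk", "walk_52wk_done")
```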

I get the same values @wdb.
 

Sam Carter

Guest
Messages
435
Notes on Chalder fatigue questionnaire scoring

(These figures need to be checked; I also plead guilty to utterly shameless self-plagiarism.)

Of the 177 participants who met the post-hoc recovery threshold for fatigue at week 52 (CFQ Likert <= 18), 45 had a CFQ bimodal score >= 6, making them fatigued enough to re-enter the PACE trial, and 88 had a bimodal score >= 4, which is the accepted definition of abnormal fatigue.

Therefore, if a person met the PACE trial post-hoc recovery threshold for fatigue at week 52 they had approximately a 50% chance of still having abnormal levels of fatigue and a 25% chance of being fatigued enough to enter the PACE trial. (These are very similar ratios to those found in the FINE trial.)

The bimodal score and Likert score of 22 participants moved in opposite directions between baseline and week 52 i.e. one scoring system showed improvement whilst the other showed deterioration.

A healthy person should have a Likert score of 11 out of 33, yet 48 participants recorded a Likert CFQ score of 10 or less at week 52 (i.e. they reported less fatigue than a healthy person) and 3 participants recorded a Likert CFQ score of 0, indicating confusion about the wording of the questionnaire.

Data from the PACE and FINE trials strongly suggest that it is not safe to use both bimodal and Likert scoring in the same trial, that the PACE trial post-hoc fatigue threshold is far too lax, and (probably) that Likert scoring should not be used at all.
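For readers unfamiliar with the two schemes: the CFQ has 11 items, each answered on a 0-3 scale; Likert scoring sums the raw responses (range 0-33), while bimodal scoring collapses each item to 0/1 (responses 0-1 map to 0, responses 2-3 map to 1) and sums those (range 0-11). A minimal sketch, with made-up responses, of how the two can move in opposite directions between baseline and follow-up:

```python
def likert(items):
    """Likert score: sum of raw 0-3 responses (range 0-33)."""
    return sum(items)

def bimodal(items):
    """Bimodal score: each item collapsed to 0/1 (range 0-11)."""
    return sum(1 for x in items if x >= 2)

# Baseline: most items severe (3), a few mild (1).
baseline = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1]
# Week 52: the severe items ease to 2, but the mild items worsen to 2.
week52 = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
```

With these invented responses the Likert score improves from 27 to 22 while the bimodal score worsens from 8 to 11: exactly the kind of divergence described above.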
 
Messages
2,158
Notes on Chalder fatigue questionnaire scoring

(These figures need to be checked; I also plead guilty to utterly shameless self-plagiarism.)

Of the 177 participants who met the post-hoc recovery threshold for fatigue at week 52 (CFQ Likert <= 18), 45 had a CFQ bimodal score >= 6, making them fatigued enough to re-enter the PACE trial, and 88 had a bimodal score >= 4, which is the accepted definition of abnormal fatigue.

Therefore, if a person met the PACE trial post-hoc recovery threshold for fatigue at week 52 they had approximately a 50% chance of still having abnormal levels of fatigue and a 25% chance of being fatigued enough to enter the PACE trial. (These are very similar ratios to those found in the FINE trial.)

The bimodal score and Likert score of 22 participants moved in opposite directions between baseline and week 52 i.e. one scoring system showed improvement whilst the other showed deterioration.

A healthy person should have a Likert score of 11 out of 33, yet 48 participants recorded a Likert CFQ score of 10 or less at week 52 (i.e. they reported less fatigue than a healthy person) and 3 participants recorded a Likert CFQ score of 0, indicating confusion about the wording of the questionnaire.

Data from the PACE and FINE trials strongly suggest that it is not safe to use both bimodal and Likert scoring in the same trial, that the PACE trial post-hoc fatigue threshold is far too lax, and (probably) that Likert scoring should not be used at all.

Hi Sam, I agree it's logical nonsense.

I'd go further and say it's not safe to use any scale designed by psychologists to measure anything to do with physical illness, and especially not this idiotic Chalder scale. It seems completely illogical in its design.

It's a list of different sentences describing aspects of fatigue, and the more of them you think apply to you, the higher your score. So someone who is mildly fatigued but thinks all the descriptors apply at least a bit can score the same on the bimodal scale as, or higher than, someone with extreme fatigue who can barely move but doesn't think one or more of the descriptors make sense to them.

It doesn't in any way measure the degree of fatigue, and can therefore not register increases in fatigue. If you already score the maximum on each descriptor, it can't go higher, so it has a very strong ceiling effect.

Once I'd seen the Chalder fatigue scale, I completely lost any confidence that psychs know what they are doing in trying to measure anything! They just seem to make up ridiculous scales so they will have lots of data to analyse and can pretend what they do is science.
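The ceiling effect just described is easy to demonstrate with made-up responses (11 items scored 0-3, as in the CFQ):

```python
def likert(items):
    """Likert score: sum of 11 item responses, each 0-3 (range 0-33)."""
    return sum(items)

# A severely affected patient already at the maximum on every item...
severe = [3] * 11
# ...then deteriorates further, but each response is capped at 3,
# so the score cannot register the change.
worse = [min(x + 1, 3) for x in severe]
```

Both score 33; genuine worsening is invisible once the patient is at the ceiling.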
 

Daisymay

Senior Member
Messages
754
Hi Sam, I agree it's logical nonsense.

I'd go further and say it's not safe to use any scale designed by psychologists to measure anything to do with physical illness, and especially not this idiotic Chalder scale. It seems completely illogical in its design.

It's a list of different sentences describing aspects of fatigue, and the more of them you think apply to you, the higher your score. So someone who is mildly fatigued but thinks all the descriptors apply at least a bit can score the same on the bimodal scale as, or higher than, someone with extreme fatigue who can barely move but doesn't think one or more of the descriptors make sense to them.

It doesn't in any way measure the degree of fatigue, and can therefore not register increases in fatigue. If you already score the maximum on each descriptor, it can't go higher, so it has a very strong ceiling effect.

Once I'd seen the Chalder fatigue scale, I completely lost any confidence that psychs know what they are doing in trying to measure anything! They just seem to make up ridiculous scales so they will have lots of data to analyse and can pretend what they do is science.

Well said.

How much of psychological and psychiatric research is done by questionnaires of one sort or another, I have no idea, does anyone know?

Would these sorts of problems not be prevalent in a great deal of that kind of "research"?
 

soti

Senior Member
Messages
109
Would these sorts of problems not be prevalent in a great deal of that kind of "research"?

oh yes... just ask Lenny Jason and his team who are spending a lot of effort trying to measure what such questionnaires are actually doing, instead of just assuming that they look reasonable so they must be effective!
 

A.B.

Senior Member
Messages
3,780
I have noticed something interesting. In the first 144 patients, walking test data is often absent. Then this pattern suddenly stops and there is only an occasional absence of walking test data.

Assuming the patients are in chronological order, this could be the result of changes to the entry criteria while the trial was under way.

Alternative explanation: patients are ordered by clinic, and treatment in one or more clinics was particularly detrimental.

Any other explanations?

This is something that should be clarified in further PACE investigations.

Malcolm Hooper wrote this on changes to entry criteria:

The Investigators diluted the entry criteria after the PACE Trial had commenced by moving the SF-36 (physical function score) goalposts and by including people who had previously undergone CBT/GET and had initially been rejected as PACE Trial participants. It cannot be denied that the PACE Trial Investigators changed the design of the Trial as they went along, which must surely undermine the reliability of all conclusions to be drawn from the data, not least because the first tranche of participants met different entry criteria from those who were recruited later. This can only mean that, because the entry criteria had been diluted, people in the second and subsequent tranches were less ill and are thus more likely to respond favourably to the interventions.
http://margaretwilliams.me/2011/hooper-reply-to-mrc-rawle_26jan2011.pdf

If they really included patients with prior CBT/GET exposure this is a huge source of bias by the way.
 

A.B.

Senior Member
Messages
3,780
Percent of first 144 patients missing walking test data at 52 weeks: 52.08
Percent of the remaining 496 patients missing walking test data at 52 weeks: 20.77

Could someone confirm this please?
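Those percentages are consistent with 75 of the first 144 and 103 of the remaining 496 patients missing the walk data. Note that the counts 75 and 103 are back-calculated from the rounded percentages, not taken from the dataset:

```python
# Back-calculating the implied counts from the reported percentages.
first_missing, first_total = 75, 144
rest_missing, rest_total = 103, 496

first_pct = round(100 * first_missing / first_total, 2)  # 52.08
rest_pct = round(100 * rest_missing / rest_total, 2)     # 20.77
```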
 
Messages
2,158
I noticed this bunching of non walkers too. Since we have no information on the ordering of the data, we can only speculate about the reason, which while it might be 'fun' is ultimately pointless.
I can invent lots of reasons, from random variation to one centre being more relaxed about pushing patients to do it, or even discouraging them out of sympathy, etc., or another centre being more insistent that they must do it. We will never know.
 

A.B.

Senior Member
Messages
3,780
I noticed this bunching of non walkers too. Since we have no information on the ordering of the data, we can only speculate about the reason, which while it might be 'fun' is ultimately pointless.

It's quite possible that more PACE data will have to be released as the authors come under increasing pressure. We need to ask for information that would let us determine why patients are distributed like this.
 
Messages
2,087
It's quite possible that more PACE data will have to be released as the authors come under increasing pressure. We need to ask for information that would let us determine why patients are distributed like this.
Does anyone know the next steps in getting the rest of the data ?
 
Does anyone know the next steps in getting the rest of the data ?
FOI requests will be needed again, unless QMUL do the improbable and make all the data freely available. But, obviously, the process should be easier now that so many of the arguments used to dismiss requests have been legally proven invalid. (I have no legal training; this is just my layperson's understanding.)
 

A.B.

Senior Member
Messages
3,780
The SF-36 physical function and Chalder Fatigue (Likert) scores at 0 weeks aren't different between the first 144 patients and the rest.

Which seems to suggest that the difference in outcome (whether they performed the walking test at 52 weeks or not) is not related to illness severity.

I have not looked at the other variables. I looked at SF-36 physical function and Chalder Fatigue at 0 weeks because this data is always present.
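A baseline comparison like this can be sketched with Welch's t statistic; the function below is a self-contained illustration with invented scores (in practice one would point scipy.stats.ttest_ind(..., equal_var=False) at the real baseline columns):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Invented baseline scores for illustration only, not real trial data.
first_144 = [35, 40, 30, 45, 38, 42]
remainder = [36, 41, 31, 44, 39, 40]
t = welch_t(first_144, remainder)
# |t| close to zero here, consistent with "no baseline difference".
```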
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Crashing atm so going to have to be vague, but Unger was talking about standardizing and refining questionnaires for assessing ME in her Solve webinar, so at least the CDC have some level of appreciation of the issue.
Wasn't there talk of using PROMIS as a standardized measure of problems across many diseases?
 
Wasn't there talk of using PROMIS as a standardized measure of problems across many diseases?
Unger talked about using PROMIS in the MCAM ME studies;
Dr Unger said:
Comparing measurement tools
We have looked at how well measurement tools compare to each other, and we've added an instrument called the PROMIS instrument. These instruments were supported by an NIH development effort to create and validate scales for use across a wide variety of chronic diseases and conditions in the general population, specifically to allow comparisons to be made directly; that means scores for ME/CFS patients can be compared to illnesses that are better understood by clinicians, and again it gives another window onto the illness for clinicians who are not familiar with this illness. So MCAM uses PROMIS measures for pain, fatigue and sleep, and we found that there was a good correlation with other validated measures of these domains, and it demonstrated illness severity, as shown in the next table.

PROMIS T-Scores [Mean (SD)]
This shows the PROMIS T-Scores; T-Scores range from 0 to 100. The top row gives the values for Fatigue, Sleep Disturbance, Sleep Related Impairment, Pain Interference, and Pain Behavior for the MCAM study, and then the measures found in other studies of Chronic Pelvic Pain, Spinal Cord Injury, Muscular Dystrophy, Post-Polio Syndrome and Multiple Sclerosis are shown; you can see that the scores for ME/CFS patients are the same as or higher than those in other illnesses.
 
Isn't Unger the alcohol and addiction researcher?
Not sure what she might have been involved in previously but currently
Unger webinar transcript said:
Greetings, this is Zaher Nahle from the Solve ME/CFS Initiative welcoming you to our webinar series. We have a special guest with us today, we have Dr. Elizabeth Unger MD PhD from the Centers for Disease Control and Prevention or the CDC. Dr. Unger will teach us about the ongoing research on ME/CFS at the CDC, Dr. Unger is currently the Chief of the Chronic Viral Diseases Branch in the Division of High Consequence Pathogens and Pathology at the CDC.