Assessment of recovery status in chronic fatigue syndrome using normative data (incl. on PACE Trial)

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Abstract said:
A diagnosis of CFS excludes many chronic disabling illnesses present in the general population, and CFS cohorts can almost exclusively consist of people of working age; therefore, it is suggested that thresholds for recovery should not be based on population samples which include a significant proportion of sick, disabled or elderly individuals.

Wow, someone actually read the stuff we write and thought about it for at least 5 seconds!
 

Dolphin

Senior Member
Messages
17,567
(In case anyone missed it)
The journalist, David Tuller DrPH, has today posted a substantial piece on the PACE Trial:

TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study
http://www.virology.ws/2015/10/21/trial-by-error-i/

There's an introduction and summary at the start if you don't want to take on the whole thing.

It's being discussed in this PR thread:
http://forums.phoenixrising.me/inde...he-pace-chronic-fatigue-syndrome-study.40664/

ME Network have also posted their own summary piece:
http://www.meaction.net/2015/10/21/david-tuller-tears-apart-pace-trial/
 

anciendaze

Senior Member
Messages
1,841
Anyone remember when we first made an issue of the non-normal distribution of SF-36 physical function scores? (I've lost track of a great deal that went before, and this goes back before the change in forum software.) That problem was a straight mathematical question requiring no expertise in medicine or psychology. I recall providing links, quite far back, to other distributions with distinctly different properties.

There are a number of properties the authors simply assumed because they are convenient, and "everyone does this". The first is that the distribution is stable. In fact no one has ever addressed that question very seriously. If it is not stable the whole rationale for the study collapses.

Even if it is stable, there is no guarantee the study cohort will exhibit the kind of standard deviation expected and necessary to validate results. Standard deviation is the square root of variance, and the major tasks of parametric statistics revolve around "analysis of variance" (ANOVA). This is almost always predicated on the assumptions behind "normal" ("Gaussian") distributions. There are definitely other distributions around. (I believe I mentioned Rayleigh distributions, just as one example.)
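
To make that concrete, here is a minimal Python sketch using purely synthetic numbers (nothing from PACE or any real SF-36 dataset): it computes the sample standard deviation and runs a standard normality test on a Gaussian sample and on a Rayleigh sample of comparable scale.

    import numpy as np
    from scipy import stats

    # Synthetic samples only -- nothing here comes from PACE or any SF-36 data.
    rng = np.random.default_rng(0)
    gaussian = rng.normal(loc=50, scale=10, size=500)
    rayleigh = stats.rayleigh.rvs(scale=20, size=500, random_state=0)

    for name, sample in [("gaussian", gaussian), ("rayleigh", rayleigh)]:
        sd = sample.std(ddof=1)             # standard deviation = sqrt(variance)
        stat, p = stats.normaltest(sample)  # D'Agostino-Pearson test of normality
        print(f"{name}: sd = {sd:.1f}, normality p-value = {p:.3g}")

A small p-value for the Rayleigh sample is the test saying the Gaussian assumptions behind ANOVA cannot simply be taken for granted.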

My purely mathematical suspicions were aroused when I tried to fit the SF-36 scores in a paper referenced by the PACE authors (Chalder?) into a combination of normal distributions. (You would have to be blind to think a single normal distribution was sufficient, an assumption those authors slipped by their peers.) My problem was that I couldn't assign a reliable standard deviation (or variance). One component forming a "fat tail" might even look uniform between the cutoffs imposed by the way they collected data, which tells you nothing about how far the underlying distribution actually spreads; you could even argue its standard deviation was effectively infinite.
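
As a rough illustration of that fitting exercise, with made-up numbers standing in for the published scores, one can compare a single normal against a two-component Gaussian mixture and let the BIC say which description the data prefer:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical SF-36-like scores: a cluster near the ceiling plus a broad
    # lower "fat tail", clipped to the instrument's 0-100 range. Made-up
    # numbers, not the data from the Chalder paper or from PACE.
    rng = np.random.default_rng(1)
    scores = np.concatenate([
        rng.normal(90, 8, size=800),
        rng.uniform(20, 70, size=200),
    ]).clip(0, 100).reshape(-1, 1)

    for k in (1, 2):
        gm = GaussianMixture(n_components=k, random_state=0).fit(scores)
        print(f"{k} component(s): BIC = {gm.bic(scores):.0f}")

On data shaped like this the two-component fit wins easily, which is the formal version of saying a single normal distribution is not sufficient.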

There are distributions known to have no definite standard deviation. The analytical expressions for these can give infinite values, but any finite set of samples will always yield a number. A Cauchy distribution is mostly of academic interest, but quite a number of physical processes yield Lévy distributions. (Examples: Brownian motion, van der Waals profiles, scattering of light in turbid solutions or interstellar dust clouds.) You may apply parametric statistics to reduce normally-distributed instrumental errors associated with measuring such a process, but this won't tell you much of anything about the process itself.
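
A minimal sketch of that point, again with synthetic draws: the sample standard deviation of Cauchy (heavy-tailed, Lévy-type) draws never settles down as the sample grows, while the Gaussian one converges quickly.

    import numpy as np

    # The Cauchy distribution has no finite variance, so its sample standard
    # deviation is dominated by a few extreme draws and never converges.
    rng = np.random.default_rng(2)
    for n in (100, 10_000, 1_000_000):
        cauchy = rng.standard_cauchy(n)
        normal = rng.standard_normal(n)
        print(f"n = {n:>9}: Cauchy sd = {cauchy.std(ddof=1):12.1f}   "
              f"Gaussian sd = {normal.std(ddof=1):.3f}")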

If a distribution has the properties of a Lévy distribution, and the analytic standard deviation is undefined, then any standard deviation actually computed will be determined by the sampling cutoffs and the number of samples. This led to suspicions about selection effects leading to study cohorts much smaller than the number of patients referred for treatment, and to questions about how the bounds were determined. Ideally, investigation would have quickly identified the major means by which results were manipulated. Reality was different, and it took me a very long time to believe that the study conclusions were fabricated out of whole cloth. Part of this was due to my own limitations, but some must have been the result of deliberate obfuscation.
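
To see how much the reported number depends on the cutoffs rather than on the process, here is a small sketch that truncates the same heavy-tailed synthetic sample at different bounds:

    import numpy as np

    # Same heavy-tailed sample, different imposed cutoffs: the standard
    # deviation you end up reporting is set mainly by where you truncate and
    # how many samples survive, not by the underlying process.
    rng = np.random.default_rng(3)
    draws = rng.standard_cauchy(100_000)

    for bound in (10, 100, 1_000):
        kept = draws[np.abs(draws) < bound]
        print(f"cutoff +/-{bound:>5}: sd = {kept.std(ddof=1):9.2f}  "
              f"(n kept = {kept.size})")

The computed figure keeps growing as the cutoff widens, which is exactly why any single quoted standard deviation from such a process is suspect.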

My understanding of British English is incomplete, but I suspect you might say the MRC and the DWP paid for "a bespoke study design" with desired results, which the SMC then trumpeted. Any number of people and organizations should be shamed by this disgraceful episode.
 

Chrisb

Senior Member
Messages
1,051
anciendaze said:
My understanding of British English is incomplete, but I suspect you might say the MRC and the DWP paid for "a bespoke study design" with desired results, which the SMC then trumpeted. Any number of people and organizations should be shamed by this disgraceful episode.

To those familiar with the story of the "dodgy dossier", none of this will be a surprise. It is the way one expects government to be conducted.
 