Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


The P2P Draft Systematic Review Is Up

medfeb

Senior Member
Messages
491
Also, remember that the published response rates (i.e. the percentage of participants who responded to treatment and whose improvement met a minimal clinically useful threshold), ignoring all the weaknesses of the trial, were between 11% and 15% after treatment with CBT/GET. So it's hardly a treatment to recommend for all ME patients, especially when so many harms have been reported for CBT/GET, and when the trial recruited patients with unexplained chronic fatigue. Also, severely affected patients were excluded from the trial.
Good points, Bob


The fact that participants were later subgrouped using Fukuda is problematic (i.e. the Fukuda subgroup was not necessarily representative of a Fukuda cohort) because participants were previously filtered using the Oxford recruitment criteria.
I agree. The approach to characterizing patients as Fukuda CFS is flawed for a couple of reasons. In addition to what you said, the 2013 Recovery paper also:
a) referenced the Reeves criteria, not Fukuda 1994
b) stated they only required the symptoms to be present for the previous week, not the previous month
c) is unclear whether this was a clinical assessment by a doctor or a paper exercise from a list of criteria
d) does not state that they applied the more restrictive exclusion criteria of Fukuda
The assessment of ME by the London criteria has its own set of problems.

Regarding nuanced scientific points and strong advocacy…
To me, the strongest scientific and logic-based argument against the P2P evidence review is that they have treated all definitions as equivalent - as an equally valid representation of the same clinical entity constructed around the concept of medically unexplained chronic fatigue.

Then, because there is no diagnostic standard for this disease against which to compare diagnostic methods, they allow any of the "ME/CFS" definitions to stand as the standard against which they then compare the diagnostic method. Having excluded CPET itself from consideration, they state that there is no diagnostic standard, but then they draw conclusions from treatment studies that used those same definitions - all while ignoring the differences between them.

Thoughts?
 

Dolphin

Senior Member
Messages
17,567
I agree. The approach to characterizing patients as Fukuda CFS is flawed for a couple of reasons. In addition to what you said, the 2013 Recovery paper also:
a) referenced the Reeves criteria, not Fukuda 1994
They referenced Reeves et al. (2003). They're different from Reeves et al. (2005), the so-called empiric criteria.
The Reeves et al. (2003) criteria are not hugely different from the Fukuda criteria.
 

medfeb

Senior Member
Messages
491
Thank you, user9876

Thus I conclude (differently to the PACE people but I think justified by what is and isn't published) that given a group of people with unexplained fatigue who meet the Oxford criteria CBT/GET/APT make little real difference.

I agree. The problem, as we all know, is that those consuming that study, like P2P, fail to understand the population differences - in addition to failing to appreciate that the treatment made little real difference, the quality issues, etc.

What I would hate is for us to say we only care about this group of people who are ill but that group whose illness is equally unexplained don't fit our pattern so let's ignore them (or it must be psychological for them). We need to be pushing for research that leads to understanding.

I couldn't agree more. At the same time, I think the approach of grouping them all together is the wrong way to help anyone.

In a commentary on the Empirical study that increased prevalence 10-fold, Peter White said:
“Our current criteria for diagnosing CFS are arbitrary, and we need to widen the net to capture all those people who become so chronically tired and unwell that they can't live their lives to their full potential."

That's so crazy to me. Whether in research or clinically, given the ill-defined symptom of fatigue and the state of our current medical knowledge, sticking all "chronically tired" people into the same clinical entity is guaranteed to keep us from delivering proper medical care or ever learning anything about the range of illnesses encompassed by that scientifically questionable bucket.
 

medfeb

Senior Member
Messages
491
They referenced Reeves et al. (2003). They're different from Reeves et al (2005), the so-called empiric criteria.
Reeves et al. (2003) are not a huge difference from the Fukuda criteria.

Thanks, Dolphin,

My understanding was that Reeves 2003 reported on the establishment of the empirical criteria and the methods/instruments to be used by it while the Reeves 2005 paper was the first study to use those criteria. The 2005 paper states that the objective of the study was to implement recommendations from the group whose work led to the 2003 paper.

Have you heard that the methods used in the 2005 paper were changed from what the 2003 paper laid out? I'll also dig further
 

Valentijn

Senior Member
Messages
15,786
What I would hate is for us to say we only care about this group of people who are ill but that group whose illness is equally unexplained don't fit our pattern so let's ignore them (or it must be psychological for them).
Agreed ... every illness should be investigated. But two likely distinct illnesses shouldn't be grouped together in the first place, and then they certainly shouldn't have the results of the likely majority with one illness applied to a minority with a very different illness.

"Fatigue" research is not relevant to defining and treating ME. It's about as useful as grabbing 640 people off the street at random and extrapolating the results to every person on the planet who has one of the same diseases as those subjects.

If someone is not studying ME/CFS, but rather just fatigue, they should not be calling it ME or CFS. And if they're studying CDC CFS without PEM, then there needs to be a pretty plain statement that the results cannot be extrapolated to ME patients. But what we get are nonsensical groupings where the results of CF patients are extrapolated to apply to CFS patients (because that's what Oxford calls CF patients), and of course ME doesn't really exist as far as the psychobabblers are concerned and is just the label for CFS used by people who hate psychology.

The research needs to be clear and honest, and needs clear and honest subgroupings if we're going to throw in a bunch of patients with very different symptoms into the same study. I had a rather nasty neurologist assure me that the "symptoms don't matter" when I argued that the PACE results weren't applicable to me because I don't fulfill the Oxford criteria. This sort of mixed grouping of illnesses creates exactly that sort of confusion, and it is extremely problematic when the advice can result in patients seriously harming themselves.

There needs to be rigorous research for ME patients, which is ONLY including ME patients. And there also needs to be research for other chronic fatigue patients who have similar symptoms to each other. There is NO need to combine those two groups, and it is impeding progress and harming patients when they do so.
 

user9876

Senior Member
Messages
4,556
If someone is not studying ME/CFS, but rather just fatigue, they should not be calling it ME or CFS. And if they're studying CDC CFS without PEM, then there needs to be a pretty plain statement that the results cannot be extrapolated to ME patients. But what we get are nonsensical groupings where the results of CF patients are extrapolated to apply to CFS patients (because that's what Oxford calls CF patients), and of course ME doesn't really exist as far as the psychobabblers are concerned and is just the label for CFS used by people who hate psychology.

The research needs to be clear and honest, and needs clear and honest subgroupings if we're going to throw in a bunch of patients with very different symptoms into the same study. I had a rather nasty neurologist assure me that the "symptoms don't matter" when I argued that the PACE results weren't applicable to me because I don't fulfill the Oxford criteria. This sort of mixed grouping of illnesses creates exactly that sort of confusion, and it is extremely problematic when the advice can result in patients seriously harming themselves.

There needs to be rigorous research for ME patients, which is ONLY including ME patients. And there also needs to be research for other chronic fatigue patients who have similar symptoms to each other. There is NO need to combine those two groups, and it is impeding progress and harming patients when they do so.

My point is that until we understand mechanism, we don't really know how to subgroup. It may be that PEM defines a subgroup, but it may not. As to who has similar and dissimilar symptoms, that could depend on what is a primary or secondary effect of a given mechanism.

I don't think it is impeding progress to look at a large set of patients and ask what types of mechanism could be hypothesized, and hence what we would look for. But groupings may be based on reaction to B-cell depletion or, say, how microglia act, rather than on the predominant symptom that a patient describes. I worry that having arbitrary groupings based on clustering of symptoms could lead to us missing stuff.

When you say "There is NO need to combine those two groups, and it is impeding progress and harming patients when they do so", I think what we should be saying is that there is no need for trials like PACE that potentially harm patients (whatever the group) and don't enhance our understanding of any potential mechanisms of any version of ME.

To me, one of the reasons that the IoM effort on diagnostic guidelines is a complete waste of money is that we don't yet understand how the different MEs work. But I am hopeful we will be more enlightened over the next few years.
 
Last edited:

Bob

Senior Member
Messages
16,455
Location
England (south coast)
The approach to characterizing patients as Fukuda CFS is flawed for a couple of reasons. In addition to what you said, the 2013 Recovery paper also:
a) referenced the Reeves criteria, not Fukuda 1994
b) stated they only required the symptoms to be present for the previous week, not the previous month
c) is unclear if this was a clinical assessment by a doc or a paper exercise from a list of criteria
d) does not state that they applied the more restricted exclusion criteria of Fukuda
The assessment of ME by London criteria has its own set of problems
Indeed.

Except... Sorry, I made a mistake in my previous post... The participants were subgrouped using the Reeves criteria and the London ME criteria. I've just had a look at the paper, and Fukuda was not used in the PACE trial to subgroup. (I've amended my earlier post.)
As you say, there are well known problems with the Reeves criteria, and they've been disowned by the CDC.
 
Last edited:

Dolphin

Senior Member
Messages
17,567
Thanks, Dolphin,

My understanding was that Reeves 2003 reported on the establishment of the empirical criteria and the methods/instruments to be used by it while the Reeves 2005 paper was the first study to use those criteria. The 2005 paper states that the objective of the study was to implement recommendations from the group whose work led to the 2003 paper.

Have you heard that the methods used in the 2005 paper were changed from what the 2003 paper laid out? I'll also dig further
The Reeves et al. (2003) criteria are an update of the Fukuda criteria. I don't find them particularly controversial. People like Lenny Jason were co-authors.

The Reeves et al. (2005) criteria have been described by the CDC as an operationalisation of the Reeves et al. (2003) criteria, or of the Fukuda et al. (1994) criteria. Bill Reeves did some very odd things in the particular questionnaires he used and the thresholds he picked - I don't think those should be blamed on the Reeves et al. (2003) criteria in particular.

The Reeves et al. (2003) and Reeves et al. (2005) criteria are quite different. I don't think it is helpful to combine them as one. Bill Reeves' influence is much more on the Reeves et al. (2005) criteria than on the Reeves et al. (2003) criteria, and I think it is better to use "Reeves criteria" for the Reeves et al. (2005) criteria, or at least to specify what one means.
 
Last edited:

Bob

Senior Member
Messages
16,455
Location
England (south coast)
To me, the strongest scientific and logic-based argument against the P2P evidence review is that they have treated all definitions as equivalent - as an equally valid representation of the same clinical entity constructed around the concept of medically unexplained chronic fatigue.
I agree. The CFS and ME definitions have come about for good reason, after many years of observation of patients. The various criteria might not be perfect, but they have been used extensively in research and to define patient cohorts in many countries. Apart from the Oxford criteria, all the CFS and ME criteria define more than chronic fatigue; even the CDC's Fukuda criteria do. At the very least, the P2P process needs to decide whether it is investigating unexplained chronic fatigue (UCF) or CFS, to be clear about which it is referring to, and not to conflate the two. If they insist on including UCF, then the UCF research outcomes should be carefully distinguished from CFS research.
 
Last edited:

user9876

Senior Member
Messages
4,556
That's so crazy to me. Whether in research or clinically, I'd think that the approach of sticking all "chronically tired" people in the same clinical entity based on the ill-defined symptom of fatigue plus the state of our current medical knowledge is guaranteed to keep us from delivering proper medical care or ever learning anything about the range of illnesses that are encompassed by that scientifically questionable bucket.

I guess I have a belief that we need to be pushing to define the different buckets based on a better understanding of the biology rather than clustering of symptoms. We may get surprises.

In terms of attacking PACE, I think the methodology is so poor and the reporting of results so poor that this is the argument to use. How can anyone take a trial's results seriously when it defines recovery using thresholds at or below the entry criteria - especially when the authors are so wound up about people asking for the data, for the measures defined in the protocol, or even about whether the trial steering committee approved their recovery definition?

I think allowing the suggestion that PACE worked on a subset of people is quite dangerous, since it allows the work to continue and lets doctors push any patient (whether in some subgroup or not) into GET, which could harm them. Reading between the lines of the PACE results, the treatments didn't help anyone, so we should say that and not give them wriggle room.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
The Reeves et al. (2003) criteria are an update of the Fukuda criteria. I don't find them particularly controversial. People like Lenny Jason were co-authors.

The Reeves et al. (2005) criteria have been described by the CDC as an operationalisation of the Reeves et al. (2003) criteria, or of the Fukuda et al. (1994) criteria. Bill Reeves did some very odd things in the particular questionnaires he used and the thresholds he picked - I don't think those should be blamed on the Reeves et al. (2003) criteria in particular.

The Reeves et al. (2003) and Reeves et al. (2005) criteria are quite different. I don't think it is helpful to combine them as one. Bill Reeves' influence is much more on the Reeves et al. (2005) criteria than on the Reeves et al. (2003) criteria, and I think it is better to use "Reeves criteria" for the Reeves et al. (2005) criteria.
That's interesting, to read about the distinction between the two. Thanks Dolphin.

Even so, Reeves 2003 has generally been lumped together with Reeves 2005 in people's minds, and neither version of the Reeves criteria (2003 or 2005) is in general use. So perhaps the argument that they are not widely used, and are widely considered redundant, can be used for both?

I have always wondered why the PACE trial references Reeves 2003 and not 2005.
 

Dolphin

Senior Member
Messages
17,567
Dolphin said:
The Reeves et al. (2003) criteria are an update of the Fukuda criteria. I don't find them particularly controversial. People like Lenny Jason were co-authors.

The Reeves et al. (2005) criteria have been described by the CDC as an operationalisation of the Reeves et al. (2003) criteria, or of the Fukuda et al. (1994) criteria. Bill Reeves did some very odd things in the particular questionnaires he used and the thresholds he picked - I don't think those should be blamed on the Reeves et al. (2003) criteria in particular.

The Reeves et al. (2003) and Reeves et al. (2005) criteria are quite different. I don't think it is helpful to combine them as one. Bill Reeves' influence is much more on the Reeves et al. (2005) criteria than on the Reeves et al. (2003) criteria, and I think it is better to use "Reeves criteria" for the Reeves et al. (2005) criteria.

That's interesting, to read about the distinction between the two. Thanks Dolphin.

Even so, Reeves 2003 has generally been lumped together with Reeves 2005 in people's minds, and neither version of the Reeves criteria (2003 or 2005) is in general use. So perhaps the argument that they are not widely used, and are widely considered redundant, can be used for both?

I have always wondered why the PACE trial references Reeves 2003 and not 2005.
I don't think they should be lumped together in people's minds. The specific criticisms of the Reeves et al. (2005) criteria, which are terrible, don't generally apply to the Reeves et al. (2003) criteria.

The Reeves et al. (2005) criteria have only been used by the CDC (and by Lenny Jason when he was criticising them).

The Reeves et al. (2003) criteria have been used by other researchers. For example, I saw Jonathan Kerr use them. One can see the papers that reference them here: http://scholar.google.com/scholar?cites=3088783591160031520&as_sdt=2005&sciodt=0,5&hl=en. There are 287 of them.

I don't think it's a strong point to say that the Reeves et al. (2003) criteria have not been widely used, as they have been used a bit by others and are not that different from the Fukuda criteria.
 

Dolphin

Senior Member
Messages
17,567
An example of a problem with Reeves et al. (2005):
People can satisfy it if they score low on just one of four of the eight SF-36 subscales. One of these is the Role Emotional subscale. So, for example, Leonard Jason talked about somebody with major depressive disorder scoring 100/100 on SF-36 physical functioning (meaning they are not limited at all, even in strenuous activities such as sports) who could nevertheless qualify as impaired because they report difficulties due to emotional reasons.
 

medfeb

Senior Member
Messages
491
Thanks for all this, everyone. Much appreciated.

There needs to be rigorous research for ME patients, which is ONLY including ME patients. And there also needs to be research for other chronic fatigue patients who have similar symptoms to each other. There is NO need to combine those two groups, and it is impeding progress and harming patients when they do so.

It seems to me that in the P2P response, we can and should attack PACE on the specific quality issues highlighted above, while also calling out the points that Valentijn made about the muddling of the definitions in general and its impact on research and clinical care.

Can I ask another question? Is there a simple story with the London criteria that were used in PACE? I've tried to figure that one out and always end up in a deep dark hole. Not trying to stir up a rat's nest, but if there were some simple explanation, I'd appreciate it.
 

Dolphin

Senior Member
Messages
17,567
Can I ask another question? Is there a simple story with the London criteria that were used in PACE? I've tried to figure that one out and always end up in a deep dark hole. Not trying to stir up a rat's nest, but if there were some simple explanation, I'd appreciate it.
Ellen Goudsmit said they didn't use the correct version. They used a shortened version from the Task Force Report (1994).

I don't think it should be seen as a good definition of M.E., at least in terms of how it was used in the PACE Trial.

One couldn't satisfy the ME criteria in the trial if one had a psychiatric disorder. 300 of the 640 participants were adjudged to have a psychiatric disorder, and 329 people satisfied the ME criteria. That means that 97% (329/340) - virtually everyone - of those who satisfied the Oxford criteria and didn't have a psychiatric disorder were adjudged to have M.E.
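For anyone who wants to verify the arithmetic, it can be checked in a couple of lines (the figures are those quoted above from the PACE papers; the variable names are just mine):

```python
# Figures as quoted above from the PACE trial reports
total_participants = 640    # all met the Oxford criteria
with_psych_disorder = 300   # adjudged to have a psychiatric disorder
met_london_me = 329         # satisfied the (modified) London ME criteria

# The ME criteria as applied excluded anyone with a psychiatric disorder,
# so the relevant denominator is the psychiatric-disorder-free subset.
eligible = total_participants - with_psych_disorder   # 340
share = met_london_me / eligible

print(f"{met_london_me}/{eligible} = {share:.1%}")    # prints: 329/340 = 96.8%
```

So "virtually everyone" is right: of the Oxford participants without a psychiatric disorder, about 97% were classed as having M.E.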
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I suspect we are focusing too much on diagnostic criteria. The results of the PACE trial, and indeed of all CBT/GET studies, are poor, even though they had very biased cohorts (and Oxford is the most heterogeneous definition - essentially just idiopathic chronic fatigue of long duration). That deserves more attention: issues with ranges, non-publication of data, pathetic objective measures showing bad results, inappropriate use of statistics (a total noob error), and so on. USE the PACE data to show the pathetic results, and then ask how this is acceptable. I really object to a "normal" threshold that would count most 80-year-olds as normal.
 

Dolphin

Senior Member
Messages
17,567
I suspect we are focusing too much on diagnostic criteria. The results of the PACE trial, and indeed of all CBT/GET studies, are poor, even though they had very biased cohorts (and Oxford is the most heterogeneous definition - essentially just idiopathic chronic fatigue of long duration). That deserves more attention: issues with ranges, non-publication of data, pathetic objective measures showing bad results, inappropriate use of statistics (a total noob error), and so on. USE the PACE data to show the pathetic results, and then ask how this is acceptable. I really object to a "normal" threshold that would count most 80-year-olds as normal.
Yes, tend to agree: the results in the PACE trial for objective measures were so poor that the criteria point is not so important. Also, other crazy things can be pointed to e.g. their definition of normal, recovery, etc.
 

medfeb

Senior Member
Messages
491
Yes, tend to agree: the results in the PACE trial for objective measures were so poor that the criteria point is not so important. Also, other crazy things can be pointed to e.g. their definition of normal, recovery, etc.

Yes, I do agree that the poor quality and lack of effect are the paramount issues, for PACE specifically and for the CBT/GET studies in general, and should be front and center. But in thinking about the overall response to P2P, I also think that the failure to ask whether all the definitions represent the same cohort of patients is a serious flaw that needs to be highlighted as well.

For Question 1 on diagnostic methods, the inclusion criteria are "symptomatic adults (aged 18 years or older) with fatigue". Because there is no diagnostic gold standard, the review allows each definition to stand as its own standard against which they evaluate the various diagnostic methods, and then draws conclusions across definitions. (As an aside, CPET is mentioned, but as a challenge, not a diagnostic method; the report draws conclusions about SF-36 and MFI-20, not about CPET. Snell's "Discriminative" study was excluded, as were a number of orthostatic and immunological marker studies.)

Deep in the details section, the report does acknowledge the diagnostic mess and the impact of overly broad definitions, stating:
We elected to include trials using any pre-defined case definition but recognize that some of the earlier criteria, in particular the Oxford (Sharpe, 1991) criteria, could include patients with 6 months of unexplained fatigue and no other features of ME/CFS. This has the potential of inappropriately including patients that would not otherwise be diagnosed with ME/CFS and may provide misleading results.

But then the analysis assesses treatment methods in studies that use those problematic definitions and diagnostic methods. I think that's an important wedge to be leveraged, both overall and especially when the report then goes on to rank Oxford studies like PACE as good.

On a separate note... one thing I didn't expect was for the report to call out the Brurberg prevalence review and its recommendation that Fukuda should be used (ES-26, 74-75). Hard to tell where that will go, but Fukuda has its own issues, so it's worth countering that discussion. Finally, I can't find it right now, but I think I also remember that the report mentioned Brurberg's discussion about using treatment response as an alternative way to validate a diagnostic method. I do not remember the report specifically mentioning Brurberg's conclusion that PACE had demonstrated that Oxford, Fukuda (Reeves 2003), and London criteria patients had similar responses to treatment. But I think it's worth proactively pointing out the flaws in using treatment response to demonstrate equivalency of cohorts in such a field.

Edits - I meant no diagnostic standard in the third paragraph
 
Last edited: