• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

Leonard Jason's blog on the IOM criteria 3/4/2015

Nielk

Senior Member
Messages
6,970
http://blog.oup.com/2015/03/iom-chr...r&utm_medium=oupacademic&utm_campaign=oupblog

The Institute of Medicine (IOM) recently released their report regarding a new name (i.e., systemic exertion intolerance disease) and case definition for chronic fatigue syndrome (CFS). In brief, the IOM proposed that at least four symptoms needed to be present to be included in this new case definition: substantial reductions or impairments in the ability to engage in pre-illness levels of occupational, educational, social or personal activities; post-exertional malaise; unrefreshing sleep; and at least one of the two following symptoms: cognitive impairment or orthostatic intolerance.

Melvin Ramsay, the distinguished British physician who helped create the first diagnostic criteria for myalgic encephalomyelitis (ME), also specified four domains within the original ME case definition. However, in 1988, the Centers for Disease Control renamed this illness CFS, and expanded the case definition’s domains to eight. In addition to patient discontent over the name change to CFS, the muddled case definition had a number of problems as it is well known that as unexplained somatic symptoms increase, there is a greater likelihood of identifying individuals who have psychiatric comorbidities.
 

Roy S

former DC ME/CFS lobbyist
Messages
1,376
Location
Illinois, USA
"An alternative vision is still possible, if those in power are willing to bring all interested parties to the table, including international representatives, historians on the science of illness criteria, and social scientists adept at developing consensus. In a collaborative, open, interactive, and inclusive process, issues may be explored, committees may be charged with making recommendations, and key gatekeepers may work collaboratively and transparently to build a consensus for change. Involve all parties — the patients, scientists, clinicians, and government officials — in the decision-making process."
 

CBS

Senior Member
Messages
1,522
The IOM’s effort to dislodge chronic fatigue syndrome - Jason 3/4/15

Link - http://blog.oup.com/2015/03/iom-chr...r&utm_medium=oupacademic&utm_campaign=oupblog

If there are ambiguities with case definitions, there will be difficulties in replicating findings across different labs, estimating the prevalence of the illness, consistently identifying biomarkers, and determining which treatments help patients. To develop or validate a reliable case definition, we need to provide operationally explicit criteria for a case definition, and develop a consensus within the scientific community on the case definition.

<snip>

However, regarding their proposed case definition, there is considerable evidence that cognitive impairment should belong to the group of cardinal and required symptoms. Orthostatic intolerance doesn’t evidence prevalence rates as high as the other proposed core symptoms, nor is there any clear justification of requiring patients to have either cognitive impairment or orthostatic intolerance. (I do not know of any factor analytic study that has separated the symptoms into this particular structure.) There may be different groupings of patients (different features, levels of severity) and this was not adequately addressed. Empirical methods could have been employed to test the proposed classification system. It is unclear why the committee members limited themselves to surveying extant studies rather than testing out their proposed model with an actual data set, particularly as they proposed a specific constellation of symptoms that had not been previously investigated. It was also a mistake to apply identical diagnostic criteria to both children and adults. Finally, there is a clear need to exclude those who have a primary affective psychiatric disorder, as including these patients in the case definition would confound the interpretation of epidemiologic and treatment studies, and complicate efforts to identify biological markers for this illness.

There are steps that could remedy some of the above issues.

<snip>

Didn't see this posted anywhere on PR. I am quite sure that Jason knows more about the importance of diagnostic criteria, and has worked with more patient data, than the entire IOM put together.
 

CBS

Senior Member
Messages
1,522
Admittedly, I don't waste much time here, and I did search for this article by its title (The IOM’s effort to dislodge chronic fatigue syndrome) before posting my own thread, but seriously, one comment? Damn!
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I am in substantial agreement with Jason's views. He emphasizes the need for empirically validating a new definition. Now, while this has never been done initially, even for the CCC or ICC, a supposed evidence-based review should do so as a matter of course.

He also emphasizes stakeholder involvement. Everybody needs a seat at the table, and by table I don't mean the tiny one in the basement with the old broken chairs and a faulty video link to the main banquet hall.
 

user9876

Senior Member
Messages
4,556
I am in substantial agreement with Jason's views. He emphasizes the need for empirically validating a new definition. Now, while this has never been done initially, even for the CCC or ICC, a supposed evidence-based review should do so as a matter of course.

He also emphasizes stakeholder involvement. Everybody needs a seat at the table, and by table I don't mean the tiny one in the basement with the old broken chairs and a faulty video link to the main banquet hall.

I don't understand how you would go about validating a new definition when we have no idea of cause or how many different diseases and disease mechanisms are covered by ME. Basically what is a definition being validated against?
 

CBS

Senior Member
Messages
1,522
I don't understand how you would go about validating a new definition when we have no idea of cause or how many different diseases and disease mechanisms are covered by ME. Basically what is a definition being validated against?

Jason would be the perfect person to validate any new criteria. Community psychologists and epidemiologists create and validate symptom-based surveys all the time. Agreed, a biomarker would help, but even a biomarker would need to be validated in this group of patients, and that would be no small task.

There are many types of validity. http://kestrelconsultants.com/reference_files/Validating_Questionnaires.pdf

“Validity” is not an absolute quality. It’s a continuum, with a questionnaire being valid to a certain degree in certain circumstances, and researchers must decide (preferably before the validation study is run) what degree of validity is considered sufficient. The above categories also suggest that there are types of validity that relate to the internal validity of the questionnaire (are similar questions answered similarly), others that relate to the ability of the questionnaire to determine a given state in a patient (e.g., that it varies in alignment with the severity of the condition), and still others that involve the validity of comparing different groups on the basis of the questionnaire.

Each type of validity is distinct, meaning that a questionnaire can have one kind of validity but not another. Because of that, a questionnaire can never really be fully “validated.” It can only be validated for x patient population, under y conditions, and so forth. This implies that it may not be appropriate, for example, to use a lymphoma quality of life questionnaire in a melanoma study if the questionnaire hasn’t been validated for that particular population, unless it has been shown to be applicable to cancer patients generally.

Validation of SEID Dx criteria would be far from perfect, but without any knowledge of the performance characteristics over time, across groups (e.g., sudden versus gradual onset), or across different age groups, the entire enterprise is nothing more than a stab in the dark, with the potential for very significant and perhaps severe consequences for patients, as well as for any ongoing efforts that would need to assume consistent cohorts across past and future research.

Before she was appointed to the IOM committee, Nancy Klimas got it right when she said that the entire process needed to be put on hold for a year, at which time we would likely have far more information about the pathology underlying the disease.

Now everybody is in a huge rush, scared to death that this is our one last chance. My guess is that this was actually the CDC's (and the CAA/SCMI's) last chance to appear as though they had done anything of substance (before biomedical research was going to make this all irrelevant, and I suspect that, like Klimas, both those orgs knew it) when what they are doing is putting us all at serious risk with a new and unknown set of diagnostic criteria, diverting precious resources, and creating a false sense of panic in the patient population.
 
Last edited:

user9876

Senior Member
Messages
4,556
Jason would be the perfect person to validate any new criteria. Community psychologists and epidemiologists create and validate symptom-based surveys all the time. Agreed, a biomarker would help, but even a biomarker would need to be validated in this group of patients, and that would be no small task.

There are many types of validity. http://kestrelconsultants.com/reference_files/Validating_Questionnaires.pdf



Validation of SEID Dx criteria would be far from perfect, but without any knowledge of the performance characteristics over time, across groups (e.g., sudden versus gradual onset), or across different age groups, the entire enterprise is nothing more than a stab in the dark, with the potential for very significant and perhaps severe consequences for patients, as well as for any ongoing efforts that would need to assume consistent cohorts across past and future research.

Before she was appointed to the IOM committee, Nancy Klimas got it right when she said that the entire process needed to be put on hold for a year, at which time we would likely have far more information about the pathology underlying the disease.

Now everybody is in a huge rush, scared to death that this is our one last chance. My guess is that this was actually the CDC's (and the CAA/SCMI's) last chance to appear as though they had actually done something of substance (before biomedical research was going to make this all irrelevant, and I suspect that, like Klimas, both those orgs knew it) when what they are doing is putting us all at serious risk with a new and unknown set of diagnostic criteria, diverting precious resources, and creating a false sense of panic in the patient population.

None of that makes any sense to me. You have a population with a given symptom set and a classification technique to try to sort that population into groups. It may be that symptom sets form discrete clusters, in which case you can label the clusters, but this could be erroneous. If symptom sets have significant overlaps you are stuffed and have no hope. It seems to follow from the structure of the problem.
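To make the distinction concrete, here is a minimal sketch of the kind of check described above: cluster a patients-by-symptoms matrix and ask whether discrete groups fall out at all. The data and the choice of k-means are purely illustrative assumptions, not anything from the IOM report or Jason's work.

```python
# Hypothetical sketch: do symptom profiles form discrete clusters?
# Assumes a patients-by-symptoms 0/1 matrix; the data here is invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy data: 200 patients, 10 binary symptoms (replace with real survey data)
X = rng.integers(0, 2, size=(200, 10)).astype(float)

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
# Consistently high silhouette scores would suggest discrete clusters worth labelling;
# uniformly low scores suggest the overlapping case where clustering gives little to go on.
```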

What you describe as validation is more about whether a questionnaire acts as a classifier in the same way as a clinician. But this may well just follow from the structure of the design process.

There are also massive problems with internal validation. At one level, multiple equivalent questions should be asked because people aren't accurate in their responses (that was basically what Likert said). But at another level that rarely happens, as it means too long a survey. Then you have issues of how to combine results from multiple questions, which the medical profession don't seem to understand.
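For what it's worth, one standard internal-consistency check on a set of supposedly equivalent items is Cronbach's alpha. The sketch below uses invented data and a hand-rolled helper; it is only meant to show what "multiple equivalent questions" buys you statistically.

```python
# Rough sketch of an internal-consistency check (Cronbach's alpha) for a set of
# supposedly equivalent Likert-style items; the data here is invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))                      # what the items try to measure
items = latent + rng.normal(scale=0.8, size=(100, 4))   # four noisy versions of one question
print(round(cronbach_alpha(items), 2))  # values around 0.7 or higher are usually read as acceptable
```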

So I don't see how any diagnostic system can really be validated with no idea of mechanism (whether it's CDC, SEID, CCC, ICC, or ACC). We could compare them, but which is correct? We have no real way of judging.

Risks for patients come in two ways:
1) Patients with a different disease are misclassified. I would say this is mainly an issue of the clinical decision-making process rather than diagnostic criteria. Basically, decision makers often repeat decisions based on what is currently in their mind rather than going through all the valid hypotheses and judging them based on the available evidence. The less time available, the worse the effects, from what I remember of the psychological literature.
2) Patients with a disease don't get the treatment available, or get inappropriate treatment. There are no real evidence-based treatments for ME, although I think there are ones worth trialling. I think the name SEID was derived as a signal to doctors against using exercise therapies that appear to do the most harm to ME patients. If only for that reason, I quite like the name.
 

CBS

Senior Member
Messages
1,522
I agree that it is far from perfect (it does make sense) but the alternative is to do nothing (which is what has been proposed) and that's far worse.

None of the present or proposed criteria will likely have any impact on my current Dx or care but I suspect that it may for others.

The biggest risk, which you fail to address, is the impact on past and future research cohorts (I expect that what we'll have is investigators going back over databases and making a significant number of assumptions and calling that good - and we wonder why there's so little progress).
 

CBS

Senior Member
Messages
1,522
None of that makes any sense to me.

<snip>

What you describe as validation is more about whether a questionnaire acts as a classifier in the same way as a clinician. But this may well just follow from the structure of the design process.

Questionnaires and subjectively reported symptoms are quite similar. The process has multiple checks and simultaneously used approaches to minimize issues arising out of the process - as the link I posted states, you never rely on a single type of validity, as each is prone to its own types of bias.

The IOM can say all day long that SEID is an improvement because it isn't a diagnosis of exclusion, but without a biomarker you still have a diagnosis of exclusion, and you have to be careful not to draw in patients outside of your target group.

There are also massive problems with internal validation. At one level multiple equivalent questions should be asked because people aren't accurate in responses (that was basically what likert said). But at another level that rarely happens as it means too long a survey. Then you have issues of how to combine results from multiple questions which the medical profession don't seem to understand.

Actually, completed Dx criteria should have the smallest possible number of very specific questions, each with a known relationship to a particular symptom/component of the disease. You create the instrument from a massive number of questions, many asked in similar ways or designed to assess similar processes. Then you use analyses (typically some sort of regression or principal components analysis) to determine which version of each component adds the greatest amount of diagnostic accuracy (typically in a "stepped" process). Along the way you may tweak the way questions are asked in order to assess the effect.
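A minimal sketch of that "stepped" idea, under assumptions of my own: a pool of candidate questions, a clinician diagnosis as the outcome, and a greedy forward selection that keeps whichever item adds the most cross-validated accuracy at each step. The data, item count, and use of logistic regression are all hypothetical stand-ins.

```python
# Hedged sketch of stepped item selection from a large question pool.
# All data and names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_patients, n_items = 300, 40
X = rng.normal(size=(n_patients, n_items))          # candidate question scores
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n_patients) > 0).astype(int)  # clinician Dx

selected = []
for _ in range(5):                                   # keep at most 5 items
    best_item, best_score = None, -np.inf
    for j in range(n_items):
        if j in selected:
            continue
        cols = selected + [j]
        score = cross_val_score(LogisticRegression(), X[:, cols], y, cv=5).mean()
        if score > best_score:
            best_item, best_score = j, score
    selected.append(best_item)
    print(f"added item {best_item}, cross-validated accuracy {best_score:.2f}")
```

In practice the outcome would be a well-characterized reference diagnosis and the pool would be real survey items, but the selection logic is the same.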

As Jason stated in his blog post, one of the biggest problems with Fukuda (and the reason it pulls in so many primary depression patients) was the increase from four symptom groups (CCC) to eight.

So I don't see how any diagnostic system can really be validated with no idea of mechanism (whether it's CDC, SEID, CCC, ICC, or ACC). We could compare them, but which is correct? We have no real way of judging.

Doing nothing is not an improvement over what Jason is asking for.

Risks for patients come in two ways:
1) Patients with a different disease are misclassified. I would say this is mainly an issue of the clinical decision-making process rather than diagnostic criteria. Basically, decision makers often repeat decisions based on what is currently in their mind rather than going through all the valid hypotheses and judging them based on the available evidence. The less time available, the worse the effects, from what I remember of the psychological literature.
2) Patients with a disease don't get the treatment available, or get inappropriate treatment. There are no real evidence-based treatments for ME, although I think there are ones worth trialling. I think the name SEID was derived as a signal to doctors against using exercise therapies that appear to do the most harm to ME patients. If only for that reason, I quite like the name.

Fine, call it whatever you want in order to signal some important shift, just leave the Dx criteria alone until you know what you're doing with them.
 
Last edited:

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I don't understand how you would go about validating a new definition when we have no idea of cause or how many different diseases and disease mechanisms are covered by ME. Basically what is a definition being validated against?
You start by validating against existing case definitions. If, for example, the CCC and SEID create basically the same cohort, then you know they are essentially equivalent. If there are major issues, then these can be highlighted. This crosswise comparison has been done between definitions like Fukuda, Oxford, and the CCC.

With more time and funding you can take an established cohort, as is being done with the CDC multisite study or the Stanford cohort, and see what happens. This can then be compared to, for example, the CCC.
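As a rough illustration of that crosswise comparison, the sketch below applies two case definitions to the same cohort and reports the overlap and chance-corrected agreement. The criteria functions are placeholders I made up, not the real CCC or SEID rules, and the cohort is random.

```python
# Illustrative sketch of a crosswise comparison of two case definitions
# on the same cohort. The criteria functions are hypothetical stand-ins.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def meets_def_a(p: dict) -> bool:       # placeholder, not the actual CCC
    return p["pem"] and p["fatigue_months"] >= 6 and p["pain"]

def meets_def_b(p: dict) -> bool:       # placeholder, not the actual SEID
    return p["pem"] and p["unrefreshing_sleep"] and (p["cognitive"] or p["orthostatic"])

rng = np.random.default_rng(3)
cohort = [{k: bool(rng.integers(0, 2)) for k in
           ["pem", "pain", "unrefreshing_sleep", "cognitive", "orthostatic"]}
          | {"fatigue_months": int(rng.integers(0, 24))} for _ in range(500)]

a = np.array([meets_def_a(p) for p in cohort])
b = np.array([meets_def_b(p) for p in cohort])
print("both:", (a & b).sum(), "A only:", (a & ~b).sum(), "B only:", (~a & b).sum())
print("agreement (Cohen's kappa):", round(cohen_kappa_score(a, b), 2))
```

High overlap and high kappa would say the two definitions select essentially the same people; the interesting work is in characterizing the patients selected by one but not the other.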

Most of the arguments for and against SEID are being made without any investigation into the definition. Everyone could be off base for a whole variety of reasons.

In any case this is a specialty of Jason's. He has data sets he uses for precisely this purpose.

No definition will ever be perfect until we have reliable biomarkers.

If we simply assert that no definition can be validated, and accept it at that, we have no basis whatsoever for rejecting the SEID definition, and all debate is over. The reality is that no definition can be perfectly validated, and the closest that will come is after we can use biomarkers.
 
Last edited:

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Risks for patients come in two ways:
1) Patients with a different disease are misclassified. I would say this is mainly an issue of the clinical decision-making process rather than diagnostic criteria. Basically, decision makers often repeat decisions based on what is currently in their mind rather than going through all the valid hypotheses and judging them based on the available evidence. The less time available, the worse the effects, from what I remember of the psychological literature.
2) Patients with a disease don't get the treatment available, or get inappropriate treatment. There are no real evidence-based treatments for ME, although I think there are ones worth trialling. I think the name SEID was derived as a signal to doctors against using exercise therapies that appear to do the most harm to ME patients. If only for that reason, I quite like the name.

I substantially agree, with the proviso that this is about direct risks. Indirect risks occur through misinformation to doctors as well as shifting biases in research and funding.

On direct risks, this misses out false negatives. It's also possible that patients will fail to be properly diagnosed.

False positives, false negatives, poor treatment choice, education bias, research bias, funding bias - these seem to sum up the risks. I say risks; there is no certainty either way.
 

CBS

Senior Member
Messages
1,522
In any case this is a specialty of Jason's. He has data sets he uses for precisely this purpose.

Anyone who has participated in Jason's research has had the 'pleasure' of responding to large batteries of questions (most of which are the same, though I suspect some change slightly) on numerous occasions over the course of months, if not years. That's how this is done correctly. It doesn't happen overnight; it requires funding and commitment.

Jason should be appreciated and applauded for his stand on the side of scientific rigor and against the (considered and sincere yet still) whimsical creation of Dx criteria. He's our best resource in this area and to ignore him is misguided at best.
 
Last edited:

user9876

Senior Member
Messages
4,556
Actually, completed Dx criteria should have the smallest possible number of very specific questions, each with a known relationship to a particular symptom/component of the disease. You create the instrument from a massive number of questions, many asked in similar ways or designed to assess similar processes. Then you use analyses (typically some sort of regression or principal components analysis) to determine which version of each component adds the greatest amount of diagnostic accuracy (typically in a "stepped" process). Along the way you may tweak the way questions are asked in order to assess the effect.

I find it strange that when I read papers on questionnaire design, the advice they give doesn't seem to be followed, or even known about, by those who design medical questionnaires. Things like question ordering are quite critical, as is the language used, so that questions are not framed to give a particular answer. Stats on questionnaire scores can be very dodgy depending on the scoring system; I think great care needs to be taken to understand the underlying assumptions, which often invalidate the analysis being done.



Fine, call it whatever you want in order to signal some important shift, just leave the Dx criteria alone until you know what you're doing with them.

My point is that all Dx criteria for ME are basically quite arbitrary.


You start by validating against existing case definitions. If, for example, the CCC and SEID create basically the same cohort, then you know they are essentially equivalent. If there are major issues, then these can be highlighted. This crosswise comparison has been done between definitions like Fukuda, Oxford, and the CCC.

If I were to look at diagnostic criteria, the biggest thing I would do would be a usability experiment where I gave the criteria to a number of clinicians and tested their consistency at diagnosing the same thing. That is, the same criteria but different clinicians and a range of patients. To me the most important question is whether, out in the field, they will produce consistent results. If the criteria are complex or vague, this may not happen.
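A minimal sketch of what that consistency experiment might report, assuming several clinicians each rate the same set of patients against the criteria. The ratings below are invented; the agreement statistic (Fleiss' kappa, via statsmodels) is one common choice for more than two raters.

```python
# Hedged sketch of a clinician-consistency check on a set of criteria.
# Ratings are invented: rows = patients, columns = clinicians,
# values = 1 (meets criteria) or 0 (does not).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(4)
ratings = rng.integers(0, 2, size=(50, 6))

table, _ = aggregate_raters(ratings)   # counts per patient per category
print("Fleiss' kappa:", round(fleiss_kappa(table), 2))
# Kappa near 1 means clinicians apply the criteria consistently;
# kappa near 0 means agreement is no better than chance.
```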

I'm not sure what a crosswise comparison really gives, apart from understanding how much the subsets overlap. I don't know how you would interpret those cases that are a member of one but not another.

Incidentally, I think Oxford proved quite unreproducible if you look at the PACE papers. They have issues with filtering out patients who initially met Oxford but on second review didn't. In their recovery paper they also have issues where people who are clearly ill no longer met the Oxford criteria.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
If I were to look at diagnostic criteria, the biggest thing I would do would be a usability experiment where I gave the criteria to a number of clinicians and tested their consistency at diagnosing the same thing.
This is what the APA do for the DSM. It means that a diagnosis of conversion disorder or an equivalent is good if the clinicians agree. Even though the disease category may be entirely artificial. Such validation does have a place, but like all such studies has to be interpreted carefully.
 

user9876

Senior Member
Messages
4,556
This is what the APA do for the DSM. It means that a diagnosis of conversion disorder or an equivalent is good if the clinicians agree. Even though the disease category may be entirely artificial. Such validation does have a place, but like all such studies has to be interpreted carefully.

What it means is that clinicians can use the diagnostic tool. It doesn't mean that the tool says anything valid. But the converse problem is that a diagnostic tool in the designing expert's hands may produce good results, but in anyone else's hands it's random. This would be a very bad thing for a diagnostic tool.
 

A.B.

Senior Member
Messages
3,780
Psychiatry demonstrates impressively how fast research progresses on diseases that are diagnosed based on vague, arbitrary, subjective criteria without any biomarkers.

The SEID criteria need to be tested to see whether they capture the population that does have abnormal CPET results.
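One way to frame that test, sketched under my own assumptions: treat an abnormal two-day CPET as the reference standard and ask how sensitive and specific the SEID criteria are against it. Both arrays below are placeholders, not study data.

```python
# Hedged sketch: how well do criteria capture the abnormal-CPET group?
# Both arrays are invented placeholders for real study data.
import numpy as np

meets_seid = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0], dtype=bool)     # criteria result
abnormal_cpet = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 1], dtype=bool)  # reference test

sensitivity = (meets_seid & abnormal_cpet).sum() / abnormal_cpet.sum()
specificity = (~meets_seid & ~abnormal_cpet).sum() / (~abnormal_cpet).sum()
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```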
 
Last edited:

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
What it means is that clinicians can use the diagnostic tool. It doesn't mean that the tool says anything valid. But the converse problem is that a diagnostic tool in the designing expert's hands may produce good results, but in anyone else's hands it's random. This would be a very bad thing for a diagnostic tool.
I do not disagree. It's a risk.