Leonard Jason's blog on the IOM criteria 3/4/2015

Discussion in 'Institute of Medicine (IOM) Government Contract' started by Nielk, Mar 4, 2015.

  1. Nielk

    Nielk

    Messages:
    6,877
    Likes:
    10,615
    http://blog.oup.com/2015/03/iom-chr...r&utm_medium=oupacademic&utm_campaign=oupblog

    continue here
     
    alex3619, CBS, Valentijn and 5 others like this.
  2. Roy S

    Roy S former DC ME/CFS lobbyist

    Messages:
    669
    Likes:
    1,405
    Illinois, USA
    "An alternative vision is still possible, if those in power are willing to bring all interested parties to the table, including international representatives, historians on the science of illness criteria, and social scientists adept at developing consensus. In a collaborative, open, interactive, and inclusive process, issues may be explored, committees may be charged with making recommendations, and key gatekeepers may work collaboratively and transparently to build a consensus for change. Involve all parties — the patients, scientists, clinicians, and government officials — in the decision-making process."
     
  3. CBS

    CBS Senior Member

    Messages:
    1,502
    Likes:
    847
    The IOM’s effort to dislodge chronic fatigue syndrome - Jason 3/4/15

    Link - http://blog.oup.com/2015/03/iom-chr...r&utm_medium=oupacademic&utm_campaign=oupblog

    <snip>

    Didn't see this posted anywhere on PR. I am quite sure that Jason knows more about the importance of diagnostic criteria, and has worked with more patient data, than the entire IOM put together.
     
    Valentijn, RL_sparky and Bob like this.
  4. Bob

    Bob

    Messages:
    9,844
    Likes:
    33,947
    England (south coast)
  5. CBS

    CBS Senior Member

    Messages:
    1,502
    Likes:
    847
    Admittedly, I don't waste much time here and I did search for this article by its title (The IOM’s effort to dislodge chronic fatigue syndrome) before posting my own thread but seriously, one comment? Damn!
     
  6. alex3619

    alex3619 Senior Member

    Messages:
    12,528
    Likes:
    35,242
    Logan, Queensland, Australia
    I am in substantial agreement with Jason's views. He emphasizes the need for empirically validating a new definition. Now, while this has never been done initially, even for the CCC or ICC, a supposed evidence-based review should do so as a matter of course.

    He also emphasizes stakeholder involvement. Everybody needs a seat at the table, and by table I don't mean the tiny one in the basement with the old broken chairs and a faulty video link to the main banquet hall.
     
    oceiv, mango and aimossy like this.
  7. user9876

    user9876 Senior Member

    Messages:
    2,584
    Likes:
    18,184
    I don't understand how you would go about validating a new definition when we have no idea of cause or how many different diseases and disease mechanisms are covered by ME. Basically what is a definition being validated against?
     
  8. CBS

    CBS Senior Member

    Messages:
    1,502
    Likes:
    847
    Jason would be the perfect person to validate any new criteria. Community psychologists and epidemiologists create and validate symptom-based surveys all the time. Agreed, a biomarker would help, but even a biomarker would need to be validated in this group of patients, and that would be no small task.

    There are many types of validity. http://kestrelconsultants.com/reference_files/Validating_Questionnaires.pdf

    Validation of SEID Dx criteria would be far from perfect, but without any knowledge of the performance characteristics over time, across groups (e.g. sudden versus gradual onset), or across different age groups, the entire enterprise is nothing more than a stab in the dark, with the potential of very significant and perhaps severe consequences for patients as well as for any ongoing efforts that would need to assume consistent cohorts across past and future research.

    Before she was appointed to the IOM committee, Nancy Klimas got it right when she said that the entire process needed to be put on hold for a year, at which time we would likely have far more information about the pathology underlying the disease.

    Now everybody is in a huge rush, scared to death that this is our one last chance. My guess is that this was actually the CDC's (and the CAA/SMCI's) last chance to appear as though they had done anything of substance (before biomedical research was going to make this all irrelevant, and I suspect that, like Klimas, both those orgs knew it), when what they are doing is putting us all at serious risk with a new and unknown set of diagnostic criteria, diverting precious resources, and creating a false sense of panic in the patient population.
     
    Last edited: Mar 19, 2015
    RL_sparky and Wildcat like this.
  9. user9876

    user9876 Senior Member

    Messages:
    2,584
    Likes:
    18,184
    None of that makes any sense to me. You have a population with a given symptom set and a classification technique to try and sort that population into groups. It may be that symptom sets form discrete clusters in which case you can label the clusters but this could be erroneous. If symptom sets have significant overlaps you are stuffed and have no hope. It seems to follow from the structure of the problem.
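    (To make the clustering point concrete: a minimal sketch in Python, with made-up binary symptom data, of the kind of check I mean. A silhouette score near zero would say the symptom sets overlap heavily and the "clusters" are largely an artefact of the method.)

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(0)
        # Hypothetical data: 200 patients x 10 binary symptoms (1 = symptom present)
        symptoms = rng.integers(0, 2, size=(200, 10))

        # Try to sort the population into two groups on symptom pattern alone
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(symptoms)

        # Silhouette near 0 means the symptom sets overlap and the grouping
        # says more about the method than about distinct diseases
        print(silhouette_score(symptoms, km.labels_))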

    What you describe as validation is more about whether a questionnaire acts as a classifier in the same way as a clinician. But this may well just follow due to the structure of the design process.

    There are also massive problems with internal validation. At one level, multiple equivalent questions should be asked because people aren't accurate in their responses (that was basically Likert's point). But at another level that rarely happens, as it means too long a survey. Then you have issues of how to combine results from multiple questions, which the medical profession don't seem to understand.
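    (For what it's worth, the standard internal consistency check on a block of supposedly equivalent Likert items is Cronbach's alpha; a rough sketch with made-up responses:)

        import numpy as np

        # Hypothetical data: 100 respondents x 4 supposedly equivalent Likert items (1-5)
        items = np.random.default_rng(1).integers(1, 6, size=(100, 4))

        # Cronbach's alpha: internal consistency of the item block
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        alpha = (k / (k - 1)) * (1 - item_var / total_var)
        print(alpha)  # random answers give alpha near 0; a coherent scale is usually above 0.7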

    So I don't see how any diagnostic system can really be validated with no idea of mechanism (whether it's CDC, SEID, CCC, ICC, or ACC). We could compare them, but which is correct? We have no real way of judging.

    Risks for patients come in two ways:
    1) Patients with a different disease are misclassified. I would say this is mainly an issue of the clinical decision-making process rather than of the diagnostic criteria. Basically, decision makers often repeat decisions based on what is currently in their mind rather than going through all the valid hypotheses and judging them against the available evidence. The less time available, the worse the effects, from what I remember of the psychological literature.
    2) Patients with the disease don't get the treatment available, or get inappropriate treatment. There are no real evidence-based treatments for ME, although I think there are ones worth trialling. I think the name SEID was intended as a signal to doctors against using the exercise therapies that appear to do the most harm to ME patients. For that reason alone I quite like the name.
     
    Cheshire and Valentijn like this.
  10. CBS

    CBS Senior Member

    Messages:
    1,502
    Likes:
    847
    I agree that it is far from perfect (it does make sense) but the alternative is to do nothing (which is what has been proposed) and that's far worse.

    None of the present or proposed criteria will likely have any impact on my current Dx or care, but I suspect they may for others.

    The biggest risk, which you fail to address, is the impact on past and future research cohorts (I expect that what we'll have is investigators going back over databases and making a significant number of assumptions and calling that good - and we wonder why there's so little progress).
     
  11. CBS

    CBS Senior Member

    Messages:
    1,502
    Likes:
    847
    Questionnaires and subjectively reported symptoms are quite similar. The process has multiple checks/simultaneously used approaches to minimize issues arising out of the process - as the link I posted states, you never rely on a single type of validity, as each is prone to its own types of bias.

    The IOM can say all day long that SEID is an improvement because it isn't a diagnosis of exclusion, but without a biomarker you have a diagnosis of exclusion, and you have to be careful not to draw in patients outside of your target group.

    Actually, completed Dx criteria should have the smallest possible number of very specific questions, each with a known relationship to a particular symptom/component of the disease. You create the instrument from a massive number of questions, many asked in similar ways or designed to assess similar processes. Then you use analyses (typically some sort of regression/principal components analysis) to determine which version of each component adds the greatest amount of diagnostic accuracy (typically in a "stepped" process). Along the way you may tweak the way questions are asked in order to assess the effect.
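    (Roughly, that "stepped" winnowing could look like the following sketch: forward selection over hypothetical candidate questions, scored by cross-validated logistic regression. The data and the cut-off of five retained questions are made up purely for illustration.)

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 40))   # hypothetical: 300 patients x 40 candidate questions
        y = (X[:, 0] + X[:, 3] + rng.normal(size=300) > 0).astype(int)  # case/control labels

        def auc(cols):
            # cross-validated discrimination of the current question subset
            return cross_val_score(LogisticRegression(max_iter=1000),
                                   X[:, cols], y, cv=5, scoring="roc_auc").mean()

        selected, remaining = [], list(range(X.shape[1]))
        for _ in range(5):  # keep only a handful of the most informative questions
            best = max(remaining, key=lambda q: auc(selected + [q]))
            selected.append(best)
            remaining.remove(best)
            print(selected, round(auc(selected), 3))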

    As Jason stated in his blog post, one of the biggest problems with Fukuda (and the reason it pulls in so many primary depression patients) was the increase from four symptom groups (CCC) to eight.

    Doing nothing is not an improvement over what Jason is asking for.

    Fine, call it whatever you want in order to signal some important shift; just leave the Dx criteria alone until you know what you're doing with them.
     
    Last edited: Mar 19, 2015
  12. alex3619

    alex3619 Senior Member

    Messages:
    12,528
    Likes:
    35,242
    Logan, Queensland, Australia
    You start by validating versus existing case definitions. If for example the CCC and SEID create basically the same cohort then you know they are essentially equivalent. If there are major issues then this can be highlighted. This crosswise comparison has been done between definitions like Fukuda, Oxford and CCC.
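    (A sketch of what that crosswise check boils down to, assuming you already have yes/no calls for each definition on the same, entirely hypothetical, set of patients: raw overlap plus chance-corrected agreement.)

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(3)
        # Hypothetical: whether each of 500 referred patients meets each definition
        meets_ccc = rng.integers(0, 2, size=500)
        meets_seid = np.where(rng.random(500) < 0.85, meets_ccc, 1 - meets_ccc)  # mostly agree

        overlap = np.mean(meets_ccc == meets_seid)        # raw agreement
        kappa = cohen_kappa_score(meets_ccc, meets_seid)  # agreement beyond chance
        print(overlap, kappa)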

    With more time and funding you can take an established cohort, as is being done with the CDC multisite study, or the Stanford cohort, etc., and see what happens. This can then again be compared to, for example, the CCC.

    Most of the arguments for and against SEID are without any investigation into the definition. Everyone could be off base for a whole variety of reasons.

    In any case this is a specialty of Jason's. He has data sets he uses for precisely this purpose.

    No definition will ever be perfect until we have reliable biomarkers.

    If we simply assert that no definition can be validated, and accept it at that, we have no basis whatsoever for rejecting the SEID definition, and all debate is over. The reality is that no definition can be perfectly validated, and the closest that will come is after we can use biomarkers.
     
    Last edited: Mar 19, 2015
  13. alex3619

    alex3619 Senior Member

    Messages:
    12,528
    Likes:
    35,242
    Logan, Queensland, Australia
    This is correct. However we have a choice of no information or some information. More is better.
     
    CBS likes this.
  14. alex3619

    alex3619 Senior Member

    Messages:
    12,528
    Likes:
    35,242
    Logan, Queensland, Australia
    I substantially agree, with the proviso that this is about direct risks. Indirect risks occur through misinformation to doctors as well as shifting biases in research and funding.

    On direct risks, this misses out false negatives. It's also possible that patients will fail to be properly diagnosed.

    False positives, false negatives, poor treatment choice, education bias, research bias, funding bias: these seem to sum up the risks. I say risks because there is no certainty either way.
     
  15. CBS

    CBS Senior Member

    Messages:
    1,502
    Likes:
    847
    Anyone who has participated in one of Jason's research studies has had the 'pleasure' of responding to large batteries of questions (most of which are the same, though some I suspect change slightly) on numerous occasions over the course of months (if not years). That's how this is done correctly. It doesn't happen overnight; it requires funding and commitment.

    Jason should be appreciated and applauded for his stand on the side of scientific rigor and against the (considered and sincere yet still) whimsical creation of Dx criteria. He's our best resource in this area and to ignore him is misguided at best.
     
    Last edited: Mar 19, 2015
    mango likes this.
  16. user9876

    user9876 Senior Member

    Messages:
    2,584
    Likes:
    18,184
    I find it strange that when I read papers on questionnaire design, the advice they give doesn't seem to be followed, or even known about, by those who design medical questionnaires. Things like question ordering are quite critical, as is the language used, so that questions are not framed to give a particular answer. Stats on questionnaire scores can be very dodgy depending on the scoring system; I think great care needs to be taken to understand the underlying assumptions, which often invalidate the analysis being done.



    My point is all Dx criteria for ME are basically quite arbitrary.


    If I were to look at a set of diagnostic criteria, the biggest thing I would do would be a usability experiment where I gave the criteria to a number of clinicians and tested their consistency at diagnosing the same thing. That is, the same criteria but different clinicians and a range of patients. To me the most important question is whether, out in the field, it will produce consistent results. If the criteria are complex or vague, this may not happen.
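    (For illustration, consistency across clinicians is usually scored with something like Fleiss' kappa over their diagnose/don't-diagnose calls; a rough sketch with made-up ratings:)

        import numpy as np

        def fleiss_kappa(counts):
            # counts: patients x categories, each cell = number of clinicians choosing that category
            n_raters = counts.sum(axis=1)[0]
            p_cat = counts.sum(axis=0) / counts.sum()
            p_agree = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
            p_chance = (p_cat ** 2).sum()
            return (p_agree.mean() - p_chance) / (1 - p_chance)

        rng = np.random.default_rng(4)
        # Hypothetical: 8 clinicians each classify 50 patients as meeting the criteria (1) or not (0)
        calls = rng.integers(0, 2, size=(50, 8))
        counts = np.stack([(calls == 0).sum(axis=1), (calls == 1).sum(axis=1)], axis=1)
        print(fleiss_kappa(counts))  # near 0 for these random calls; usable criteria should score well above that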

    I'm not sure what a crosswise comparison really gives apart from understanding how much the subsets overlap. I don't know how you would interpret those cases that are a member of one but not another.

    Incidentally, I think Oxford proved quite unreproducible if you look at the PACE papers. They have issues with filtering out patients who initially met Oxford but on second review didn't. In their recovery paper they also have issues where people who are clearly ill no longer met the Oxford criteria.
     
    Valentijn likes this.
  17. alex3619

    alex3619 Senior Member

    Messages:
    12,528
    Likes:
    35,242
    Logan, Queensland, Australia
    This is what the APA do for the DSM. It means that a diagnosis of conversion disorder, or an equivalent, is good if the clinicians agree, even though the disease category may be entirely artificial. Such validation does have a place, but like all such studies it has to be interpreted carefully.
     
  18. user9876

    user9876 Senior Member

    Messages:
    2,584
    Likes:
    18,184
    What it means is that clinicians can use the diagnostic tool. It doesn't mean that the tool says anything valid. But the converse problem is that a diagnostic tool in the designing expert's hands may produce good results while in anyone else's hands it's random. That would be a very bad thing for a diagnostic tool.
     
  19. A.B.

    A.B. Senior Member

    Messages:
    3,750
    Likes:
    23,190
    Psychiatry demonstrates impressively how fast research progress is on diseases that are diagnosed based on vague, arbitrary, subjective criteria without any biomarkers.

    The SEID criteria need to be tested to see whether they capture the population that does have abnormal CPET results.
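    (That test would boil down to something like the sketch below, given hypothetical SEID classifications and two-day CPET results on the same patients: sensitivity against the CPET-abnormal group, specificity against the rest.)

        import numpy as np

        rng = np.random.default_rng(5)
        # Hypothetical data: does each patient meet SEID, and is their 2-day CPET abnormal?
        meets_seid = rng.integers(0, 2, size=200).astype(bool)
        cpet_abnormal = rng.integers(0, 2, size=200).astype(bool)

        sensitivity = (meets_seid & cpet_abnormal).sum() / cpet_abnormal.sum()
        specificity = (~meets_seid & ~cpet_abnormal).sum() / (~cpet_abnormal).sum()
        print(sensitivity, specificity)  # how well the criteria capture the CPET-abnormal group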
     
    Last edited: Mar 20, 2015
    Cheshire and Valentijn like this.
  20. alex3619

    alex3619 Senior Member

    Messages:
    12,528
    Likes:
    35,242
    Logan, Queensland, Australia
    I do not disagree. It's a risk.
     
