Invest in ME London conference 2012

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
However, we won't get any biomarkers with the mixed cohorts.

This is only correct if using old technology. It would certainly make it easier to have well-defined cohorts. However, with a well-designed, well-funded and well-resourced study, it's possible to take any group and start working on biomarkers. It's about using clustering methods to group the patients. Of course, the better the cohort the easier this will be, but it's not impossible. If the Oxford definition people wanted to, and funded a really good study (presuming they could design such a study), then they could find biomarkers. They just prefer not to, and have even dropped talking about the immune biomarkers they have identified in the past. Bye, Alex
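As a purely illustrative aside, a minimal sketch of the clustering approach described above might look like this in Python; the cohort size, the four "biomarkers" and their values are all invented, and a real study would involve far more careful measurement and preprocessing.

```python
# Illustrative only: cluster a mixed patient cohort on biomarker data.
# All numbers are synthetic; the "biomarkers" are hypothetical placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend we measured 4 biomarkers in 300 patients recruited under a broad
# definition, with two hidden subgroups that differ in their immune markers.
group_a = rng.normal(loc=[1.0, 2.0, 0.5, 3.0], scale=0.3, size=(150, 4))
group_b = rng.normal(loc=[2.5, 0.5, 1.5, 3.1], scale=0.3, size=(150, 4))
X = np.vstack([group_a, group_b])

# Standardise so that no single assay dominates the distance calculation.
X_scaled = StandardScaler().fit_transform(X)

# Ask for two clusters; in practice the number of clusters would be chosen
# with silhouette scores, gap statistics or model-based criteria.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)

print(np.bincount(labels))  # sizes of the recovered subgroups
```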
 

floydguy

Senior Member
Messages
650
This is only correct if using old technology. It would certainly make it easier to have well-defined cohorts. However, with a well-designed, well-funded and well-resourced study, it's possible to take any group and start working on biomarkers. It's about using clustering methods to group the patients. Of course, the better the cohort the easier this will be, but it's not impossible. If the Oxford definition people wanted to, and funded a really good study (presuming they could design such a study), then they could find biomarkers. They just prefer not to, and have even dropped talking about the immune biomarkers they have identified in the past. Bye, Alex

Er, what's the difference between mixed cohorts and "grouping" using "clustering methods"? Is this just semantics? It seems like either way you have to sub-group at some point, whether it's on the front-end or the back-end, to match biomarkers with groups of people. Does anybody really think there is ever going to be one biomarker for this heterogeneous population?
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Er, what's the difference between mixed cohorts and "grouping" using "clustering methods"? Is this just semantics? It seems like either way you have to sub-group at some point, whether it's on the front-end or the back-end, to match biomarkers with groups of people. Does anybody really think there is ever going to be one biomarker for this heterogeneous population?

If a biomarker study is carried out, then cohorts can be created using just the biomarker data.
The biomarker data can separate a heterogeneous group into a number of homogeneous groups.
So, just by using biomarker data, it could be possible to separate CFS patients into different cohorts for researching different treatment approaches.
That's the theory anyway.
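As a sketch of that theory, here is an illustrative example in which the biomarker data alone decides how many homogeneous cohorts a heterogeneous group splits into, using a Gaussian mixture model with the number of components chosen by BIC; the marker count and the hidden group structure are entirely made up.

```python
# Illustration only: let the biomarker data itself decide how many
# homogeneous cohorts a heterogeneous group contains.
# The marker count and the hidden three-group structure are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_markers = 5

# A "heterogeneous" cohort that secretly contains three subgroups.
centres = rng.normal(size=(3, n_markers)) * 3.0
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, n_markers)) for c in centres])

# Fit mixtures with 1..6 components and keep the one with the lowest BIC.
best_k, best_bic = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    if gmm.bic(X) < best_bic:
        best_k, best_bic = k, gmm.bic(X)

print(f"BIC favours {best_k} biomarker-defined cohorts")
```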
 
Messages
5,238
Location
Sofa, UK
This is only correct if using old technology. It would certainly make it easier to have well-defined cohorts. However, with a well-designed, well-funded and well-resourced study, it's possible to take any group and start working on biomarkers. It's about using clustering methods to group the patients. Of course, the better the cohort the easier this will be, but it's not impossible. If the Oxford definition people wanted to, and funded a really good study (presuming they could design such a study), then they could find biomarkers. They just prefer not to, and have even dropped talking about the immune biomarkers they have identified in the past. Bye, Alex
I concur. Cluster analysis should be able to identify subsets of mixed cohorts, and there seems to have been some progress in this respect already. Baraniuk's presentation suggested that his group has made significant progress in this respect, both based on questionnaire data and confirmed by spinal fluid analysis. Even with the PACE trial data, analysis by independent researchers should be able to help with subtyping and with determining who (if anyone) is helped by CBT and GET. That's why it's so important for detailed data to be collected, made publicly available, and subjected to cluster analysis with the latest data analysis techniques. The PACE authors collected a wealth of data, and they need to be compelled to comply with their legal obligations and the requirements of their research council funders, and release that data so that it can be analysed by other researchers. And presumably their fears over what that open analysis would reveal explain why they have failed to comply with those requirements.
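To illustrate the kind of cross-check being described here (clusters derived from questionnaires confirmed against an independent biological assay), below is a toy example that clusters two synthetic data sets, one standing in for questionnaire scores and one for spinal-fluid protein levels, and measures how well the two partitions agree; every number in it is invented.

```python
# Illustration only: do clusters from questionnaire scores agree with
# clusters from an independent assay (here, imaginary CSF protein levels)?
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(2)
true_subgroup = np.repeat([0, 1], 100)      # hidden ground truth, 200 patients

# Questionnaire scores and (noisier) spinal-fluid protein levels that both
# partly reflect the same underlying subgroups.
questionnaire = true_subgroup[:, None] * 2.0 + rng.normal(scale=0.7, size=(200, 6))
csf_proteins = true_subgroup[:, None] * 1.5 + rng.normal(scale=1.0, size=(200, 10))

q_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(questionnaire)
p_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(csf_proteins)

# 1.0 means identical partitions; values near 0 mean no better than chance.
print("agreement between the two clusterings:",
      adjusted_rand_score(q_labels, p_labels))
```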
 

floydguy

Senior Member
Messages
650
I concur. Cluster analysis should be able to identify subsets of mixed cohorts, and there seems to have been some progress in this respect already. Baraniuk's presentation suggested that his group has made significant progress in this respect, both based on questionnaire data and confirmed by spinal fluid analysis. Even with the PACE trial data, analysis by independent researchers should be able to help with subtyping and with determining who (if anyone) is helped by CBT and GET. That's why it's so important for detailed data to be collected, made publicly available, and subjected to cluster analysis with the latest data analysis techniques. The PACE authors collected a wealth of data, and they need to be compelled to comply with their legal obligations and the requirements of their research council funders, and release that data so that it can be analysed by other researchers. And presumably their fears over what that open analysis would reveal explain why they have failed to comply with those requirements.

I thought we couldn't just go around poking people in the spine :).
 
Messages
5,238
Location
Sofa, UK
I thought we couldn't just go around poking people in the spine :).
I guess some researchers can. :)

Yes, the spinal fluid analysis isn't very practical as a test for all patients (although I have been told by somebody who had one done that the pain of the procedure is water off a duck's back compared to the day-to-day pain of moderate/severe ME). But as a means of subgrouping on a biomedical basis, it's very promising. Baraniuk suggested that he was seeing a consistency between the cluster analysis based on questionnaire data and the clustering of spinal protein signals. If further investigations clarify those subgroups and confirm them with this objective signal, it may be possible to define those subgroups accurately using only questionnaires, which would amount to the construction of accurate, biomedically-based definitions of distinct conditions within the ME/CFS patient population. This would mean criteria analogous to Fukuda, ICC, CCC, etc., but biomedically validated. It will be fascinating to see, when such cluster analyses come to fruition, how accurate the existing definitions are in defining those different subgroups.
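A rough sketch of this "questionnaire-only criteria" idea: once subgroups have been defined from a biological signal, one can test whether questionnaire answers alone recover the same labels, for example by cross-validating a simple classifier. The data below are synthetic stand-ins, not anything from Baraniuk's work.

```python
# Illustration only: once subgroups have been defined from a biological
# signal, check whether questionnaire answers alone recover the same labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
biomarker_label = np.repeat([0, 1], 150)    # subgroup defined from, say, CSF proteomics

# Twelve questionnaire items that partially track the biological subgroup.
questionnaire = biomarker_label[:, None] * 1.2 + rng.normal(scale=1.0, size=(300, 12))

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, questionnaire, biomarker_label, cv=5)
print("cross-validated accuracy of questionnaire-based assignment:", round(scores.mean(), 3))
```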
 

Ember

Senior Member
Messages
2,115
If new technology is powerful enough to compensate for poor research design, then why is Ian Lipkin so concerned about his cohorts? He and Dr. Hornig have been quoted concerning the Lipkin/Hornig study:
“The effort in ME/CFS is to try to find some biomarkers that will be likely to identify a set of pathways that are likely to [be] involved. That will be an enormous gain for the field and of course the patient,” said Dr. Hornig. Biomarkers in ME/CFS can be used to create diagnostic laboratory tests as well as to determine therapy response and prognosis.

The key to maximizing the outcomes of these tests is the criteria of the patients selected, according to Dr. Lipkin. He said this will give the greatest possibility of finding objective measures for monitoring and measuring the disease. University of Miami researcher and physician Dr. Nancy Klimas, who has been involved in several clinical definitions of ME/CFS, is in charge of the cohort recruitment to draw 200 patients from five sites located throughout the U.S.

“What we want to do is start with patients who have been characterized extensively using standardized criteria established by a group of widely respected clinical researchers,” said Dr. Lipkin (http://trialx.com/curetalk/2011/11/...equencing-and-proteomics-to-hunt-cfs-viruses/).
Amy Dockser Marcus quotes Dr. Lipkin concerning the XMRV study:
As a starting point, everyone had to agree on how to define a CFS patient for the purposes of the study. The issue has been highly contentious and Lipkin says they tried to agree to criteria for patient selection that includes everyone's viewpoints.

The solution: the study will seek to enroll people who, in addition to meeting criteria for two widely used, symptom-based definitions of CFS, showed signs of infection such as a sore throat or tender lymph nodes around the time they developed CFS. The thought is that if there is a viral link to CFS, it's most likely to show up in those patients (http://blogs.wsj.com/health/2010/11/17/gearing-up-for-the-big-search-for-xmrv/).
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
If new technology is powerful enough to compensate for poor research design, then why is Ian Lipkin so concerned about his cohorts?

The quality and type of patient selection are essential in all studies.
But the type of research we have been discussing could, in theory, supersede the need for the current diagnostic criteria, by creating new biomedically-based criteria.
For example, if we could do a blood test in order to diagnose patients into subgroups of 'CFS' or 'ME', then we wouldn't need the current diagnostic criteria, which are based on signs and symptoms.
All the biomarker studies are ultimately working towards a diagnosis based on a blood test (or other similar tissue tests).
 

SOC

Senior Member
Messages
7,849
I think it's the degrees we're talking about. My hypothesis is that most of us could be sub-grouped based on severity of symptoms/labs in a particular area. I am not sure we'll ever get to biomarkers if we don't do this.

For me it's immune. I think the other area that probably separates us is in regards to what viruses are present. In my case it's EBV, HHV-6, Enteroviruses, VZV, with VZV and the Enteroviruses being the most elevated.

More importantly, for those with severe activity problems, you don't want me in your studies because I will skew those results as well.

The contentiousness is that we are generally experiencing something different that might have a common etiology. Some people are more active than others; some have bad allergies (i.e. mold, food, etc.). We'll keep going round and round if we don't recognize that we are different and should be researched that way.

I see. The idea is to group according to symptom severity. I guess I was confused because I would fall into the "eclectic" category and haven't fully internalized that many patients are strongly in one symptom grouping. Although with symptomatic treatments affecting many symptoms it's hard to know (and I don't want to know) what my illness would look like without them.

The chronic infections that we are battling may group us, but I think it's more immune problems that are at the root and which chronic infection we have may be more related to exposure than anything specific to the patient.

I suppose the bottom line is -- we won't know if there's clear groupings that would suggest different illnesses unless we look for them :)
 

SOC

Senior Member
Messages
7,849
Amy Dockser Marcus quotes Dr. Lipkin concerning the XMRV study:
As a starting point, everyone had to agree on how to define a CFS patient for the purposes of the study. The issue has been highly contentious and Lipkin says they tried to agree to criteria for patient selection that includes everyone's viewpoints.

The solution: the study will seek to enroll people who, in addition to meeting criteria for two widely used, symptom-based definitions of CFS, showed signs of infection such as a sore throat or tender lymph nodes around the time they developed CFS. The thought is that if there is a viral link to CFS, it's most likely to show up in those patients (http://blogs.wsj.com/health/2010/11/17/gearing-up-for-the-big-search-for-xmrv/).
[my bolding]

I wonder what the bones of contention were. There must have been somebody hanging onto lesser criteria rather than using the ICC. The CDC folks, maybe?

I also wonder what the "widely used, symptom-based definitions of CFS" are? Just because they're widely used doesn't make them good definitions. It seems stupid to use a ridiculously broad definition like Reeves or Oxford. So they probably used Fukuda and.... what?
 

Ember

Senior Member
Messages
2,115
But the type of research we have been discussing could, in theory, supersede the need for the current diagnostic criteria, by creating new bio-medically-based criteria.
I think you missed my point, Bob. To find biomarkers, you need research, and the criteria are key to that research. “The key to maximizing the outcomes of these tests is the criteria of the patients selected, according to Dr. Lipkin.”
 

Sing

Senior Member
Messages
1,782
Location
New England
How about in terms of actual testing? Did your NKC function, TGF Beta1, MSH, VIP, and TH2-oriented immune system all improve? Do you think they might not have improved, and the other things became additions to the immune problems? I think in my case the neurological may have improved, but I don't really know due to lack of consistent testing.
I never had careful, thorough workups and testing done. Medicine on the cheap where I live, and no experts in ME/CFS.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I think you missed my point, Bob. To find biomarkers, you need research, and the criteria are key to that research. “The key to maximizing the outcomes of these tests is the criteria of the patients selected, according to Dr. Lipkin.”

I understood what you meant, Ember.
We were discussing other complex types of biomarker studies, such as proteomics, and genetics. (Not what Lipkin is doing.)
If you take a heterogeneous cohort of CFS patients, then it is possible that such a study might be able to divide the patients up into homogeneous cohorts based on, for example, up-regulation of genes, or a type of protein abnormality.
However, I am talking about theory. In practice, it's best to use diagnostic criteria that are as selective as possible.
But even Lipkin could use his pathogen results to sub-divide his selection of patients.
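For illustration of that last point, sub-dividing a cohort on pathogen panel results (positive/negative tests) might look something like the sketch below; the pathogen list echoes ones mentioned in this thread, and the results matrix is randomly generated, so the subgroups it finds mean nothing in themselves.

```python
# Illustration only: sub-divide a cohort on pathogen panel results.
# The pathogen list echoes ones mentioned in this thread; the 0/1 test
# results are randomly generated placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
pathogens = ["EBV", "HHV-6", "enterovirus", "VZV"]

# Each row is a patient, each column a pathogen test (True = positive).
results = rng.integers(0, 2, size=(120, len(pathogens))).astype(bool)

# Jaccard distance suits presence/absence data; average-linkage clustering.
dist = pdist(results, metric="jaccard")
tree = linkage(dist, method="average")
subgroups = fcluster(tree, t=3, criterion="maxclust")

print("patients per pathogen-profile subgroup:", np.bincount(subgroups)[1:])
```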
 

floydguy

Senior Member
Messages
650
I never had careful, thorough workups and testing done. Medicine on the cheap where I live, and no experts in ME/CFS.

So you don't really know? That's the problem we have: there really isn't much confirmation for so much that goes on with us.
 

Ember

Senior Member
Messages
2,115
We were discussing other complex types of biomarker studies, such as proteomics, and genetics. (Not what Lipkin is doing.)

Can you clarify further? The title of the article is "Dr. Ian Lipkin and Dr. Mady Hornig, Use Deep Sequencing and Proteomics to Hunt CFS Viruses." If Dr. Lipkin's proteomics uses new technology, and the criteria are key here, then why are they not key to the complex types of biomarker studies (such as proteomics and genetics) that you were discussing?
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Er what's the difference between mixed cohorts and "grouping" using "clustering methods". is this just semantics? It seems like either way you have to sub-group at some point whether it's on the front-end or the back-end to match bio-markers with groups of people. Does anybody really think there is ever going to be one bio-marker for this heterogeneous population?

It's not just semantics, floydguy. Subgrouping is indeed critical, but my point is that it can be done under any definition; it's just harder under very heterogeneous conditions, particularly if you have no idea of criteria for subgrouping to start with. I see it as an iterative process. By using less heterogeneous groups, the first few loops of the iteration can be bypassed, at least for that kind of patient. However, all this requires funding and expertise. We don't have enough funding, enough expertise, or a sufficiently wide range of expertise generally to really go down this path properly; what we are doing instead, for the most part, is focussing on existing potential biomarkers. Every now and again, though, a new study comes along that adds some more, like the spinal proteomic study.

Focussing on specific candidate biomarkers has the advantage that we can get more bang for the buck, but has the disadvantage that we might be missing out on a lot. It's all a muddle really; if we had multiple sizeable, well-funded research institutes around the world, this would be much easier.

There may well be biomarkers that are universal; I cannot rule that out. They will however never be diagnostic, because they will not apply only to specific diseases. Such a marker might be confirmatory in conjunction with other markers, but it has no great use in diagnostics. It might however be used to assess recovery, presuming of course that such a marker exists.

Bye, Alex
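To make the iterative subgrouping loop described above a little more concrete, here is a rough, purely hypothetical sketch: cluster the cohort, keep any cluster that looks internally coherent, and send the remainder around again. The data, the coherence cut-off and the number of passes are all arbitrary.

```python
# Rough, hypothetical sketch of an iterative subgrouping loop: cluster,
# keep clusters that look coherent, re-cluster the remainder.
# The data, the 0.2 coherence cut-off and the 3 passes are arbitrary.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 8))              # stand-in biomarker matrix
remaining = np.arange(len(X))              # indices of still-unassigned patients
accepted = []                              # provisional subgroups found so far

for round_no in range(3):                  # a few refinement passes
    if len(remaining) < 20:
        break
    labels = KMeans(n_clusters=3, n_init=10,
                    random_state=round_no).fit_predict(X[remaining])
    sil = silhouette_samples(X[remaining], labels)
    for k in range(3):
        if sil[labels == k].mean() > 0.2:  # crude internal-coherence check
            accepted.append(remaining[labels == k])
    kept = np.concatenate(accepted) if accepted else np.array([], dtype=int)
    remaining = np.setdiff1d(remaining, kept)

print(f"{len(accepted)} provisional subgroups, {len(remaining)} patients unassigned")
```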
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I think you missed my point, Bob. To find biomarkers, you need research, and the criteria are key to that research. “The key to maximizing the outcomes of these tests is the criteria of the patients selected, according to Dr. Lipkin.”
Hi Ember, it's key only because it's the easiest way to do it - faster and cheaper. Since funding is always an issue, that is a problem. However, I do suspect that protein marker clustering is one solution that will appear in a few years. They have to map proteins to genes to symptoms etcetera and then validate them. They then have to make comparisons with other groups, such as MS, RA and depression. At that point it will become a robust set of markers.

Specific focussed studies are good reductionist science. They are also a good way to miss out on serendipitous discoveries. Who knows what they miss by doing this? It's a yin-yang thing: the more focussed you are, the cheaper and easier things become; the less focussed you are, the bigger the chance of accidental discovery of something that might otherwise be missed.

The big issues are funding and its cousin, resources. We don't have enough institutes (and equipment), enough researchers, and enough funding. It's these that force us to be more selective in most cases. That is why Lipkin makes those kinds of comments. It increases cost, time and resource use to filter out noise from a mixed cohort.

Where broader subgrouping/clustering approaches will get their day is that such techniques are generalizable. The technology could come from cancer or MS or Ebola research, and then we simply re-apply it to our favourite condition. This is about using computing power to substitute for thousands of hours of lab study. This kind of thing turned a century-long genome-mapping effort into a decade. It used to take a very long time to map even a single gene or protein. Now we have machines that do the job so fast we can process very large numbers in a single day. The same thing could happen for proteomics, especially when the technology to interact with different databases is developed. I don't know when this will be, but it seems likely to happen at some point.

I am of course biased, given that I have an artificial intelligence background. ;)

Bye, Alex
 

Ember

Senior Member
Messages
2,115
The big issues are funding and its cousin, resources. We don't have enough institutes (and equipment), enough researchers, and enough funding. It's these that force us to be more selective in most cases. That is why Lipkin makes those kinds of comments. It increases cost, time and resource use to filter out noise from a mixed cohort.
I don't believe, Alex, that Dr. Lipkin makes his comments concerning selection criteria because “we don't have enough institutes (and equipment), enough researchers, and enough funding.” I believe he makes them because he understands the value of good research design. “We have the best tools to do the work and the funding required to pursue it,” said center director Dr. Ian Lipkin, “and will bring the very best possible minds to the problem irrespective of institution. We will be taking a broad, open-minded approach to the problem” (http://trialx.com/curetalk/2011/11/...equencing-and-proteomics-to-hunt-cfs-viruses/).

I don't understand what you mean by “broader subgrouping/clustering approaches.” I hope you're not suggesting that we use cluster analysis in the context of broadly-inclusive case definitions as the preferred subgrouping methodology.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi Ember, I suspect you are arguing at a tangent to my argument, and hence misunderstanding the purpose. Broad definitions have value in an appropriate research setting. The way they are being used is however not appropriate.

If Lipkin had unlimited funds and unlimited resources, I think he would indeed be doing what I suggest, in addition to many other lines of enquiry. When you are talking of good research design, there is an implicit issue of cost efficiency, where cost covers not just money but the other things I mentioned. It's also restrictive in what it can uncover.

As I said before, this is a yin-yang argument. We focus on highly selective (reductionist) research because it's cost-effective. My argument, though, is that it's not always outcome-effective. With enough resources you can do so much more. In time I think this will become the norm, as the approach I am suggesting lends itself to extensive automation.

When developed enough, it could be the very best approach for many problems, and far more cost-effective. It's just that we have to develop the tools, which requires resources. The Human Genome Project was uber-expensive. This will be more expensive than that. That doesn't mean it can't be pursued - we are in fact doing that also. It just means it's an expensive road to take, even if it holds more promise for more of us in the long run.

The current use of clustering in proteomics for patients with similar diseases like CFS and post-Lyme is a case in point. If you restrict the patient cohort too much, you may lose many of the pathways, stifling research for another generation. If you loosen the definition too much, you introduce too much noise and increase the resources required to solve the problem. It's a question of balancing the two criteria. I suspect we will focus on becoming more reductionistic, but if that happens we may well miss a big piece of the answer.

Bye, Alex
 

Enid

Senior Member
Messages
3,309
Location
UK
Interesting discussion going on here - having had brain MRIs (high spots), a lumbar puncture and nerve conduction tests (the most painful), with bafflement and slamming shut of my files, it is indeed good to see these respected researchers pursuing/identifying these abnormalities. Personally I still think viral - degrees of damage done or ongoing. I'd be difficult to "group", though, since the whole range of ME symptoms (or clustering prevalence) seemed to appear at different stages. Just one of the many problems for researchers, no doubt.