
Invest in ME London conference 2012

Discussion in 'General ME/CFS News' started by Kate_UK, Oct 12, 2011.

  1. alex3619

    alex3619 Senior Member

    Messages:
    7,501
    Likes:
    12,000
    Logan, Queensland, Australia
    This is only correct if using old technology. It would certainly make it easier to have well-defined cohorts. However, with a well-designed, well-funded and well-resourced study, it's possible to take any group and start working on biomarkers. It's about using clustering methods to group the patients. Of course, the better the cohort, the easier this will be, but it's not impossible. If the Oxford definition people wanted to, and funded a really good study (presuming they could design such a study), then they could find biomarkers. They just prefer not to, and have even dropped talking about the immune biomarkers they have identified in the past. Bye, Alex
     
  2. floydguy

    floydguy Senior Member

    Messages:
    650
    Likes:
    238
    Er, what's the difference between mixed cohorts and "grouping" using "clustering methods"? Is this just semantics? It seems like either way you have to sub-group at some point, whether it's on the front end or the back end, to match biomarkers with groups of people. Does anybody really think there is ever going to be one biomarker for this heterogeneous population?
     
  3. Bob

    Bob

    Messages:
    8,591
    Likes:
    11,519
    South of England
    If a biomarker study is carried out, then cohorts can be created using just the biomarker data.
    The biomarker data can separate a heterogeneous group into a number of homogeneous groups.
    So, just by using biomarker data, it could be possible to separate CFS patients into different cohorts for researching different treatment approaches.
    That's the theory anyway.
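In code, the theory looks something like this. This is a toy sketch only: the patients, the three "biomarkers" and their values are entirely invented, and a from-scratch k-means stands in for real clustering software.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: a "mixed cohort" of 60 patients drawn from two latent
# subgroups with different mean levels on three made-up biomarkers.
subgroup_a = rng.normal(loc=[1.0, 5.0, 0.2], scale=0.3, size=(30, 3))
subgroup_b = rng.normal(loc=[4.0, 1.0, 2.5], scale=0.3, size=(30, 3))
cohort = np.vstack([subgroup_a, subgroup_b])

def kmeans(x, k, iters=50):
    """Bare-bones k-means: seed centroids with evenly spaced rows, then
    alternate nearest-centroid assignment and centroid update."""
    centroids = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for i in range(k):
            if (labels == i).any():  # skip empty clusters
                centroids[i] = x[labels == i].mean(axis=0)
    return labels

# The heterogeneous cohort separates back into two internally similar groups,
# using only the biomarker values - no diagnostic criteria involved.
labels = kmeans(cohort, k=2)
print("patients per cluster:", np.bincount(labels))
```

The point of the sketch is just the shape of the argument: if distinct biology produces distinct marker profiles, the clustering recovers the subgroups from the measurements alone.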
     
  4. Mark

    Mark Acting CEO

    Messages:
    4,528
    Likes:
    2,004
    Sofa, UK
    I concur. Cluster analysis should be able to identify subsets of mixed cohorts, and there seems to have been some progress in this respect already. Baraniuk's presentation suggested that his group has made significant progress, both based on questionnaire data and confirmed by spinal fluid analysis. Even with the PACE trial data, analysis by independent researchers should be able to help with subtyping and with determining who (if anyone) is helped by CBT and GET. That's why it's so important for detailed data to be collected, made publicly available, and subjected to cluster analysis with the latest data analysis techniques. The PACE authors collected a wealth of data, and they need to be compelled to comply with their legal obligations and the requirements of their research council funders, and release that data so that it can be analysed by other researchers. Presumably it is fear of what that open analysis would reveal that explains why they have failed to comply with those requirements.
     
    alex3619 and Enid like this.
  5. floydguy

    floydguy Senior Member

    Messages:
    650
    Likes:
    238
    I thought we couldn't just go around poking people in the spine :).
     
  6. Mark

    Mark Acting CEO

    Messages:
    4,528
    Likes:
    2,004
    Sofa, UK
    I guess some researchers can. :)

    Yes, the spinal fluid analysis isn't very practical as a test for all patients (although I have been told by somebody who had one done that the pain of the procedure is water off a duck's back compared to the day-to-day pain of moderate/severe ME). But as a means of subgrouping on a biomedical basis, it's very promising. Baraniuk suggested that he was seeing a consistency between the cluster analysis based on questionnaire data and the clustering of spinal protein signals. If further investigations clarify those subgroups and confirm them with this objective signal, it may be possible to define those subgroups accurately using only questionnaires, which would amount to the construction of accurate, biomedically based definitions of distinct conditions within the ME/CFS patient population. This would mean criteria analogous to Fukuda, the ICC, the CCC, etc., but biomedically validated. It will be fascinating to see, when such cluster analyses come to fruition, how accurately the existing definitions capture those different subgroups.
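The "consistency" between two independent clusterings can actually be measured. A small sketch using the adjusted Rand index (a standard chance-corrected agreement statistic), with invented labels standing in for questionnaire-based and spinal-fluid-based subgroup assignments for the same ten hypothetical patients:

```python
from itertools import product
from math import comb

def adjusted_rand_index(a, b):
    """Chance-corrected agreement between two label lists: 1.0 means the
    two partitions are identical (up to renaming), ~0.0 means the
    agreement is no better than random labelling."""
    n = len(a)
    classes, clusters = sorted(set(a)), sorted(set(b))
    table = {(i, j): 0 for i, j in product(classes, clusters)}
    for x, y in zip(a, b):
        table[(x, y)] += 1
    sum_ij = sum(comb(v, 2) for v in table.values())
    sum_a = sum(comb(sum(table[(i, j)] for j in clusters), 2) for i in classes)
    sum_b = sum(comb(sum(table[(i, j)] for i in classes), 2) for j in clusters)
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Hypothetical subgroup labels for ten patients: one clustering from
# questionnaire data, one from spinal-fluid proteins; one patient disagrees.
questionnaire = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
spinal_fluid  = [0, 0, 0, 1, 1, 1, 2, 2, 2, 1]
score = adjusted_rand_index(questionnaire, spinal_fluid)
print(round(score, 3))
```

A score near 1 across many patients would be the kind of quantitative confirmation that questionnaire-defined subgroups track the biomedical signal.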
     
    Bob likes this.
  7. Ember

    Ember Senior Member

    Messages:
    1,728
    Likes:
    1,782
    If new technology is powerful enough to compensate for poor research design, then why is Ian Lipkin so concerned about his cohorts? He and Dr. Hornig have been quoted concerning the Lipkin/Hornig study:
    Amy Dockser Marcus quotes Dr. Lipkin concerning the XMRV study:
     
    Enid likes this.
  8. Bob

    Bob

    Messages:
    8,591
    Likes:
    11,519
    South of England
    The quality and type of patient selection is essential in all studies.
    But the type of research we have been discussing could, in theory, supersede the current diagnostic criteria by creating new, biomedically based criteria.
    For example, if we could use a blood test to assign patients to subgroups of 'CFS' or 'ME', then we wouldn't need the current diagnostic criteria, which are based on signs and symptoms.
    All the biomarker studies are ultimately working towards a diagnosis based on a blood test (or similar tests on other tissues).
     
  9. SOC

    SOC Senior Member

    Messages:
    5,362
    Likes:
    6,424
    USA
    I see. The idea is to group according to symptom severity. I guess I was confused because I would fall into the "eclectic" category and haven't fully internalized that many patients are strongly in one symptom grouping. Although with symptomatic treatments affecting many symptoms it's hard to know (and I don't want to know) what my illness would look like without them.

    The chronic infections that we are battling may group us, but I think it's more that immune problems are at the root, and which chronic infection we have may be more related to exposure than anything specific to the patient.

    I suppose the bottom line is - we won't know whether there are clear groupings that would suggest different illnesses unless we look for them :)
     
    heapsreal likes this.
  10. SOC

    SOC Senior Member

    Messages:
    5,362
    Likes:
    6,424
    USA
    [my bolding]

    I wonder what the bones of contention were. There must have been somebody hanging onto lesser criteria rather than using the ICC. The CDC folks, maybe?

    I also wonder what the "widely used, symptom-based definitions of CFS" are? Just because they're widely used doesn't make them good definitions. It seems stupid to use a ridiculously broad definition like Reeves or Oxford. So they probably used Fukuda and.... what?
     
  11. Ember

    Ember Senior Member

    Messages:
    1,728
    Likes:
    1,782
    I think you missed my point, Bob. To find biomarkers, you need research, and the criteria are key to that research. “The key to maximizing the outcomes of these tests is the criteria of the patients selected, according to Dr. Lipkin.”
     
  12. Sing

    Sing Senior Member

    Messages:
    1,310
    Likes:
    430
    New England
    I never had careful, thorough workups and testing done. Medicine on the cheap where I live, and no experts in ME/CFS.
     
  13. Bob

    Bob

    Messages:
    8,591
    Likes:
    11,519
    South of England
    I understood what you meant Ember.
    We were discussing other complex types of biomarker studies, such as proteomics, and genetics. (Not what Lipkin is doing.)
    If you take a heterogeneous cohort of CFS patients, then it is possible that such a study might be able to divide the patients up into homogeneous cohorts based on, for example, up-regulation of genes, or a type of protein abnormality.
    However, I am talking about theory. In practice, it's best to use diagnostic criteria that are as selective as possible.
    But even Lipkin could use his pathogen results to sub-divide his selection of patients.
     
  14. floydguy

    floydguy Senior Member

    Messages:
    650
    Likes:
    238
    So you don't really know? That's the problem we have: there really isn't much confirmation for so much of what goes on with us.
     
    Sing likes this.
  15. Ember

    Ember Senior Member

    Messages:
    1,728
    Likes:
    1,782
    Can you clarify further? The title of the article is "Dr. Ian Lipkin and Dr. Mady Hornig, Use Deep Sequencing and Proteomics to Hunt CFS Viruses." If Dr. Lipkin's proteomics uses new technology, and the criteria are key here, then why are they not key to the complex types of biomarker studies (such as proteomics and genetics) that you were discussing?
     
  16. alex3619

    alex3619 Senior Member

    Messages:
    7,501
    Likes:
    12,000
    Logan, Queensland, Australia
    It's not just semantics, floydguy. Subgrouping is indeed critical, but my point is it can be done under any definition; it's just harder under very heterogeneous conditions - particularly if you have no idea of criteria for subgrouping to start with. I see it as an iterative process. By using less heterogeneous groups, the first few loops of the iteration can be bypassed, at least for that kind of patient. However, all this requires funding and expertise. We don't have enough funding, enough expertise or a sufficiently wide expertise generally to really go down this path properly - what we are doing instead, for the most part, is focussing on existing potential biomarkers. Every now and again, though, a new study comes along that adds some more, like the spinal proteomic study.

    Focussing on specific candidate biomarkers has the advantage that we can get more bang for the buck, but has the disadvantage that we might be missing out on a lot. It's all a muddle really; if we had multiple, sizeable, well-funded research institutes around the world, this would be much easier.

    There may well be biomarkers that are universal, I cannot rule that out. They will however never be diagnostic because they will not apply only to specific diseases. Such a marker might be confirmatory in conjunction with other markers, but it has no great use in diagnostics. It might however be used to assess recovery, presuming of course that such a marker exists.

    Bye, Alex
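That iterative loop - cluster, evaluate, adjust, repeat - can be illustrated concretely. This is a toy sketch under invented assumptions: synthetic two-marker data, a from-scratch k-means, and a hand-rolled silhouette score standing in for real cluster-validation machinery. The analyst starts with no idea how many subgroups exist and lets the data decide:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical cohort: the analyst does NOT know how many subgroups exist;
# the two markers and the group structure are invented for illustration.
blob_a = rng.normal([0.0, 0.0], 0.3, size=(25, 2))
blob_b = rng.normal([4.0, 4.0], 0.3, size=(25, 2))
cohort = np.vstack([blob_a, blob_b])

def kmeans(x, k, iters=50):
    """Bare-bones k-means with deterministic, evenly spaced seeding."""
    centroids = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.linalg.norm(x[:, None] - centroids[None], axis=2).argmin(axis=1)
        for i in range(k):
            if (labels == i).any():  # skip empty clusters
                centroids[i] = x[labels == i].mean(axis=0)
    return labels

def mean_silhouette(x, labels):
    """Average silhouette width: near 1 for compact, well-separated
    clusters; near 0 when the split is essentially arbitrary."""
    scores = []
    for i, point in enumerate(x):
        same = x[labels == labels[i]]
        if len(same) == 1:  # singleton cluster scores 0 by convention
            scores.append(0.0)
            continue
        a = np.linalg.norm(same - point, axis=1).sum() / (len(same) - 1)
        b = min(np.linalg.norm(x[labels == c] - point, axis=1).mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# The iteration: try several candidate subgroup counts, keep the best supported.
best_k = max(range(2, 6), key=lambda k: mean_silhouette(cohort, kmeans(cohort, k)))
print("best-supported number of subgroups:", best_k)
```

With a cleaner cohort the loop converges in fewer passes; with a noisier one it needs more data and more compute, which is the funding point above.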
     
  17. alex3619

    alex3619 Senior Member

    Messages:
    7,501
    Likes:
    12,000
    Logan, Queensland, Australia
    Hi Ember, it's key only because it's the easiest way to do it - faster and cheaper. Since funding is always an issue, that is a problem. However, I do suspect that protein marker clustering is one solution that will appear in a few years. They have to map proteins to genes to symptoms etcetera and then validate them. They then have to make comparisons with other groups, such as MS, RA and depression. At that point it will become a robust set of markers.

    Specific focussed studies are good reductionist science. They are also a good way to miss out on serendipitous discoveries. Who knows what they miss by doing this? It's a yin-yang thing - the more focussed you are, the cheaper and easier things become; the less focussed you are, the bigger the chance of accidental discovery of something that might otherwise be missed.

    The big issues are funding and its cousin, resources. We don't have enough institutes (and equipment), enough researchers, or enough funding. It's these that force us to be more selective in most cases. That is why Lipkin makes those kinds of comments. It increases cost, time and resource use to filter out noise from a mixed cohort.

    Where broader subgrouping/clustering approaches will get their day is that such techniques are generalizable. The technology could come from cancer or MS or Ebola research - and then we simply re-apply it to our favourite condition. This is about using computing power to substitute for thousands of hours of lab study. This kind of thing turned genome mapping from a century-long prospect into a decade's work. It used to take a very long time to map even a single gene or protein. Now we have machines that do the job so fast we can process very large numbers in a single day. The same thing could happen for proteomics, especially when the technology to interact with different databases is developed. I don't know when this will be, but it seems likely to happen at some point.

    I am of course biased, given that I have an artificial intelligence background. ;)

    Bye, Alex
     
  18. Ember

    Ember Senior Member

    Messages:
    1,728
    Likes:
    1,782
    I don't believe, Alex, that Dr. Lipkin makes his comments concerning selection criteria because “we don't have enough institutes (and equipment), enough researchers, and enough funding.” I believe he makes them because he understands the value of good research design. “We have the best tools to do the work and the funding required to pursue it,” said center director Dr. Ian Lipkin, “and will bring the very best possible minds to the problem irrespective of institution. We will be taking a broad, open-minded approach to the problem” (http://trialx.com/curetalk/2011/11/...equencing-and-proteomics-to-hunt-cfs-viruses/).

    I don't understand what you mean by “broader subgrouping/clustering approaches.” I hope you're not suggesting that we use cluster analysis in the context of broadly-inclusive case definitions as the preferred subgrouping methodology.
     
  19. alex3619

    alex3619 Senior Member

    Messages:
    7,501
    Likes:
    12,000
    Logan, Queensland, Australia
    Hi Ember, I suspect you are arguing at a tangent to my argument, and hence misunderstanding the purpose. Broad definitions have value in an appropriate research setting. The way they are being used is however not appropriate.

    If Lipkin had unlimited funds and unlimited resources, I think he would indeed be doing what I suggest, in addition to many other lines of enquiry. When you are talking of good research design, there is an implicit issue of cost efficiency, where cost covers not just money but the other things I mentioned. It's also restrictive in what it can uncover.

    As I said before, this is a yin-yang argument. We focus on highly selective (reductionist) research because it's cost-effective. My argument, though, is that it's not always outcome-effective. With enough resources you can do so much more. In time I think this will become the norm, as the approach I am suggesting lends itself to extensive automation.

    When developed enough, it could be the very best approach for many problems, and far more cost-effective. It's just that we have to develop the tools, which requires resources. The human genome project was uber-expensive. This will be more expensive than that. That doesn't mean it can't be pursued - we are in fact doing that also. It just means it's an expensive road to take, even if it holds more promise for more of us in the long run.

    The current use of clustering in proteomics for patients with similar diseases, like CFS and post-Lyme, is a case in point. If you restrict the patient cohort too much, you may lose many of the pathways, stifling research for another generation. If you loosen the definition too much, you introduce too much noise and increase the resources required to solve the problem. It's a question of balancing the two criteria. I suspect we will focus on becoming more reductionistic, but if that happens we may well miss a big piece of the answer.

    Bye, Alex
     
  20. Enid

    Enid Senior Member

    Messages:
    3,309
    Likes:
    840
    UK
    Interesting discussion going on here - having had brain MRIs (high spots), a lumbar puncture and nerve conduction tests (the most painful), followed by bafflement and the slamming shut of my files, it is indeed good to see these respected researchers pursuing and identifying these abnormalities. Personally I still think viral - degrees of damage done, or ongoing. I'd be difficult to "group", though, since the whole range of ME symptoms (or clustering prevalence) seemed to appear at different stages. Just one of the many problems for researchers, no doubt.
     
