Which CFS definitions are the researchers doing these studies using? A good study depends on a well-defined patient group. Otherwise, what appears to be a "good" study may, with a mixture of different patient groups, turn out to be a "ruined" one.
Nielk, I am commenting on what Cort said, not on what the actual offer is. Cort said that "you put a dollar in for M.E./CFS research and the McGrath FF will match it with three more dollars", which is not correct.
Cort - thanks so much for highlighting this terrific deal! Thanks McGrath Family for caring about us!
I just donated :victory::victory::victory: (3 victories for the price of one)
Right now, it is up to us patients and friends and family members to bring on the change. Sadly, federal funding for ME/CFS research is ludicrously low. In Sweden, there is no budget at all. In the USA, the NIH, the world's largest medical research funder, spends a smidgen on ME/CFS. A normal NIH budget for comparable diseases is around $150 million per year. For example, around $130 million per year is directed at MS, while the figure for ME/CFS is $5 million per year. That is around 30 times less than it should be!
This means that in one year, MS researchers can get done what it takes ME/CFS researchers 30 years to accomplish. No wonder there are no biomarkers or effective treatments for ME/CFS. No wonder there is so little understanding of the underlying disease processes. In the past 25 years, biomedical ME/CFS research has come up with a number of interesting findings and launched several possible theories of the disease process, as well as potential subgroups. There has been no lack of promising leads and exciting possibilities. But all too often, the initial pilot studies have not been followed by larger studies or independent confirmation, due to lack of funding. Many leads have been left without proper follow-up. In the meantime, our lives go by. As a patient, I can't just sit and let this happen.
Regardless of the sophistication of any algorithm, any analysis is only going to be as good as the data put into it (GIGO). ME/CFS research is frequently dismissed for being poorly designed, too small, inconsistent, not replicated, etc., and it is frequently confounded by the various case definitions, which would make any 'Cochrane Review'-type sifting process pretty redundant, as 'gold standard' research will be few and far between (or, in the case of the PACE trial, would conform to the RCT gold standard but fail every test of external validity).

I wonder how they will ensure the quality of the research being considered? Accepting only Fukuda-criteria studies would exclude Oxford-defined studies and the like, but Jason's work suggests Fukuda cohorts will include a proportion of confounding patients. Is size of cohort or non-replication a deal breaker? Given the state of research funding we are lucky to see any studies regardless of size, and even less likely to see attempts to repeat and replicate.

Will they just throw the data at the program to see what emerges, or will it be a 'Bayesian'-type guided search that tests the degree to which the data fit a prior hypothesis? (A quick sketch of what that might look like follows below.) I'd be curious to see the details.

In the meantime, something similar has already been done when the CDC made available a collection of data on the Wichita cohort: a computer program that develops novel theories for CFS using existing research.
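To make that 'Bayesian guided search' question a little more concrete before turning to the CDC/CAMDA example: here is a minimal sketch, using a single hypothetical biomarker and entirely made-up numbers, of how one might ask how strongly data favour a prior hypothesis over a null. It uses a BIC-approximated Bayes factor as a generic illustration; it is not anything the project has actually announced.

```python
# A sketch of a hypothesis-driven ("Bayesian guided") test: how strongly do the data
# favour "patients differ from controls on this biomarker" (H1) over "no difference" (H0)?
# All values are invented; the biomarker and effect size are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
patients = rng.normal(loc=5.6, scale=1.0, size=40)   # hypothetical biomarker values
controls = rng.normal(loc=5.0, scale=1.0, size=40)

def gaussian_loglik(x, mu, sigma):
    # Log-likelihood of observations x under Normal(mu, sigma).
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2))

pooled = np.concatenate([patients, controls])
n = pooled.size

# H0: one common mean and SD (2 parameters).
ll0 = gaussian_loglik(pooled, pooled.mean(), pooled.std())
bic0 = 2 * np.log(n) - 2 * ll0

# H1: separate group means, shared SD (3 parameters).
resid = np.concatenate([patients - patients.mean(), controls - controls.mean()])
sigma1 = resid.std()
ll1 = gaussian_loglik(patients, patients.mean(), sigma1) + \
      gaussian_loglik(controls, controls.mean(), sigma1)
bic1 = 3 * np.log(n) - 2 * ll1

# BIC-approximated Bayes factor: >1 favours H1, <1 favours H0.
print(f"Approximate Bayes factor (H1 vs H0): {np.exp((bic0 - bic1) / 2):.1f}")
```

An unguided run would instead screen thousands of variables this way (or with something fancier) and then have to cope with the multiple comparisons, which is exactly where poor data quality and mixed case definitions bite hardest.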
One paper looked at biological pathways rather than single genes. Their conclusions (despite the Wichita cohort) were interesting:

Following the 2005 Cold Spring Harbor - Banbury Center CFS Computational Challenge (C3) Workshop, CDC provided data sets from the Wichita in-hospital clinical study to Duke University for use in the Sixth International Conference for the Critical Assessment of Microarray Data Analysis (CAMDA 2006). Duke University founded CAMDA to provide a forum to critically assess different techniques used in microarray data mining. CAMDA's aim is to establish the state-of-the-art in microarray data mining and to identify progress and highlight the direction for future effort. CAMDA utilizes a community-wide experiment approach, letting the scientific community analyze the same standard data sets. Researchers worldwide are invited to take the CAMDA challenge and those whose results are accepted are invited to present a 25 minute oral presentation. The 2006 CAMDA was the first to use a single common challenge data set, which contained all clinical, gene expression, SNP, and proteomics data from the Wichita clinical study.
http://www.liebertonline.com/doi/pdf/10.1089/cmb.2007.0041

Our method identified exactly one pathway as statistically significant: the ADP-ribosylation pathway.
Biologically, it is known that ADP-ribosylation is involved in DNA-repair, apoptosis and disease response (Oliver et al., 1999; Vispe et al., 2000). It has been hypothesized that oxidative stress might play a major role as possible cause for CFS. Recently, there are several studies available supporting this hypothesis (Jammes et al., 2005; Kennedy et al., 2000). Because oxidants and free radicals damage DNA, oxidative stress could affect the ADP-ribosylation pathway to initiate DNA-repair to counterbalance the destructive influence of oxidative stress. This gives a plausible involvement of the ADP-ribosylation pathway in the context of the aforementioned hypothesis.
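For anyone unfamiliar with pathway-level analysis, here is a minimal sketch of the general idea: a gene-set over-representation test using Fisher's exact test. The counts are invented and the linked paper's actual statistics may well differ; this is only meant to show what it means to test a pathway as a set rather than gene by gene.

```python
# A generic pathway over-representation test: is the pathway "enriched" among the
# genes flagged as differentially expressed? All counts below are invented, and the
# linked paper's actual method may differ; this just illustrates the set-level idea.
from scipy.stats import fisher_exact

total_genes = 8000       # genes measured on the array (hypothetical)
pathway_size = 30        # genes annotated to the pathway of interest
flagged_genes = 400      # genes flagged as differentially expressed
flagged_in_pathway = 8   # flagged genes that fall inside the pathway

# 2x2 table: rows = in pathway / not in pathway, columns = flagged / not flagged.
table = [
    [flagged_in_pathway, pathway_size - flagged_in_pathway],
    [flagged_genes - flagged_in_pathway,
     total_genes - pathway_size - (flagged_genes - flagged_in_pathway)],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4g}")
```

With 5% of all genes flagged, finding 8 of a 30-gene pathway flagged (about 27%) is the kind of over-representation such a test detects; run across hundreds of pathways, the resulting p-values would still need multiple-testing correction. That is the same spirit as the "exactly one pathway significant" result quoted above, although the paper's method may not be a simple Fisher test.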
Great funding initiative and an interesting research agenda, including this one:
http://www.liebertonline.com/doi/pdf/10.1089/cmb.2007.0041
Apologies for the digression.