
Kim McCleary: Open letter to CFS community

fresh_eyes

happy to be here
Messages
900
Location
mountains of north carolina
...would they at least consider writing about this issue in the abstract now (online or in print), to prepare people for the effect flawed criteria might have on any ME/CFS study (not just XMRV studies)?

I think you mentioned in another thread that you're a PWC, so please take your time answering all of this if you're not feeling too flash.

Good idea, catch. We also have a lot of very well-informed and articulate writers around here. Perhaps we could draw something up that addresses the issue of flawed CFS criteria *in general*, and get CAA's stamp of approval on it? We don't expect you to do all the work, j-spot! (Do you mind if I call you j-spot? :D)
 

jspotila

Senior Member
Messages
1,099
Answering a few questions

My understanding is that some information about the patient cohort in the WPI study has been made available, but not all.

if the CAA are reluctant to single out particular studies, would they at least consider writing about this issue in the abstract now (online or in print), to prepare people for the effect flawed criteria might have on any ME/CFS study (not just XMRV studies)?

The Association and others have been commenting on this since October. Both Dr. Peterson and Dr. Coffin told the CFSAC that patient characteristics were critically important. Dr. Peterson said, "Along with validated assays we have to validate the clinical group that we are looking at, and I can't make that point strongly enough, especially in publishing validity studies. If I don't understand what patients they were looking at, it will really mean little to me as a clinician." Dr. Coffin said, "I completely agree with Dr. Peterson that validating assays with a very well defined set of samples from a well defined set of patients, where you know everything you really need to know about them, and then using that as a benchmark for the quality of your assay sensitivity, specificity and so on (also well defined sets of controls, I might point out) is really critical to being able to do these studies in a meaningful way."

The Association will continue behind the scenes and in front of them to advocate for XMRV and other CFS research to be done in the highest quality way. We were sharply criticized for asking these questions in October!

Do you mind if I call you j-spot?

Hee! Nope! I've been called worse! ;)
 
Messages
33
I was under the impression that the cohort characteristics were available, just not published in the original article, and that WPI has since provided those to CAA and other researchers. Does anybody know for sure? Jspotila?

The online supporting materials are available here by clicking on "Supporting Online Material" on the left side of the page under Article Views:

http://www.sciencemag.org/cgi/content/abstract/1179052?ijkey=m3wzKT4yJqEyk&keytype=ref&siteid=sci

The link downloads an 18-page PDF file containing the materials and methods, cohort characteristics, etc.
 

ABarker

Guest
XMRV cohort

In the "Supporting Online Material" for the study, there is only 1 paragraph that mentions patient samples. Even then, it is vague. It does not include when the samples were collected, nor where. It states that all patient samples meet the 1994 Fukuda Criteria, as well as the 2003 Canadian Consensus, but how sick were the 101 patients? From what I've gathered, they were the sickest of the sick, kind of cherry-picked.

If this is the case, I sure hope that WPI shares their sampling method with any replication effort, or the 67% rate of XMRV infection could diminish greatly.
 

Katie

Guest
In the "Supporting Online Material" for the study, there is only 1 paragraph that mentions patient samples. Even then, it is vague. It does not include when the samples were collected, nor where. It states that all patient samples meet the 1994 Fukuda Criteria, as well as the 2003 Canadian Consensus, but how sick were the 101 patients? From what I've gathered, they were the sickest of the sick, kind of cherry-picked.

If this is the case, I sure hope that WPI shares their sampling method with any replication effort, or the 67% rate of XMRV infection could diminish greatly.

From what I understand from the CFSAC, they first looked for XMRV in the sickest of the sick because the chances of finding it are better when there's a lot to find. Then, when they found it, they saw reason to undertake the study which was published in Science. They took samples from patients of varying health who still met both criteria and then matched them to healthy samples by zip code.

The "sickest of the sick" quote relates only to confirming their initial theory that the XMRV retrovirus might be involved, as far as I understand.

I'm sure the WPI will share everything they should; after all, it is in their interests for this to be soundly replicated. It's hard, but we just have to sit and keep ourselves amused until the replication studies start to flow. We'll get good news and bad, it's a big thing to unravel. I for one am not finding it easy to be the patient patient, but luckily this place is keeping me busy :D
 
Messages
5,238
Location
Sofa, UK
In the "Supporting Online Material" for the study, there is only 1 paragraph that mentions patient samples. Even then, it is vague. It does not include when the samples were collected, nor where. It states that all patient samples meet the 1994 Fukuda Criteria, as well as the 2003 Canadian Consensus, but how sick were the 101 patients? From what I've gathered, they were the sickest of the sick, kind of cherry-picked.

If this is the case, I sure hope that WPI shares their sampling method with any replication effort, or the 67% rate of XMRV infection could diminish greatly.

There are some good threads back in the history (in the XMRV section) that go into all these questions in considerable depth; we have delved and mined for every last clue we can find. They'd be well worth looking for if you're worried about this issue.

Katie's response is spot on as always, but there's so much more to be said to put your mind at rest on this front; more than I have time to say now. It's normal practice for a first study to look at the sickest patients in order to get clear results. However, the information that's trickled out about the cohorts suggests that wasn't particularly strongly the case here. I don't believe they cherry-picked extremely sick cases at all; yes, they used clear and strong criteria for CFS and may conceivably have had some other biases, but they have indicated that they believe, from all their research in the round (including lots of unpublished findings), that this finding will hold good for most of us.

The 67% is just the published figure; the 95% and 98% that have been found since the study, when culture and other tests are included, paint a different picture. If virtually all of the sickest patients have XMRV, that actually suggests that many of the less sick are likely to have it too. Since the milder cases have similar symptom patterns, it's highly unlikely that there are two different conditions here, one affecting all the very sick people and another affecting all those who are less sick. Much more likely to be a spectrum IMO.

Questions about the sampling method being shared with other researchers are also missing the point, I think, because both the researchers and campaigners on this board are continuing to encourage replication studies to use the right selection criteria - the problem isn't a failure to share that information, but rather the selection criteria and the dubious nature of the cohorts that studies such as the CDC's and the major UK studies are going to insist on using (check the threads on replication studies for loads more on this). This dispute has a long and murky history, and if there's any blame to be applied, it's not with the WPI for failing now to explain - yet again - to everyone else how they should do their studies in order to get the right cohorts. As they've said many times, the best advice would be to study people with CFS - that would be a good start - and while those researchers aren't prepared to do that, any reluctance to help them further would be quite understandable. Too much help risks giving the powers that be more ammunition to protect their long-established interests. That's perhaps why the WPI are playing things carefully.

Some of the studies will replicate the findings and others will fail to various degrees. The truth of it is that the ones that fail will just be measuring and showing up exactly how bad their own selection criteria and CFS definitions are! These are the people who bear so much responsibility for the failings in the management of and research into CFS, so when one of those people comes along asking detailed questions about how to do the science properly, it wouldn't be surprising if the help they receive from the WPI depended on the history of that person's attitudes to CFS. In most cases, a researcher raising these questions and asking for help with replication studies, or for more explanation of the details of the study (details which weren't and aren't required for publication by Science), deserves the answer: "I refer the honourable gentleman to the reply I have been trying to make him listen to for the last 20 years". If they really want to know, they can look it up! All they have to do is think of all the people they've been ignoring for decades, and go back and actually read what they said.
 
Messages
5,238
Location
Sofa, UK
I do not know if CDC is using the Canadian criteria. We are doing everything we can to address this on the front end of studies, but again - if a researcher does not want to take the Association's advice, we can't control that.

Being from the UK, in a way it's not for me to comment on these issues, and I know next to nothing about the CAA so I have no prejudice about it. But based on the views of some people I highly respect on these boards, what many have said they would like the CAA to do is apply some really, really firm pressure on this issue. If the CDC refuse to say what criteria they are going to use, then almost any cards available should be played, and if and when it emerges they are using dubious criteria, lots of us would love to see the CAA publicly condemn that and say in advance that if the study fails, it will be no surprise, and explain why. One forum member described this approach as 'inoculation'. The CAA is suddenly in a very powerful position, and I'd have thought the CDC would really fear what the CAA could say about them right now; surely that's a card that can be played?

There are lots of reasons for this. The fear that a let-down could have demoralising effects on the patient community and set the scene for a political battle royale, and the importance of defining one's position solidly so that one isn't accused of making up one's mind in retrospect, are two that spring to mind.

Based purely on reading some of the threads about the CAA, the impression I get is that the CAA does a lot of good work, but (like the big UK organisations) its size and breadth probably constrain it from speaking out as firmly as many would like. This appears to be the time to adopt a very bullish mentality with regard to the CDC in particular. It's still possible to take a very clear and hard line while at the same time protecting one's reputation should things not turn out the way one expects.

As I say, maybe it's none of my business, and I've heard enough to believe that the CAA is working really hard on this, so I'm only wanting to support those efforts, not to criticise, and I'll butt out now. But as with the US elections, while it's seemingly not for me to say, it does appear that it's likely to have an enormous effect on my life, so I feel justified in having my say when I have the chance.
 

Dr. Yes

Shame on You
Messages
868
If the CDC refuse to say what criteria they are going to use, then almost any cards available should be played, and if and when it emerges they are using dubious criteria, lots of us would love to see the CAA publicly condemn that and say in advance that if the study fails, it will be no surprise, and explain why. One forum member described this approach as 'inoculation'. The CAA is suddenly in a very powerful position, and I'd have thought the CDC would really fear what the CAA could say about them right now; surely that's a card that can be played?

Thanks Mark, for strongly making a point that I've been too exhausted to keep making myself. I think your impressions are pretty accurate! And I also think you folks in the UK have, if it's possible, an even greater vested interest in American advocacy and proper replication of the XMRV study, etc... After all, the "CFS" mess you have is a legacy of the CDC; unfortunately you have the additional problem of a powerful psychiatric lobby hijacking your entire NHS. They have allies here in Reeves and others at the CDC (or vice versa), so I think it's good common sense for us to work together on this issue!
 

jspotila

Senior Member
Messages
1,099
Importance of cohort selection, etc.

I think this article excerpt is relevant to what we've been discussing about the cohort selection and whether all the necessary information is public:

Joseph DeRisi, a molecular biologist at the University of California, San Francisco, who co-discovered XMRV, was not satisfied with details in the paper: He wanted to know more about the viral load in CFS patients and how the demographics of the control group matched that of CFS patients.
<snip>
At the least, a double-blind study where a third-party lab searches for XMRV in CFS patients and in controls is vital, he says.

That's from Sam Kean's article "Chronic Fatigue and Prostate Cancer: A Retroviral Connection?" in the October 9, 2009 edition of Science. The full text was posted on Co-Cure here: http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0912C&L=CO-CURE&P=R971&I=-3
 

Advocate

Senior Member
Messages
529
Location
U.S.A.
I think this article excerpt is relevant to what we've been discussing about the cohort selection and whether all the necessary information is public...from Sam Kean's article "Chronic Fatigue and Prostate Cancer: A Retroviral Connection?" in the October 9, 2009 edition of Science. The full text was posted on Co-Cure here: http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0912C&L=CO-CURE&P=R971&I=-3

In the article you quoted, Sam Kean wrote that Joe DeRisi wanted "to know more about the viral load in CFS patients."

It's not surprising that Joe DeRisi wanted to know more about the viral load in CFS patients. That's what DeRisi does. I love Joe DeRisi, and I hope he gets to work on it! Like DeRisi, we'd all like to know more about the viral load in CFS patients. But that's not what the Mikovits paper set out to do.

The Sam Kean article you quoted also said DeRisi wanted to know "how the demographics of the control group matched that of CFS patients."

Oh, c'mon. There's a ton of information about the cohorts. But some people won't be satisfied until they get the names and addresses of every person who contributed samples for the study.

Kean also wrote "...that the [Mikovits et al] group is not claiming to have identified a cause." Yes, this is true. However, DeRisi seemed to ignore this truth when he said that "many claims have been made in the past" to find a cause; thus he subtly criticized the Mikovits paper by innuendo.

If people want to attack the paper, I wish they wouldn't do it by innuendo. It was the innuendo by Suzanne Vernon and others that was so upsetting to many people.
 

fresh_eyes

happy to be here
Messages
900
Location
mountains of north carolina
Seems like we still don't know how much info on the cohort is available (we know how much was in the original paper, but not how much more has been made available since). Perhaps we should ask Wanda Jones? She always seems able and willing to answer questions.

If someone else wants to write to her...I just sent her another question yesterday! :eek:

wanda.jones@hhs.gov
 

jspotila

Senior Member
Messages
1,099
Are there some other behind-the-scenes factors that might prevent a comment about the criteria being made, js? Dr Vernon is probably really busy and still trying to negotiate with other researchers, which might make this a delicate topic, but once she gets time and once it's clear that further negotiation will not work, could you perhaps raise the criteria issue with her?

Both Dr. Vernon and Dr. Judy Mikovits are members of DHHS's XMRV Working Group, but the discussions and information of that Group are confidential. The Association can't disclose the information learned from that source except as permitted in the confidentiality agreement.

The CDC's studies have apparently been rolled up into the Working Group that is using CFS samples collected by WPI. There are several other follow-up studies being done by academic research groups in the U.S. and other countries, but details of those studies (including how patients are being selected) are not being discussed publicly.

The last complicating factor is the embargo placed on any scientific paper before publication. Even if we had information about "Study X" that indicated the patient cohort or other methods were flawed, we could not speak out in advance of publication without violating that embargo.

As data is published, each study will have to be evaluated on its own merits, comparing the patient selection and laboratory methods used to those of the original study. Since no single method for detecting XMRV (in CFS or prostate cancer) is considered the validated gold standard, there is definitely a possibility of different results from different groups even without the cohort issue thrown into the mix.

But this is the nature of scientific research in general, not unique to XMRV or CFS. "Study X" is published, then "Study A" and "Study B" are published, and all the data/methods/results have to be compared and analyzed. Obviously, the hardest part for all of us is the WAITING.
 

CBS

Senior Member
Messages
1,522
The last complicating factor is the embargo placed on any scientific paper before publication. Even if we had information about "Study X" that indicated the patient cohort or other methods were flawed, we could not speak out in advance of publication without violating that embargo.

What about a general public statement by the CAA as to the importance of cohort and methods and that any study's results will only be given as much weight as the scientific rigor of the study warrants?

Shane
 

fresh_eyes

happy to be here
Messages
900
Location
mountains of north carolina
What about a general public statement by the CAA as to the importance of cohort and methods and that any study's results will only be given as much weight as the scientific rigor of the study warrants?

And even mentioning that in the past, biomedical CFS research has been...I don't know how to say it exactly, but hobbled by lack of cohesiveness re cohorts and methods, esp. by including many people who do not meet the Canadian Consensus definition.

JS, my sense is that a general statement like this, forcefully worded, could do a lot to ease the community's concerns re CAA. My $.02.
 

Samuel

Senior Member
Messages
221
... a general statement like this, forcefully worded, could do a lot to ease the community's concerns re CAA ...

I don't follow the CAA's page. Are you saying that the CAA hasn't inoculated yet? If they haven't, could somebody elaborate on what circumstances prevented them from doing it on October 8, when reporters could have picked it up?
 

fresh_eyes

happy to be here
Messages
900
Location
mountains of north carolina
I don't follow the CAA's page. Are you saying that the CAA hasn't inoculated yet? If they haven't, could somebody elaborate on what circumstances prevented them from doing it on October 8, when reporters could have picked it up?

That's a complicated question, samuel, and I guess it's a matter of opinion. You might look over this thread and the "Time for the Big Question" thread.
 

CBS

Senior Member
Messages
1,522
What is 'Patient Advocacy?'

Given the history of poorly designed and executed research in CFS, and the needless delays, suffering and DEATH caused by the confusion wrought by bad design passed off as science, it is my very sincere belief that ANY patient advocacy group must state, as a matter of principle and a show of allegiance with patients, clear and concise standards by which any research intended to illuminate any aspect of this disease will be judged.
 

jspotila

Senior Member
Messages
1,099
Thanks for elaborating on the confidentiality issues js. I agree with Shane and fresh_eyes, it sounds like the ideal solution is a general statement which doesn't single out any unpublished studies or disclose information that has not already been made public.

A general statement would allow Dr Vernon to explain what a true replication attempt would involve (Canadian/Fukuda criteria, immune abnormalities, same method) and what a poor replication attempt would look like ('empiric', Oxford or other broad criteria, using detection methods that have not been proven sensitive to XMRV).

Would you be able to bring this request to the board or to Dr Vernon herself, js?

Dr. Vernon made this statement on October 15th, and it was further revised after the CFSAC meeting to include information from Dr. Peterson's presentation:
This Science paper tells us that XMRV plays a possible role in CFS pathogenesis in these CFS patients. How much can we generalize these findings to other CFS patient populations? That answer will depend on the results of replication studies.

The design of replication studies should include CFS patients who are similar to those reported in the Science study. Dr. Peterson reported at the Oct. 29 CFS Advisory Committee meeting that the 101 patients in the study were drawn from CFS practitioners in Nev., Calif., Ore., Fla., N.C., and N.Y. They ranged in age from 19 to 75 with a mean age of 55. Sixty-seven percent were female. The controls were age, sex and zip-code matched and were not contacts of the patients studied, nor were they lab workers.

Methods used in independent replication studies should also follow the WPI protocol and use similar reagents. We are actively working with several independent research groups in the U.S. and other countries to expedite these studies.
(emphasis added)

The Association is working very hard, both through the HHS Working Group and independently, to facilitate true replication studies that use the same cohort characteristics and methods that were used by WPI. My understanding is that the HHS study will include samples from WPI.