• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


XMRV CFS UK study #II


Gerwyn

Guest
Yes, if the control testing was not as rigorous, they would need to make the statistical adjustment I worked out above. That would require a change to their results and invalidate their conclusions. However, if controls were run blind in the same exact batches as CFS cases, and multiple draws from each control were used, then you are correct and no adjustments would be needed as the basis for measure would be by the person and not by the tubes run. The only reason I brought this up is that controls are often from a convenient sample, which usually would include only one draw, and in this case there were many more controls than CFS cases, suggesting they were a convenient sample and may have indeed been handled differently.

Another way to adjust for this (if controls were handled single-pass) would be to take only the first measure from each CFS case, which would reduce the positives by 16X, so the real positive rate for CFS samples would be 67%/16 = 4.2%. Not much higher than the controls.

As for making an adjustment without equal numbers of tubes from controls, you would have to know the rate of false negatives in the controls and what causes false results. To find that out they would have to take a group of controls and test them also 16 times just like the CFS cases. Without knowing the false result risk there is no way to make a fair adjustment and it is imperative that controls be run blind alongside the CFS samples. IF that was not done, they have to make an adjustment like I have described. Would be nice to know if that was done.
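The first-measure adjustment described above can be sketched numerically. This is a minimal illustration of the arithmetic as the post states it, assuming positives scale linearly with the number of passes (an assumption that is questioned later in the thread):

```python
# Naive "first measure only" adjustment: if CFS samples were tested up
# to 16 times while controls got one pass each, divide the CFS positive
# rate by the number of passes. Illustrative only; assumes positives
# scale linearly with passes, as the post does.

cfs_rate = 0.67   # reported CFS positive rate
passes = 16       # maximum test passes per CFS sample

adjusted = cfs_rate / passes
print(round(adjusted * 100, 1))  # 4.2 (percent)
```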

They would certainly have to use the same methods in the controls. I would be very surprised if they did not; I don't think that would have survived the Science peer-review process. Their methods are what you would expect if they were looking for a virus mostly in its latent phase. The methodology in the IC and Groom studies was juvenile at best. The control methodology in the Groom study was totally bizarre. The detection methodology in the IC study went against all the published protocols for recovering XMRV, particularly in the transfection stage.
 

natasa778

Senior Member
Messages
1,774
And what's the deal with the positives in the SGUL controls being collected at one blood donation site? Kerr hints at a local outbreak, then doubts it.

Could that be down to a different blood collection technique used at SGUL? As Gerwyn pointed out, the exact procedure for drawing and processing blood is crucial for virology purposes. Maybe they somehow did the correct thing at SGUL and messed up with other samples, drawn elsewhere?
 

Gerwyn

Guest
Indeed, if the testing applied to the CFS patients were different to, and 16 times more powerful than, that applied to the controls, then this would explain the WPI results, leaving only the mystery of why other researchers round the world are unable to detect any trace of this near-ubiquitous retrovirus.

But do you not think that either one of the WPI team, Dr Coffin, the Science review panel, or somebody at some point would have noticed this flaw?

Is it not frankly inconceivable that the WPI results were based on research in which they looked 16 times as hard for XMRV in CFS patients as they did for controls? Can we really imagine that the people involved in this study could fail to notice this point?

Alternatively, perhaps we should imagine that the WPI did realise that their whole approach was fundamentally dishonest, but they managed to swindle a bunch of scientists into failing to notice that their healthy controls were tested in a completely different way to the CFS patients?

Right from the outset, the WPI reaction to questions like this has had a hint of irritation about it, and from the word go they have reacted to such questions by pointing out that their study was reviewed and published by Science. I think they've been quite restrained, actually, in dealing with the more insulting suggestions. If it were me, sarcasm mode would have kicked in at some point, and I would have asked in reply: "Do you really think your theory is so sophisticated that neither we nor the Science editors spotted that error? For some reason, you seem to think we're idiots!".

It's true that there has been an information deficit in certain areas, and perhaps we the patient community have done less than we could have to ask these sorts of questions of the WPI directly. All that detail does need teasing out, if only to head off the obvious potential flaws in the study that so many have alluded to in the absence of a clear description of why they are not possible. I suspect the WPI were hoping that the fact of publication in Science, and all the work they had to do to achieve that, meant that they were now exempt from impertinent questions about whether they had made some utterly basic and childish error. I suspect they were hoping everyone would focus on their findings and their implications, rather than delving into details of their methodology that Science magazine had already pored over for the best part of a year.

But frankly, I do still find these sorts of theories about the WPI study to be offensive and insulting to the WPI researchers. To include a methodological flaw of that magnitude would require either the utmost incompetence or the most profound dishonesty - and we would rightly tear them to shreds if it turned out the whole thing was no more than a giant swindle.

No offence intended in the above to you personally Kurt, your perspective is very valuable on this forum, and your sober notes of caution are extremely helpful for a hothead like me! But I still can't understand how you consider possibilities like this one to be realistic? Wouldn't the WPI have to be either phenomenally stupid or incredibly evil to produce such twisted science? I mean, the scenario you suggest is way beyond any flaw we've ever found in Simon Wessely's work, and with over 500 papers to his name in about 20 years, he's been churning out the studies at the rate of one a fortnight for the last 2 decades!

Agreed in full on all parts, and the very fact that they had to use multiple amplification steps makes the UK methodology frankly silly.
 

Mithriel

Senior Member
Messages
690
Location
Scotland
I haven't gone over the papers again as I don't feel up to it today, but this was my impression.

In the original study, 67% of patients and 3%(?) of controls were positive by PCR.

Once these figures were established, some of the positive samples from both were given extra tests where they looked at the virus in more detail. One patient sample had the entire genome of the virus worked out. They used culture at this stage.

I am sure the virus from the control did not have env, but I would have to check.

After the paper was presented, they did more testing on the patients that were negative. It was at this stage that the four samples came into it. Some patients were negative until they had been tested four separate times. One patient did not test positive by any of this extra testing; all the rest had the virus. It may be that she meant they found patients positive by culture but they had to run PCR on four separate samples until PCR matched culture.

They did not have to test the controls in such detail. The difference between patients and controls was established by the original tests and the studies by the blood people and the CDC are looking at healthy people.

This paper was examined and refereed for months. The science was done properly. The Imperial College study was very rushed and I am dubious about it, in part because of the unscientific claims that were made; the Kerr study did proper science, though the design was not ideal.

I can't understand the antipathy to the WPI when everything they did was held up to so much scrutiny by so many well-respected scientists.

Mithriel
 

starryeyes

Senior Member
Messages
1,558
Location
Bay Area, California
Professor Malcolm Hooper's opinion of the UK research

We didn't get explicit permission to quote Prof Malcolm Hooper, so I won't quote him directly,
but you can now read his quote on an MEActionUK webpage, here:
http://www.meactionuk.org.uk/xmrv-research-and-the-ramsay-research-fund.htm

(search for the word 'skulduggery', and you'll find his quote)

Thanks Bob. That pretty much sums it up.

I see the question coming up here a lot about whether or not the WPI used the Canadian Definition on all of their test subjects for XMRV. Of course they did! They know that if they didn't they couldn't say for sure that the patient has ME/CFS.
 

kurt

Senior Member
Messages
1,186
Location
USA
After the paper was presented, they did more testing on the patients that were negative. It was at this stage that the four samples came into it. Some patients were negative until they had been tested four separate times. One patient did not test positive by any of this extra testing; all the rest had the virus. It may be that she meant they found patients positive by culture but they had to run PCR on four separate samples until PCR matched culture.

They did not have to test the controls in such detail. The difference between patients and controls was established by the original tests and the studies by the blood people and the CDC are looking at healthy people.

This paper was examined and refereed for months. The science was done properly. The Imperial College study was very rushed and I am dubious about it, in part because of the unscientific claims that were made; the Kerr study did proper science, though the design was not ideal.

I can't understand the antipathy to the WPI when everything they did was held up to so much scrutiny by so many well-respected scientists.
Mithriel

Mithriel, if this is how WPI worked, and the multiple tests were only for those who initially were negative by PCR, then I agree, there was no requirement to test the controls in detail. However, this is the exact point that is not clear to me. Where did Mikovits state that multiple tests were only conducted on those negative by initial PCR? Her statements seem unclear.

Your statement that the paper was examined for several months is correct; however, a paper is not the same thing as a study, and papers can pass review even when the study has an underlying flaw. The reviewers do not go to the lab and witness how a study is conducted; they rely on what the authors write in the report. If something important is left out of the report, the reviewers may not catch it. If multiple tests were run on some samples and not reported, even as a simple oversight and not with the intention of scientific fraud, that would not be caught in review.

As for antipathy to WPI, I have no dislike of WPI, challenging someone's research is an important part of the scientific process and is not personal. In fact, you are dubious of the IC study, is that antipathy or simply scientific objectivity? I am just trying to be fair and make certain we hold WPI to the same standard of scrutiny as every other XMRV study, and that is not antipathy. Some statements have been made that do not add up, I am trying to sort that out. There is no malice in scientific questioning, research must be validated through many means including questioning methods.

They would certainly have to use the same methods in the controls. I would be very surprised if they did not; I don't think that would have survived the Science peer-review process. Their methods are what you would expect if they were looking for a virus mostly in its latent phase. The methodology in the IC and Groom studies was juvenile at best. The control methodology in the Groom study was totally bizarre. The detection methodology in the IC study went against all the published protocols for recovering XMRV, particularly in the transfection stage.

Gerwyn, As I just said above, this is only true if the methods were all stated, if some part of a method is not stated in the paper no review will catch any related problems. A peer review is restricted to the paper itself and is not a review of a research program.

I completely agree with the importance of treating patient and control samples equally and with the advantages of blinded testing. In general this factor would be less than 16 given that samples from the same person would test positive more than once and that such repeated tests would not be independent of each other.

Say 8% of control volunteers have the virus and the test detects the virus 50% of the time. Then the first time I test, I get 4% positives. This leaves 4% undetected infected volunteers. The second time I test, half of these, that is 2% overall, turn out positive. If we do this 16 times we end up with F = 4% + 2% + 1% + 0.5% + … = something very close to, but less than, 8%.
The final 8% may be further reduced if tests for the same person are correlated, as having a first false negative increases the chances of a second one.

So if the test is very insensitive, the dominant factor is the number of repetitions. If the test is reasonably sensitive, say on the order of 20-30%, the dominant factor is going to be the real infection percentage, so that with 16 repetitions you get very close to the real number.

BTW, I think it would be interesting to have a category for healthy people in the test polls to get a feel for sensitivity and specificity when results become available.
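The compounding described in the paragraphs above can be sketched in a few lines. This is a minimal illustration using the post's own numbers (8% prevalence, 50% per-pass sensitivity, independent passes); these figures are illustrative, not from any of the studies:

```python
# Cumulative detection rate over repeated, independent test passes.
# Prevalence 8% and per-pass sensitivity 50% are the illustrative
# numbers from the post above.

def cumulative_detection(prevalence, sensitivity, passes):
    # Probability a truly infected sample is caught at least once:
    # 1 minus the probability of missing it on every pass.
    p_caught = 1 - (1 - sensitivity) ** passes
    return prevalence * p_caught

for n in (1, 2, 4, 16):
    print(n, round(cumulative_detection(0.08, 0.5, n), 4))
```

With 16 passes the result approaches, but never exceeds, the true 8% prevalence, matching the geometric series F = 4% + 2% + 1% + … above.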

Raul, some interesting comments. I believe your analysis is right if the false negatives are only due to poor test sensitivity or ultra-low viral levels. However, if false negatives are due to reagent issues then the risk is additive. And I don't believe the cause of false negatives is known right now, although WPI has made the assumption that only low viral count is involved. That has to be proven with validation studies, which so far have failed.

Indeed, if the testing applied to the CFS patients were different to, and 16 times more powerful than, that applied to the controls, then this would explain the WPI results, leaving only the mystery of why other researchers round the world are unable to detect any trace of this near-ubiquitous retrovirus.

But do you not think that either one of the WPI team, Dr Coffin, the Science review panel, or somebody at some point would have noticed this flaw?

Is it not frankly inconceivable that the WPI results were based on research in which they looked 16 times as hard for XMRV in CFS patients as they did for controls? Can we really imagine that the people involved in this study could fail to notice this point?

Alternatively, perhaps we should imagine that the WPI did realise that their whole approach was fundamentally dishonest, but they managed to swindle a bunch of scientists into failing to notice that their healthy controls were tested in a completely different way to the CFS patients?

....

But frankly, I do still find these sorts of theories about the WPI study to be offensive and insulting to the WPI researchers. To include a methodological flaw of that magnitude would require either the utmost incompetence or the most profound dishonesty - and we would rightly tear them to shreds if it turned out the whole thing was no more than a giant swindle.

Mark, I generally would not consider this type of flaw to be realistic in a study as well organized and carefully reviewed as the Science study. That is why I did a real double-take when Mikovits revealed that there were extra testing steps not included in the study write-up. Note that she revealed this in the context of explaining why ALL of the UK study results failed; therefore she considers this multiple running of tests to be critical in finding XMRV. After hearing that I started working through the implications of some of her statements for the WPI study and realized it just was not clear which samples had been tested multiple times. If only the failed PCR samples were tested multiple times then that probably settles my question about re-testing. Until we have a definite response to that, this issue remains, and it is a potentially serious problem.

Also, questioning basic assumptions and methods is not an insult in science. Sometimes rather large errors are made unintentionally in the rush to write up and publish studies, and when that happens they must be revealed as early in the process as possible. And an error is not automatically fraud or swindle; please do not attribute that type of accusation to my comments, I am saying nothing of the sort. I respect what WPI is trying to accomplish and just want some technical questions answered in clear language that we can all understand, that is all. Considering that the WPI study has NOT been validated yet, and also considering how critical WPI has been of the UK studies (and others here on this forum have been too), don't you think it is only fair to hold WPI to the same standard of scrutiny as any other XMRV study?
 

starryeyes

Senior Member
Messages
1,558
Location
Bay Area, California
Dr. Judy Mikovits gave me permission to post this:

tee: May I please post your answer on the Phoenix Rising forum?

Dr. Judy: Yes you may. Thank you for your support

I asked:

"If WPI had to take up to four blood samples to find XMRV in their CFS group, did they also look that hard for XMRV in the controls? Did they also take four blood samples from each control?" - KFG

Also, when you looked harder to find XMRV in the CFS patients after your initial 68% positive finding, did you retest the Healthy Control cohort the same way?

Dr. Judy's answer (bolding mine):


"The issue is DNA from unstimulated whole blood and PBMC...If the methods used in the science paper are used as they are done at the WPI and VIPDx then you will detect XMRV infection ALL the time. We have always tested healthy controls side by side as is indicated in every figure of the Science paper. The issue with all other studies to date is that they only do PCR on banked DNA and then are not using enough cellular DNA. No one has even attempted to go beyond figure 1 of our paper!

To clarify..if we looked at banked DNA from unstimulated white blood cells, we would often have to look at multiple samples from different dates from the same patient..XMRV is simply not incorporated into the DNA at high enough copy numbers to detect unless the cells are dividing, allowing the virus to multiply..that is retrovirology 101.."
 

julius

Watchoo lookin' at?
Messages
785
Location
Canada
Too bad she didn't really answer the second question. That's the one I really want to know.

But good to know that the first is answered.
 

Gerwyn

Guest
Mithriel, if this is how WPI worked, and the multiple tests were only for those who initially were negative by PCR, then I agree, there was no requirement to test the controls in detail. However, this is the exact point that is not clear to me. Where did Mikovits state that multiple tests were only conducted on those negative by initial PCR? Her statements seem unclear.

Your statement that the paper was examined for several months is correct; however, a paper is not the same thing as a study, and papers can pass review even when the study has an underlying flaw. The reviewers do not go to the lab and witness how a study is conducted; they rely on what the authors write in the report. If something important is left out of the report, the reviewers may not catch it. If multiple tests were run on some samples and not reported, even as a simple oversight and not with the intention of scientific fraud, that would not be caught in review.

As for antipathy to WPI, I have no dislike of WPI, challenging someone's research is an important part of the scientific process and is not personal. In fact, you are dubious of the IC study, is that antipathy or simply scientific objectivity? I am just trying to be fair and make certain we hold WPI to the same standard of scrutiny as every other XMRV study, and that is not antipathy. Some statements have been made that do not add up, I am trying to sort that out. There is no malice in scientific questioning, research must be validated through many means including questioning methods.



Gerwyn, As I just said above, this is only true if the methods were all stated, if some part of a method is not stated in the paper no review will catch any related problems. A peer review is restricted to the paper itself and is not a review of a research program.



Raul, some interesting comments. I believe your analysis is right if the false negatives are only due to poor test sensitivity or ultra-low viral levels. However, if false negatives are due to reagent issues then the risk is additive. And I don't believe the cause of false negatives is known right now, although WPI has made the assumption that only low viral count is involved. That has to be proven with validation studies, which so far have failed.



Mark, I generally would not consider this type of flaw to be realistic in a study as well organized and carefully reviewed as the Science study. That is why I did a real double-take when Mikovits revealed that there were extra testing steps not included in the study write-up. Note that she revealed this in the context of explaining why ALL of the UK study results failed; therefore she considers this multiple running of tests to be critical in finding XMRV. After hearing that I started working through the implications of some of her statements for the WPI study and realized it just was not clear which samples had been tested multiple times. If only the failed PCR samples were tested multiple times then that probably settles my question about re-testing. Until we have a definite response to that, this issue remains, and it is a potentially serious problem.

Also, questioning basic assumptions and methods is not an insult in science. Sometimes rather large errors are made unintentionally in the rush to write up and publish studies, and when that happens they must be revealed as early in the process as possible. And an error is not automatically fraud or swindle; please do not attribute that type of accusation to my comments, I am saying nothing of the sort. I respect what WPI is trying to accomplish and just want some technical questions answered in clear language that we can all understand, that is all. Considering that the WPI study has NOT been validated yet, and also considering how critical WPI has been of the UK studies (and others here on this forum have been too), don't you think it is only fair to hold WPI to the same standard of scrutiny as any other XMRV study?

Kurt, a peer review does look at methodology; if there is a hole or ambiguity, a peer review worth its salt will pick it out. It can't be done in 24 hours, of course. Science reviewers repeatedly asked for more info until they were satisfied. I fail to see why you are pursuing this avenue at the expense of commenting on the far more obvious and published flaws in the two British studies. Any deviations from protocols in the WPI approach are hypothetical; the deviations from known established science in the British studies are absolute fact.
 

kurt

Senior Member
Messages
1,186
Location
USA
Dr. Judy Mikovits gave me permission to post this:
tee: May I please post your answer on the Phoenix Rising forum?
Dr. Judy: Yes you may. Thank you for your support

Dr. Judy's answer (bolding mine):


"The issue is DNA from unstimulated whole blood and PBMC...If the methods used in the science paper are used as they are done at the WPI and VIPDx then you will detect XMRV infection ALL the time. We have always tested healthy controls side by side as is indicated in every figure of the Science paper. The issue with all other studies to date is that they only do PCR on banked DNA and then are not using enough cellular DNA. No one has even attempted to go beyond figure 1 of our paper!

To clarify..if we looked at banked DNA from unstimulated white blood cells, we would often have to look at multiple samples from different dates from the same patient..XMRV is simply not incorporated into the DNA at high enough copy numbers to detect unless the cells are dividing, allowing the virus to multiply..that is retrovirology 101.."

Thanks for posting this. Judy Mikovits's comments do help make clear that a control was run alongside every CFS sample test. She did not state this directly, but I assume from her comment that if a CFS sample was run several times, so was that control. Anyway, I hope that is what she meant.

But what if a different sample was used for a CFS patient, a second or third sample, did they have a different sample from that same control? Or did they just test a different control? That remains unanswered, and I do not know how relevant that point is, but it should be addressed.

And she says that if you use banked samples without stimulating DNA replication, you may have to look many times (earlier I believe she suggested four samples, with up to four tests each). So live or recent samples should be easier to test, with more live virus to find. So which were used in the Science study for the 101 samples and also for the controls: banked or live samples, or a mixture? That also is not answered.

And she says that when they looked at banked samples without stimulating the WBCs, they had to run multiple samples from multiple dates from the same patient, due to low viral counts. I assume from this that the study reported in Science included only stimulated WBCs then and this was a single pass on those samples?

I still would like to know the details of ALL of the samples used in the Science PCR study, both CFS samples and controls. How many times was each run, which samples were stimulated and which were not, and were the SAME controls tested alongside any multiple tests of the CFS samples?
 

kurt

Senior Member
Messages
1,186
Location
USA
Kurt, a peer review does look at methodology; if there is a hole or ambiguity, a peer review worth its salt will pick it out. It can't be done in 24 hours, of course. Science reviewers repeatedly asked for more info until they were satisfied. I fail to see why you are pursuing this avenue at the expense of commenting on the far more obvious and published flaws in the two British studies. Any deviations from protocols in the WPI approach are hypothetical; the deviations from known established science in the British studies are absolute fact.

Gerwyn, I am not speaking of a problem with the methodology, but rather a possible problem with the reporting. Reviewers cannot evaluate information they are not given.

As for the British studies, there is not much to evaluate; they did not find anything using their methodology. There were no 'flaws' in their studies other than not knowing how hard XMRV was to find, which was not entirely their fault. There were apparently some rather important details in the Science article that were cryptically written and that those labs did not appreciate. Anyway, regardless of the reasons for their failure to find XMRV, their conclusions were incorrect: statements that there was no XMRV in ME patients in the UK would require far more extensive epidemiological testing and use of a validated methodology. At this point in the science they should not be conducting epidemiological research, as there is no standardized XMRV test yet. So they were simply running a validation attempt (regardless of what they called it, that is what it was, certainly not a replication study) and they tried to draw epidemiological conclusions from that. The validation attempt was an acceptable part of the scientific process in a new finding like this; you have to run various different tests that prove that what was found by WPI was what they said it was. But the epidemiological claims showed that there was some confusion about the purpose of running the study.

And neither can WPI claim with any credibility that XMRV is a factor in CFS at this point, when there are no validation studies that have confirmed their results and no confirmed causal model. We are still quite far removed from any scientific consensus about XMRV in CFS, so the claims being made should be 'we found or did not find XMRV using this methodology', and the methodology should be made explicitly clear. We also should not be getting new information from WPI about how to find XMRV; that all should have been in the Science report.
 

Cort

Phoenix Rising Founder
I asked Dr. Mikovits in an email: My thinking was that you used culture techniques with the original Science cohort - and these enabled you to pick up the virus with a single pass or did it take multiple attempts with the Science cohort as well?

In some patients we could detect it every time and by the single round QPCR shown in figure 1, but in a much larger % we needed several samples at different times (reflecting the current immune control of viral expression in that patient) and/or culture in order to detect the virus....

We could look for antibodies in controls but not repeated sampling but could look for antibody in matched sera; the % of antibody positive was 4% giving us some comfort...

She also said:

it certainly is not present in Maryland controls where repeated sampling of the same donors is possible.

Nobody knew that it took them three or four times to find the virus in the CFS patients until they stated that recently.

On the other hand, the fact that they were using culturing suggested that viral loads were very low - still making it perplexing to me why researchers wouldn't at least follow that part of the study.

She also said

...we don't lose much sleep worrying about replication, we are certain that once someone tries to do it as in Science a lot will find it.


My apologies about stating the Groom study was a UK study; I read somewhere that the MRV was akin to the NIH.
 

kurt

Senior Member
Messages
1,186
Location
USA
Thanks Cort for passing that along. And this explains why they are focused on the antibody studies right now, as they are a more reliable match to the stats in the Science paper. But her response confirms some of my worries: they did run more tests on some of the CFS samples than on the controls. Yes, the 4% positive rate on the antibody tests is definitely supportive of their overall statistics, but this would still basically invalidate the PCR part of the finding, at least if I am understanding this issue correctly. And the antibody results, while compelling, are less specific; they can be due to cross-reactivity.

Mikovits: ...we don't lose much sleep worrying about replication, we are certain that once someone tries to do it as in Science a lot will find it.

This shows a serious disregard for the fact that the lack of detail in the Science article may be partly responsible for the failure of validation studies. They SHOULD lose some sleep over this, because valuable resources are being wasted. They may be somebody else's resources and not WPI's, but think about this: if a bunch of labs spend a lot of money chasing down XMRV and nothing works out for them, how likely are they to invest in the next hypothesis that comes around for CFS? A lack of attention by WPI to this serious replication/validation problem could hurt our cause in the long run.
 

starryeyes

Senior Member
Messages
1,558
Location
Bay Area, California
She also said: "it certainly is not present in Maryland controls where repeated sampling of the same donors is possible." - thank you for sharing her email to you Cort. This shows that Dr. Judy is positive XMRV is not present in at least these Controls.

I get the feeling that no matter how open and transparent Dr. Judy is and no matter how many questions she answers, some of you here will never believe the WPI's Study is completely correct until replication is proven and that's okay, time will tell.
 
Messages
16
Raul, some interesting comments. I believe your analysis is right if the false negatives are only due to poor test sensitivity or ultra-low viral levels. However, if false negatives are due to reagent issues, then the risk is additive. And I don't believe the cause of false negatives is known right now, although WPI has made the assumption that only low viral count is involved. That has to be proven with validation studies, which so far have failed.

Kurt, if by additive risk you mean that you can calculate the total detection rate by multiplying the detection rate of a single pass by the number of passes, I think that is not the case regardless of the nature of the false negatives. If it were, you would get total detection rates above 100% in some instances. Say the detection rate of a single pass is 50%. After 3 passes you would get 3 x 50% = 150%.
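Neither poster writes out the alternative formula, but the point can be made concrete. A minimal sketch, assuming each pass is independent with the same per-pass detection rate (the function names are mine, not from the thread, and the independence assumption is exactly what the thread is debating):

```python
# Cumulative detection probability across repeated PCR passes,
# under the assumption that passes are independent with equal rate p.

def cumulative_detection(p: float, n: int) -> float:
    """Probability of at least one positive in n independent passes."""
    return 1.0 - (1.0 - p) ** n

def per_pass_rate(cumulative: float, n: int) -> float:
    """Invert the model: per-pass rate implied by a cumulative rate over n passes."""
    return 1.0 - (1.0 - cumulative) ** (1.0 / n)

# Raul's counterexample: naively adding a 50% rate over 3 passes gives
# 150%, which is impossible; the independence model stays bounded below 1.
print(cumulative_detection(0.50, 3))           # 0.875

# Applied to the thread's numbers: a 67% cumulative rate over 16 passes
# implies roughly a 6.7% per-pass rate under this model, versus the
# 67%/16 = 4.2% from simple division used earlier in the thread.
print(round(per_pass_rate(0.67, 16), 3))       # 0.067
```

The reason naive multiplication (or division) fails is that once a sample has already tested positive, further passes cannot add more probability; the complement rule 1 - (1 - p)^n accounts for that and can never exceed 100%.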
 
G

Gerwyn

Guest
Gerwyn, I am not speaking of a problem with the methodology, but rather a possible problem with the reporting. Reviewers cannot evaluate information they are not given.

As for the British studies, there is not much to evaluate; they did not find anything using their methodology. There were no 'flaws' in their studies other than not knowing how hard XMRV was to find, which was not entirely their fault. There were apparently some rather important details in the Science article that were cryptically written and that those labs did not appreciate. Anyway, regardless of the reasons for their failure to find XMRV, their conclusions were incorrect: statements that there was no XMRV in ME patients in the UK would require far more extensive epidemiological testing and use of a validated methodology. At this point in the science they should not be conducting epidemiological research, as there is no standardized XMRV test yet. So they were simply running a validation attempt (regardless of what they called it, that is what it was; certainly not a replication study), and they tried to draw epidemiological conclusions from that. The validation attempt was an acceptable part of the scientific process in a new finding like this: you have to run various different tests to prove that what WPI found was what they said it was. But the epidemiological claims showed that there was some confusion in the purpose of running the study.

And neither can WPI, with any credibility, claim that XMRV is a factor in CFS at this point, when there are no validation studies that have confirmed their results and no confirmed causal model. We are still quite far removed from any scientific consensus about XMRV in CFS, so the claims being made should be 'we found or did not find XMRV using this methodology', and the methodology should be made explicitly clear. We also should not be getting new information from WPI about how to find XMRV; that all should have been in the Science report.

Kurt, the British methodology is full of flaws; I'm surprised you can't see them. Lack of the amplification procedures normally used to detect a latent virus, but one glaring one is diagnostic criteria; another, laced controls; harvesting way too soon; using non-permissive cells. And this is just the simple stuff: peer review done in 24 hours, etc.; looking in the wrong place despite published evidence to the contrary. I could go on. Judy's analysis is spot on from a microbiologist's viewpoint. The British methods are frankly baffling considering what they were purportedly trying to achieve; they made assumption after assumption without testing any of them.
 

Sasha

Fine, thank you
Messages
17,863
Location
UK
My apologies about stating the Groom study was a UK study; I read somewhere that the MRV was akin to the NIH.

I think you're right that it's a UK study, Cort - the MRC (I think that's what you meant) is the Medical Research Council, a research body funded entirely by the UK government.

ETA: Oh, just seen ME Agenda's post that you were responding to - I think the issue was not about it being a UK study but a UK govt funded study. It got its funding partly from govt (the MRC) and partly from charities, two (the Wellcome & the Cunningham) being general medical charities and the other an ME charity.
 

oerganix

Senior Member
Messages
611
KFG said: "We were told that as a result of re-examining their cohort over Christmas, they had discovered that it actually included patients from all over the world, contrary to what was published in "Science". "

You called that an error. What I heard her say in the first public presentation was that when the cohort was unblinded (not the same as 're-examined') in December she was surprised to find patients from UK, Ireland, Germany and Australia. What they said in Science was that patients were from US clinics. What she didn't know at that time was that those US clinics had treated some patients from other countries. This seems like something that, maybe, Dr Peterson should have caught before publication, but maybe those patients were not from his clinic either. Not an error, just something she found out after the study was complete, for good reasons. And what difference does it make, anyway? We've since had 2 groups of UK patients tested by WPI methods and about 50% have tested positive, and the testing methods are still being refined. As stated in another thread, a European study will be published in March and they are finding XMRV in CFS patients. Even as a non-scientist, I found the idea that there is 0 XMRV in UK to be ludicrous.

Regarding other points made about the WPI study by others: the fact that neither UK study seems to have asked for any help or collaboration from WPI, although WPI has repeatedly offered it, seems to me to be a contributing factor, and something those other studies could have done if they were serious about actually validating the original study. Their behavior doesn't seem very cooperative to me. They seemed to have started from a Reeves/Wessely point of bias: "nope, no XMRV here", before they did any research. So, IF any money was wasted, it is not WPI's fault that the other studies did not use all the resources available to them.

If WPI started from a bias that there must be a retrovirus or virus in these patients, I think that's justified by the clinical findings of doctors who have been treating patients for decades. Dr Bell said of XMRV: "It just fits." Of course, we're all afraid of finding out this is not THE virus or retrovirus, but since WPI was the first group in a long time that was actually looking, many of us want to give them the benefit of the doubt, at least for now. Which is not to say that Kurt and others are not justified in examining WPI's work in detail. Obviously, good people and good scientists can have different opinions of the research and different impressions of the people doing it.

KFG, I don't think anyone here is going to accuse you of asking stupid questions, so fire away.