After the paper was presented, they did more testing on the patients that were negative. It was at this stage that the four samples came into it. Some patients were negative until they had been tested four separate times. One patient did not test positive in any of this extra testing; all the rest had the virus. It may be that she meant they found patients positive by culture but had to run PCR on four separate samples before PCR matched culture.
They did not have to test the controls in such detail. The difference between patients and controls was established by the original tests and the studies by the blood people and the CDC are looking at healthy people.
This paper was examined and refereed for months. The science was done properly. The Imperial College study was very rushed and I am dubious about it, in part because of the unscientific claims that were made. The Kerr study did proper science, but the design was not ideal.
I can't understand the antipathy to the WPI when everything they did was held up to so much scrutiny by so many well-respected scientists.
Mithriel
Mithriel, if this is how WPI worked, and the multiple tests were only for those who were initially negative by PCR, then I agree there was no requirement to test the controls in detail. However, this is the exact point that is not clear to me. Where did Mikovits state that multiple tests were only conducted on those negative by initial PCR? Her statements seem unclear.
Your statement that the paper was examined for several months is correct; however, a paper is not analogous to a study, and papers can pass review even when the study has an underlying flaw. The reviewers do not go to the lab and witness how a study is conducted; they rely on what the authors write in the report. If something important is left out of the report, the reviewers may not catch it. If multiple tests were run on some samples and not reported, even as a simple oversight and not with the intention of scientific fraud, that would not be caught in review.
As for antipathy to WPI, I have no dislike of WPI; challenging someone's research is an important part of the scientific process and is not personal. In fact, you are dubious of the IC study — is that antipathy or simply scientific objectivity? I am just trying to be fair and make certain we hold WPI to the same standard of scrutiny as every other XMRV study, and that is not antipathy. Some statements have been made that do not add up, and I am trying to sort that out. There is no malice in scientific questioning; research must be validated through many means, including questioning methods.
They would certainly have to use the same methods in the controls. I would be very surprised if they did not; I don't think that would have survived the peer review process. Their methods are what you would expect if they were looking for a virus mostly in its latent phase. The methodology in the IC and Groom studies was juvenile at best. The control methodology in the Groom study was totally bizarre. The detection methodology in the IC study went against all the published protocols for recovering XMRV, particularly in the transfection stage.
Gerwyn, as I just said above, this is only true if the methods were all stated; if some part of a method is not stated in the paper, no review will catch any related problems. A peer review is restricted to the paper itself and is not a review of a research program.
I completely agree with the importance of treating patient and control samples equally and with the advantages of blinded testing. In general this factor would be less than 16 given that samples from the same person would test positive more than once and that such repeated tests would not be independent of each other.
Say 8% of control volunteers have the virus and the test detects the virus 50% of the time. Then the first time I test, I get 4% positives. This leaves 4% of volunteers infected but undetected. The second time I test, half of these, that is 2% overall, turn out positive. If we do this 16 times we end up with F = 4% + 2% + 1% + 0.5% + ... = something very close to, but less than, 8%.
The final 8% may be further reduced if tests for the same person are correlated, as having a first false negative increases the chances of a second one.
So if the test is very insensitive, the dominant factor is the number of repetitions. If the test is reasonably sensitive, say on the order of 20-30%, the dominant factor is going to be the real infection percentage, so that with 16 repetitions you get very close to the real number.
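The arithmetic above can be sketched numerically. Assuming independent repeat tests (an idealization, as noted, since repeats on the same person are likely correlated) with per-test sensitivity s on a cohort with true prevalence p, the fraction of the whole cohort detected at least once after n repeats is p(1 - (1 - s)^n):

```python
def cumulative_detection(prevalence: float, sensitivity: float, repeats: int) -> float:
    """Fraction of the whole cohort that tests positive at least once
    after `repeats` independent tests, each of which detects the virus
    with probability `sensitivity` in a truly infected sample."""
    return prevalence * (1.0 - (1.0 - sensitivity) ** repeats)

# The worked example above: 8% true prevalence, 50% per-test sensitivity.
for n in (1, 2, 4, 16):
    print(n, cumulative_detection(0.08, 0.50, n))
```

With these numbers, one repeat finds 4%, two find 6%, and sixteen find just under the full 8% — matching the geometric series in the post. Swapping in a lower sensitivity (say 5%) shows the other regime, where the repetition count dominates the result.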
BTW, I think it would be interesting to have a category for healthy people in the test polls to get a feel for sensitivity and specificity when results become available.
Raul, some interesting comments. I believe your analysis is right if the false negatives are only due to poor test sensitivity or ultra-low viral levels. However, if false negatives are due to reagent issues, then the risk is additive. And the cause of the false negatives is not known right now, although WPI has made the assumption that only low viral count is involved. That has to be proven with validation studies, which so far have failed.
Indeed, if the testing applied to the CFS patients were different to, and 16 times more powerful than, that applied to the controls, then this would explain the WPI results, leaving only the mystery of why other researchers round the world are unable to detect any trace of this near-ubiquitous retrovirus.
But do you not think that either one of the WPI team, Dr Coffin, the Science review panel, or somebody at some point would have noticed this flaw?
Is it not frankly inconceivable that the WPI results were based on research in which they looked 16 times as hard for XMRV in CFS patients as they did for controls? Can we really imagine that the people involved in this study could fail to notice this point?
Alternatively, perhaps we should imagine that the WPI did realise that their whole approach was fundamentally dishonest, but they managed to swindle a bunch of scientists into failing to notice that their healthy controls were tested in a completely different way to the CFS patients?
....
But frankly, I do still find these sorts of theories about the WPI study to be offensive and insulting to the WPI researchers. To include a methodological flaw of that magnitude would require either the utmost incompetence or the most profound dishonesty - and we would rightly tear them to shreds if it turned out the whole thing was no more than a giant swindle.
Mark, I generally would not consider this type of flaw realistic in a study as well organized and carefully reviewed as the Science study. That is why I did a real double-take when Mikovits revealed that there were extra testing steps not included in the study write-up. Note that she revealed this in the context of explaining why ALL of the UK study results failed; therefore she considers this multiple running of tests to be critical in finding XMRV. After hearing that, I started working through the implications of some of her statements for the WPI study and realized it just was not clear which samples had been tested multiple times. If only the failed PCR samples were tested multiple times, then that probably settles my question about re-testing. Until we have a definite answer, this issue remains, and it is a potentially serious problem.
Also, questioning basic assumptions and methods is not an insult in science. Sometimes rather large errors are made unintentionally in the rush to write up and publish studies, and when that happens they must be revealed as early in the process as possible. An error is not automatically fraud or swindle; please do not attribute that type of accusation to my comments, as I am saying nothing of the sort. I respect what WPI is trying to accomplish and just want some technical questions answered in clear language that we can all understand, that is all. Considering that the WPI study has NOT been validated yet, and considering how critical WPI, and others here on this forum, have been of the UK studies, don't you think it is only fair to hold WPI to the same standard of scrutiny as any other XMRV study?