It's illogical to refer to the macaque study, because the initial positive papers were able to find XMRV in the blood years after the initial 'infection'. It's implausible to assume that the initial positive papers were able to find it in human blood, but that subsequent attempts by the same authors weren't because it had all of a sudden disappeared. That would be possible if patients had only just been exposed to the virus in the initial papers, but they hadn't: many had already had ME/CFS for years. Lo and Alter tested the same samples more than a decade later and were still able to find pMLVs. I think Mikovits stated the same for XMRV.
In no way am I assuming that the initial positive papers found it in human blood but then it disappeared. That would obviously be a ridiculous argument and not supported by the evidence. My point about the macaque study is that even if XMRV was not present in the blood, and the original detection was due to contamination, failing to find it in blood does not mean it can't be present in the body.
But even if we follow this hypothetical argument, it's still impossible that XMRV is a human pathogen.
Again with that word "impossible". This is what I'm objecting to. It is "impossible that XMRV is a human pathogen". Why, then, does one of the three new papers state this conclusion:
Overall, the replication-competent retrovirus XMRV, present in a high number of laboratories, is able to infect human lymphoid tissue and produce infectious viruses, even though they were unable to establish a new infection in fresh tonsillar tissue. Hereby, laboratories working with cell lines producing XMRV should have knowledge and understanding of the potential biological biohazardous risks of this virus.
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0037415
Why do researchers still worry about "potential biological biohazardous risks" if it's "impossible that XMRV is a human pathogen"?
It may be difficult to conclude this from ME/CFS papers, because they haven't studied many different tissues (although Natelson did an XMRV spinal fluid study and didn't find anything). However, in prostate cancer (PC) studies, researchers can't find XMRV in prostate tissue anymore. According to your macaque argument they should find it, because prostate tissue is the only place XMRV can be present if it causes PC. The only possible explanation left is contamination. Very simple.
Even if it isn't present in prostate tissue, and even if it isn't found in spinal fluid in ME/CFS patients, it is still not impossible for XMRV to be pathogenic! Even if contamination takes place in experiments - which I think is a given - that does not make it impossible for the contaminant to be pathogenic.
Wait a second... I'm afraid we're arguing two different things. I'm trying to make the point that XMRV is not a human pathogen, simply because it's not circulating in humans. I believe you're trying to say that it may still be pathogenic even if it's not circulating in humans (i.e., it's able to infect human cells and may eventually spread from the laboratory to the human population). Please correct me if I'm misunderstanding you.
You're stating that XMRV is not a human pathogen, and concluding from the failure to detect it in human blood or prostate tissue that it is definitely not circulating in humans. I'm saying that neither claim is certain or proven. Just because you can't find your keys in your pocket, on your table, or under your sofa does not mean you can claim to have proven that they are not in your house, or that they don't unlock doors anyway.
"maybe the patient samples were handled more often"
I believe there is sufficient evidence to support that argument (bolding is mine). Mikovits wasn't able to distinguish controls from patients under real blinded conditions (the BWG study). She used her own lab and the same techniques as in the original Science paper, so the ONLY explanation is that, indeed, the samples were handled differently.
I have seen no evidence that the patient samples were handled more often than the controls in Lombardi et al, nor have I seen any evidence of such differential handling in Lo/Alter's study or in the prostate cancer studies. In order for this explanation to hold up, the patient samples must have been handled more often in all of these different studies, by many different researchers. Even if this were true, it would raise the question: "why would a good scientist do this, and how can we get scientists to stop making such basic errors?"
Claiming "that is the only explanation" and that this constitutes evidence is completely flawed reasoning, and that sort of argument trips us up all the time when we encounter possibilities that we had not thought of. "That is the only possible explanation" requires you to have constructed a logical exploration of every single possibility, and you have to prove that every single other possible explanation does not fit the facts. Yes, it's true that "once you have eliminated the impossible, what remains, however improbable, must be the truth" - but this argument rests on the assumption that you have considered every single possibility, however improbable.
There are other possible explanations for the facts you cite, however improbable. I am not claiming the following examples as plausible or saying I believe they are true, but they are illustrative of the logical fallacy of claiming something must be true because it's the only explanation you can think of.
Here are some alternative top-of-the-head scenarios in which the original results were valid, that fit the known facts:
- The BWG samples provided could have been flawed or mixed up in some way.
- The blinding/coding could have gone wrong - and why is that so much less plausible than the basic error of handling samples differently?
- There could have been cross-contamination between patient and control samples during the process of transferring them to the WPI.
- There could be some factor in the reagents in the tubes that hid or 'deactivated' any XMRV present, so that the only positive signals the WPI could find were from contamination.
- The WPI, under time pressure, might not have had time to carry out their work to the same standards as the original experiment.
- Some essential agent in their detection process may have degraded or been tampered with in the time between the two studies.
- There could be some factor essential to successful detection which they performed in the initial experiment and which the WPI weren't aware of.
- The samples could have been tampered with en route and the labelling mixed up.

All of these possibilities are improbable, and the last one wanders into the realm of conspiracy theory - but none of these scenarios is impossible.
And even assuming that the original experiments by Lombardi et al, Lo/Alter and the prostate cancer researchers were all fundamentally flawed in some way, that still does not mean that the flaw was "more frequent handling of the patient samples". Anomalous results like these have been reported for decades and consigned to the dustbin with a guess of "probably handled more often", but without evidence that this was the case, that is no more than a hypothesis, not even a theory. Maybe the patient samples had different handling - through the reagents in the tubes, the needles used to extract the blood, the length of time they were stored or transported, or some other obscure factor that none of the researchers thought to control for. Another theory that has been advanced for such anomalous results - the idea that researchers conduct many such trials and only publish the successful ones - can be dismissed here, because the reported p-values are far too low (the results far too significant) for publication bias to explain them (although this is a genuine major issue, and illustrates why all experiments should be required to register before they are conducted and to publish even if their results are unsuccessful). Or maybe these experiments were all fraudulent or self-deceiving; maybe somebody snuck in one night and put XMRV in the patient samples; maybe they all just lied... such things do happen.
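The file-drawer point can be made concrete with a rough back-of-the-envelope calculation. The p-values below are purely illustrative, not taken from the XMRV papers: the idea is simply that a chance result with probability p under the null hypothesis requires, on average, about 1/p unpublished null trials before one "successful" trial appears by luck.

```python
# Illustrative file-drawer arithmetic (hypothetical numbers, not figures
# from the XMRV literature): how many independent null trials would be
# needed, on average, before one false positive of a given p-value shows up?

def expected_trials_for_fluke(p_value: float) -> float:
    """Expected number of independent null trials per false positive at p_value."""
    return 1.0 / p_value

# A marginal result (p = 0.05) could plausibly be a file-drawer artifact:
print(expected_trials_for_fluke(0.05))   # roughly 20 trials - easy to imagine

# A highly significant result (illustrative p = 1e-8) could not:
print(expected_trials_for_fluke(1e-8))   # roughly 100 million trials
```

The asymmetry is the whole argument: modest significance is cheap to manufacture by selective publication, but extreme significance would require an absurd number of hidden failed trials.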
My aim here is not to pick one of the hypotheses from the above two lists; I am fairly confident that the true explanation is one that I have not thought of. I'm merely pointing out that it isn't satisfactory to say there is sufficient evidence to support the hypothesis that one set of samples was handled more often than the other, when in fact there is no such evidence. The most your argument could support is the case that there was something wrong with the methodology of all these studies - and many others before them - and I'm saying that if this sort of thing keeps going wrong, then it would be more than worthwhile to try to figure out exactly what this common methodological error is, so that it can be avoided in the future.
You can't just keep throwing anomalous scientific results into the bin marked "I don't know, something must have gone wrong, maybe we just imagined it" on the basis of arguments from exclusion... or they'll end up like CFS patients...