• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

New Paper - Gammaretroviruses - Maureen Hanson, David Bell

Bob

Senior Member
An update - I looked at XMRV studies since Feb....

Three studies demonstrating how susceptible laboratory cultures are to XMRV contamination. Another finding demonstrating that the XMRV found in the WPI's studies came from the mice used to culture the prostate cancer cell line, a couple more studies unable to find XMRV in prostate cancer patients, and two more studies unable to find XMRV in immunocompromised patients (who tend to pick up all sorts of pathogens).

But Cort, what we've been discussing is that Hanson didn't find XMRV.
She doesn't know what she found, or where the sequences originate from, except that she says it isn't mouse contamination or 22Rv1.
Her sequences are closer to Lo's P-MLVs than to XMRV.
No one has yet been able to say where these sequences might be originating from.
Hanson doesn't say that her sequences are a result of contamination, in her paper.
I'm interested in what the source of the sequences is, as it doesn't appear to be either XMRV from the 22Rv1 cell line, or mouse DNA, or mouse ERVs.
 

Cort

Phoenix Rising Founder
No need to guess: I explicitly stated ME/CFS and prostate cancer in the quote you cited.


In ME/CFS patients, I think the following found more MLVs in patients than controls:
- Lombardi et al
- Lo/Alter
- Hanson's latest study
- A previous small-scale study (also by Hanson, I think?)
- Singh's original results (later found to be contamination, but originally higher levels in patients than controls I think)


In other studies:

- A German study into immune-compromised patients
- Silverman's original prostate cancer study
- some other prostate cancer studies (not sure how many, I think there were at least 3 PC studies in total)

The entire Lombardi paper has been retracted by the editors of Science, Lo/Alter have retracted their findings, and as I remember they were unable to find XMRV in the Blood Safety Research study. Singh was higher - now she's stated it's contamination. Why are these refuted studies being given weight? I guess you're suggesting that XMRV/MLVs actually were there and when they rechecked (using better tests, by the way) it disappeared... i.e. they were right first and then wrong later?

For me the most pertinent fact is that, matched up against those few studies that did find it, there are dozens of other studies, some from top labs, which have used better tests to find the virus and have been unable to do so. I just don't see any reason to hope that XMRV/MLVs are out there and infecting humans, in particular CFS patients. Too many good researchers have looked too many times for them for there to be any chance, in my opinion, that they have been missed.

So we have one example where it was the other way round. That brings the odds of this happening by chance down to the same as flipping a coin 10 times and getting tails just once. Still unlikely... I won't calculate it right now...

Put that odds argument to the test regarding the studies on XMRV --- 40-plus studies that have been unable to find it vs 1 study that was able to find it (Lombardi) and then two studies that were able to find MLVs... What is the statistical probability that those studies are wrong?

I know you find that compelling, but it's certainly not relevant to the point I'm making that lots and lots of studies found nothing at all. Singh did an excellent job in her paper of listing all the problems with the methodology of those negative studies:
http://forums.phoenixrising.me/index.php?threads/cleaning-up-after-xmrv.17517/page-2#post-267193

Specifically, she re-iterated comments made by members here: in particular, they generally only searched using PCR assays that rely on conservation of viral sequence; the limits of detection, reproducibility and precision of their assays were unknown; they didn't include enough negative controls; and none of them included positive samples from Lombardi et al. Most of them were looking specifically for the VP62 contaminant XMRV, so most of those results don't speak to the wider question of MLVs at all.

Yes, and then she went to great lengths to ensure that those problems were dealt with, and others did as well. Those arguments were valid early in the search but not later. Several researchers noted that each study would have to stand on the shoulders of the earlier ones and be better, and they were; the later studies were more sophisticated and comprehensive... That includes the double-blinded Blood Safety study - which Dr. Mikovits was a part of.

But regardless of that, no number of negative studies affects my point that of those studies that found a statistically significant difference, nearly all of them found more in patients than controls.

You may be right... The list of studies is not particularly long, and I suspect if I looked I could come up with studies suggesting that the ratio is closer than you believe. It would take some digging...
 

Cort

Phoenix Rising Founder
This part of the Hanson paper is interesting to me....

Whether there are unknown retroviruses that are inciting factors in CFS/ME remains unknown. The PCR primers that we and others have employed for screening for XMRV and MLV-like sequences will allow detection of only a subset of viruses related to MLV. These PCR assays would not have amplified sequences from common feline leukemia viruses or gibbon ape leukemia viruses, even though they also are in the gammaretrovirus family. In fact, Elfaitouri et al[29] have pointed out that most of the primer sets that have been used to study CFS/ME samples would not even detect all groups of MLVs.

It appears that there are more possibilities for MLVs, although the Elfaitouri study - which looked for them - nixed that idea to a large extent. I guess that the Lipkin and WPI and BSRI studies will tell us more about that... :)
 

currer

Senior Member
We need to know whether there are murine retroviruses causing disease in humans.
If there are, this will be a third human retrovirus and will revolutionise the field. I do not think the CFS studies are more important than the prostate cancer, primary biliary cirrhosis, leukaemia, or breast cancer studies - the question is: who (if anybody) is going to be first in establishing that MRVs can cause disease in humans?

The CFS/ME studies have become caught up in a much wider debate. The arbitrary focus on the one XMRV sequence is a deliberate ploy to delay any research which might prove such a connection.

Although I have always supported the MRV work, I think that there could be other causes of ME; for example, I think the immune dysfunction could equally well be caused by vaccine technology and the effect of adjuvants on the immune system.

Incidentally Singh changed assays before commencing her CFS study and has not retracted her prostate cancer paper. MRV infection of humans stands or falls equally with prostate cancer.

We need much more research to look at all these questions. However, today I have just been told that my nephew, aged four, has leukaemia (his father, my brother, has Parkinson's). My sister and I have ME. This makes me think that the disease associations that Judy Mikovits pulled together as being associated with MLVs could well be right!

If MRVs are causing human disease they are presenting differently as the generations succeed each other. And causing disease earlier perhaps. This is much bigger (potentially) than just ME.

Focusing narrowly on only one aspect, and ignoring the multiplicity of ways such a pathogen COULD present, will only reconfirm known beliefs, when we need the courage to make a paradigm shift. This is what Judy Mikovits is being punished for, and why some posters here prefer to stick to what is safe and conventionally known. But science is about constantly challenging the boundaries of the known.
 

jace

Off the fence
I'm sorry to hear how so many of your family are affected by illnesses where the causation is unknown, Currer. My family, too, has many members with such illnesses. The toll is breast cancer (two nieces and an aunt), ME (me and, it now seems, my youngest child), Parkinson's (an aunt), testicular cancer (a cousin), and leukaemia (another cousin, who died aged six). Those are the ones I know about.

What concerns me is the continual "the debate is over" "xmrv is dead" sort of statements, so eloquently described by Bob on another thread.
 

natasa778

Senior Member
What concerns me is the continual "the debate is over" "xmrv is dead" sort of statements, so eloquently described by Bob on another thread.

I second that. It is very hard not to wonder about the not-so-pure motivations of those who continually try to proclaim "no novel retroviruses in humans" and "we now know everything there is to know". Especially the motivations of those who come to patient forums solely to make such proclamations.
 

Mark

Senior Member
The previous "small-scale study" is in fact the very same study as this published Hanson study. Although the sample size was originally deemed too small for publication (and the study was therefore initially regarded as some sort of pilot study), instead of doing a new, larger study, a couple of samples were added to the existing batch.
Thanks for clarifying that. This wasn't clear to me from reading the paper, but it sounds right and it does reduce my count of positive ME/CFS studies by one.



And you thought wrong about Singh. She found "it" at the same levels in controls, as she did in patients. The relevant quote from the Singh paper: "...we found approximately 5% of our samples to be positive for products of the expected size, regardless of whether they were patients or healthy volunteers".

Singh was originally hopeful because she had found some positives and had not yet unblinded her results, which could explain why you would think this. However, in no way has Singh ever reported that she found the virus at a higher rate in ME/CFS patients.

That sounds right too; either confusion with the PC study or the original hopeful findings before unblinding may be why I recollected more positives in patients than in controls initially. Assuming that's right, that reduces my count of positive ME/CFS studies by one more.


Mark said:
That was months ago, the study was only just published: the conclusion does not say that the results were contamination and makes it clear that they consider the question still open



Again, it is the very same study. It's how these things work - you present these things at conferences before you publish, and you explain what you think happened in a less formal way than you would phrase it in a paper.

Although you (or anyone else) are certainly free to give your own interpretation of this paper, my guess is that it really adds nothing to the idea that these viruses are "out there", not least because this was already known data and, although previously unpublished, had already been taken into account by most people in the field. Lo et al. even mention this study/data in their retraction notice.


They were talking about the same study, but that was something like 9 months ago. They've only just published the paper. If you're right that they didn't mean what they said in the publication, and they speak informally at conferences and elsewhere in private saying what they really think, and what you say they said 9 months ago is a more reliable guide than what they published this week - and if you're right that this is "how these things work" - then I think you need look no further for an explanation of the confusion, disbelief, and even the conspiracy theories of patients and other lay-people when they read and interpret published results and are then told that the scientists have been chatting about it all behind closed doors and are actually saying something completely different.

Patients who go by the information available to them - the published data - will read Hanson's conclusions, and not much interpretation is required:

"Whether there are unknown retroviruses that are inciting factors in CFS/ME remains unknown."

"Less specific methods such as virus microarrays or high-throughput DNA sequencing are more suitable for detection of unknown agents that may be associated with disease states. Their application should be fruitful in identification of pathogens that may more frequently infect CFS/ME patients, either as a cause or consequence of the illness, and will be instrumental in verifying whether or not gammaretrovirus infections exist in humans and/or whether or not an unknown viral infection is associated with CFS/ME."

I interpret this as saying that the researchers consider that the question of whether gammaretrovirus infections exist in humans, and whether they or other viral infection may be associated with ME/CFS, remains unknown.

Combining this conclusion with a reading of all they have to say about the steps they took to try to detect contamination in their positive results, and the fact that they were unable to explain how any such contamination could have occurred (especially the lack of any correlation with factors like the dates when the separate batches arrived in the lab), I interpret that they are not concluding that their positive results were definitely contamination. Sensibly, in the absence of any evidence to confirm that hypothesis, they regard that question as unresolved and leave open the possibility that they may have detected something significant in those positive findings.


And while not exactly the same, in this context the (early) Groom et al. study is also worth mentioning. Using a non-PCR method (a neutralisation assay), they found 0/142 patients and 22/157 (14%) healthy blood donors to be positive for XMRV (or, due to cross-reactivity, another (retro)virus).
The Oakes study you mentioned, and the Groom study, are of course significant to my argument about the number of studies that found more MLVs in patients than in controls. I wasn't aware of those results, and that does even up the count considerably; together with Huber that makes 3 that found more in controls...I think I'm down to 7 the other way round, and there may perhaps be more that initially found more in controls...so of course I have to accept that's looking a bit more realistic now.

Purely statistically, the chances of a coin flipping like that would be 10/1024. Note that, technically, the "right" question to ask is what the chances are that the coin flips heads at least 9 times, which would be 11/1024. Regardless, it amounts to around 1%.
Agreed, and as I'm sure you realise, once we get down to 7/10 with more positives than controls, the odds go up quite a bit, I make it 176/1024, which is 17% - not very significant. Unfortunately we can't ever determine this meta-analysis accurately now, because results like the Ashford study were never published, for unknown reasons. I've highlighted before the point, which I've seen scientists in other areas coming to terms with recently, that there's a need for a major reform of the publication system such that all studies undertaken must be registered before starting, and their results must be published whether they're positive or not. Without that, we're stuck with a situation where we can't accurately determine what the results are actually saying overall - most unsatisfactory.
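For anyone who wants to check the coin-flip arithmetic in the exchange above, here is a minimal Python sketch of the binomial odds being discussed (the figures 10/1024, 11/1024 and 176/1024 all come from the posts themselves):

```python
from math import comb

def tail_probability(n, k):
    """Chance of getting at least k heads in n fair coin flips."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Exactly 9 heads out of 10: C(10, 9) / 2^10 = 10/1024, about 1%
print(comb(10, 9) / 2 ** 10)    # 0.009765625

# At least 9 heads out of 10: (10 + 1) / 1024 = 11/1024
print(tail_probability(10, 9))  # 0.0107421875

# At least 7 heads out of 10: 176/1024, roughly 17%
print(tail_probability(10, 7))  # 0.171875
```

This confirms both figures: under a fair-coin null hypothesis, a 9-of-10 split is unlikely (about 1%), but once the count drops to 7-of-10, the probability of it arising by chance rises to around 17%, which is why the argument weakens so much with each counter-example.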
 

Mark

Senior Member
However, there are two reasons why this doesn't really apply:

1. The "samples getting handled more often" argument. You seem to dismiss it earlier without really arguing why (which is curious as it seems to give a satisfactory explanation to what is apparently your most serious problem with the results thus far).

My objection to this hypothesis is the way I have seen it casually mentioned without any supporting evidence, or with only a cursory examination of that evidence. Too often I've seen it expressed with a shrug, and it's just a hypothesis, without evidence that the patient samples really were handled more often than the controls. You've done rather better than that here though:


To give you the example of the Lo et al. study (which I chose because the timeline of sample collecting is pretty well documented):

- Lo collected these samples in the mid-nineties for another study. They were handled for that study (tubes were opened, needles went into the samples, etc.). After the study, the samples went into the freezer for 15 years. Perhaps they were moved in and out of the freezer multiple times, perhaps they were stored next to mouse DNA, perhaps they were used for some other experiments - who knows what exactly happened?


- After reading the Lombardi et al. study, Lo remembers the ME/CFS samples being in his freezer. He calls up Harvey Alter and says: "Harvey, can you send me some fresh control samples to mix up with the other samples?"

I think it is entirely conceivable that Lo has(/had) some sort of trace contamination in his lab going on (either through reagents or other means) and that, therefore, the samples that were in his lab longer will be contaminated at a higher rate than samples that were present in his contaminated lab for a shorter period of time.


Agreed that it's conceivable that the patient samples may have been contaminated before freezing, when the samples were used for that study. I seem to recall a mention when the study was published that the samples had not been unfrozen, moved, or used elsewhere, but that could be an inaccurate recollection - if relevant records covering these questions do not exist, then of course they should, so it should be possible to deal in fact here rather than speculation. My point applies also to your suggestions here: they are just speculation and hypothesis, together with a 'who knows?' shrug of the shoulders. Not that I object at all to speculation, obviously; it's just that whenever anybody here speculates in similar fashion, frequently we will be pulled up rather abruptly for it and told 'there's no evidence for that'. The same is just as applicable to the argument about the handling of samples: it's a hypothesis, a speculation, and it should be required to present evidence in support.


2. Confirmation bias.

The results of the BWG were indeed disastrous. Not only in phase III (the Simmons et al. study), but (pilot) phase IIb was also pretty bad (although performed on a rather small sample of 4 patients and one control).

Mikovits should have no problem whatsoever discriminating between patients and controls. Not only did she reliably do this in the original study, she also did this for the (unpublished but reported at conferences) UK study. Moreover, for the Lombardi et al. 2010 study (the cytokine one) they tested 118 patient samples for XMRV (and it is important to note that they were screened as ME/CFS patients beforehand, not as XMRV-positive) and they found all 118 of them (100%) to be positive for XMRV. Given the supposed problems of detecting XMRV even if it were present in blood samples, this result is realistically impossible to accomplish, statistically speaking. Although samples were supposedly blinded in the original study, blinding does leave room for confirmation bias (for instance by disregarding some test results), not to mention that adding certain substances (e.g. 5-AZA) to your patient samples but not to control samples before you run a test will not be "saved" by all the blinding in the world.

Also, in the Lo et al. study, the samples were definitely not blinded, leaving much more room for confirmation bias. This is painfully obvious from the 8 resampled patients, where 7 of them were found to be positive without the use of any controls (!). However, when blindly testing for the BWG (all 5 of the "Lo" BWG samples collected were from this group of 8 patients), Lo could not designate a single one of them as positive.

I don't recognise what you're saying here about the 100% result, but strictly speaking at least, what you say above isn't accurate: for the Lombardi et al. study, they didn't find 100% on any of the tests. They subsequently said they'd increased from the initial 67%, but that was after publication, and I don't think any such results were ever published, and I don't remember them ever claiming anything above about 96%. Do you have any references on that claim?

As regards the 5-AZA, I don't think the details on that question have ever been made at all clear, partly because Dr Mikovits was effectively silenced by the time this issue was raised and we have barely heard anything from her publicly since then, but from what little I did see, I think it's a misrepresentation to suggest that the patient and control samples were treated differently with 5-AZA in the original study in a way that could affect the percentages. My recollection is (very roughly) that it was something like a group of samples that had already been found positive that were treated with 5-AZA, to amplify the detail of the results for a particular image in the paper....but I really don't think there was clarification on these points because by this point the debate had degenerated to such a level that a fair discussion of the factual issues was no longer possible.

Overall, I think you make some good points that reduce the strength of my argument about the possible statistical significance of more studies finding MLVs in patients than in controls. A few more counter-examples would effectively demolish that argument. But I still remain convinced that it's important to continue to study this whole wider question of HGRVs, MLVs, and lab-created retroviruses that preferentially infect human cells and have spread widely as contamination for decades without anybody knowing about it. Is it O'Keefe that's just found another one? How many more of these things are there, floating around? What other cells and products have they infected? Have any of them ever infected a human, and might they be harmful if they did?

These do not seem to me like questions that should be swept under the carpet, and I'd feel a lot more reassured about the way this whole issue is being addressed if anybody had taken a wider view at some point during this saga, and conducted any type of study looking for general evidence of retroviral activity (reverse transcriptase levels, for example) in ME/CFS - if the focus of this hunt has been mainly on a search for a specific virus up until now, maybe it's time now to widen the search a bit...but sadly that seems unlikely to happen, because the interest throughout has always been in XMRV rather than in ME/CFS.
 

Mark

Senior Member
The entire Lombardi paper has been retracted by the editors of Science, Lo/Alter have retracted their findings, and as I remember they were unable to find XMRV in the Blood Safety Research study. Singh was higher - now she's stated it's contamination. Why are these refuted studies being given weight? I guess you're suggesting that XMRV/MLVs actually were there and when they rechecked (using better tests, by the way) it disappeared... i.e. they were right first and then wrong later?
Of course these retractions don't mean that the papers are now forgotten about. We could open up all the issues around the retractions - like why it's only the ME/CFS papers that have been retracted and still not the prostate cancer papers, which are equally affected by the issues - but neither of us have time to go into all that now. You're basically right that I'm saying that the decision to retract findings because many have decided they must have been caused by contamination has no bearing on my statistical argument about the greater likelihood of patient samples to be contaminated rather than control samples. Withdrawing those studies doesn't really change that argument, unless and until the case has been proven that there's an explanation - rather than a hypothesis - of why this difference keeps occurring.

I think the more important point is that O'Keefe has identified a previously unknown behaviour of PCR reactions whereby precisely what you just stated does indeed happen. After a PCR test has found positives, contamination from that test can affect later PCR tests and suppress the detection of any future positives for MLVs. So that, and other possible issues with PCR which are not yet known, are certainly important to investigate further.

For me the most pertinent fact is that, matched up against those few studies that did find it, there are dozens of other studies, some from top labs, which have used better tests to find the virus and have been unable to do so. I just don't see any reason to hope that XMRV/MLVs are out there and infecting humans, in particular CFS patients. Too many good researchers have looked too many times for them for there to be any chance, in my opinion, that they have been missed.
I have to take issue with your claim of "better tests to find the virus". That description needs thinking about carefully. When we're talking about a test for something new, it's not clear to me that the tests that don't find it at all can automatically claim to be "better tests" than the ones that do; that rests on assumptions that may not hold up, and if that assumption is allowed then it denies any possibility of ever discovering anything that doesn't obey those rules.

Put that odds argument to the test regarding the studies on XMRV --- 40-plus studies that have been unable to find it vs 1 study that was able to find it (Lombardi) and then two studies that were able to find MLVs... What is the statistical probability that those studies are wrong?
You omit the German study, the prostate cancer studies, and the unpublished studies, but in any case that's a completely different argument and it doesn't affect my argument about how the studies that do find it seem to find it more in patients than in controls. I admit that of course it's less likely to be a valid association than it would be if everybody else was finding it as well, but until the anomalous results are explained it remains possible that they are finding something real that isn't always found using standard techniques.

Hanson and O'Keefe's papers illustrate this possibility: Hanson also found it one time, and some tantalising trace evidence later on, but basically most of the kinds of test they tried found nothing, and they couldn't explain why they found something the first time. That's the overall picture from the data we're looking at, and it remains unexplained. And O'Keefe's paper suggests one possible explanation which might be an example of future discoveries regarding PCR: inhibition of PCR reactions by products of the initial positive results.

The widespread contamination of laboratory products, with accidentally-created retroviruses that preferentially infect and replicate in human cells, is a new finding that has been highlighted as a result of the investigation of Lombardi et al. Problems with the inhibition of PCR tests by previous positive results are another new discovery. The means by which contamination has been occurring in the positive studies remains unexplained, and similar contamination will presumably continue occurring unless a full explanation is found and methods are changed. So there is much that remains unexplained, there's much more of potentially enormous value to be discovered by following up on this science, there's no real basis for estimating how widespread the problem of lab-created retroviruses may be, and the possibility remains that gammaretroviruses, MLV-rs and lab-created retroviruses may have infected humans and may be harmful. All I'm really trying to say is that further investigation of all these questions should continue, and the book should not be closed after 2 years of study with so many questions still unanswered.
 

natasa778

Senior Member
Agreed, and as I'm sure you realise, once we get down to 7/10 with more positives than controls, the odds go up quite a bit, I make it 176/1024, which is 17% - not very significant. Unfortunately we can't ever determine this meta-analysis accurately now, because results like the Ashford study were never published, for unknown reasons. I've highlighted before the point, which I've seen scientists in other areas coming to terms with recently, that there's a need for a major reform of the publication system such that all studies undertaken must be registered before starting, and their results must be published whether they're positive or not. Without that, we're stuck with a situation where we can't accurately determine what the results are actually saying overall - most unsatisfactory.

Good point, Mark. There is at least one more unpublished study that found positives in patients (the NIH autism one). There could be a number of other (potentially) positive ones we don't know about, where the authors 'decided' not to bother publishing, as was clearly the case for the NIH study, and that O'Keefe mentions being out there...
 

RRM

Mark said:
I don't recognise what you're saying here about the 100% result, but strictly speaking at least, what you say above isn't accurate: For the Lombardi et al study, they didn't find 100% on any of the tests. They subsequently said they'd increased from the initial 67%, but that was after publication and I don't think any such results were ever published, and I don't remember them ever claiming anything above about 96%. Do you have any references on that claim?
Yes, I have references.

First, it's important to note that I was not talking about the 2009 Lombardi study, but about the 2011 Lombardi "Cytokine study" (I incorrectly said 2010 earlier)
http://iv.iiarjournals.org/content/25/3/307.full.pdf+html

In this study, samples from 118 XMRV-positive patients were obtained, according to the authors. What's important to note, however, is that the authors didn't obtain these from a larger group of ME/CFS patients, some or most of whom were positive for XMRV. No, they actually selected the 118 ME/CFS patients for this study before their XMRV findings, checked these 118 patients later, and found all of them to be positive for XMRV (!!). You can verify this from these May 2009 slides:

http://www.wpinstitute.org/news/docs/Invest_in_ME_20090529_Mikovits.pdf

See slide 3 and slides 15-23. On a side note, I think it's pretty telling that in these slides Daniel Peterson was noted as a collaborator but was omitted from the final paper.

To me, this study, in combination with the slides, is extremely damaging to Mikovits's and Lombardi's image as objective, self-critical scientists. There is just no way that, given all the problems with detecting XMRV in the scenario that it's present, you can find your whole pre-selected cohort of 118 ME/CFS patients to be positive for XMRV.

As regards the 5-AZA, I don't think the details on that question have ever been made at all clear, partly because Dr Mikovits was effectively silenced
Don't believe everything you hear. First of all, the 5-AZA problems have really nothing to do with the judicial proceedings against Mikovits. It's her own choice not to have properly explained this. Second, Frank Ruscetti could provide the community with a proper explanation.

I think it's a misrepresentation to suggest that the patient and control samples were treated differently with 5-AZA in the original study in a way that could affect the percentages


First, I didn't (want to) suggest that the 5-AZA influenced the actual percentages. I thought it was pretty clear that the 67% was obtained through their PCR tests, and that the other tests were (more or less) follow-up tests to confirm those results. Nevertheless, these follow-up confirmation tests should be performed with the same scientific rigor as your initial tests.

My recollection is (very roughly) that it was something like a group of samples that had already been found positive that were treated with 5-AZA, to amplify the detail of the results for a particular image in the paper
Yes, but they didn't treat the controls with the same substance in this particular experiment.

It's like saying I have found a way to discriminate ME/CFS blood samples from healthy controls: when I add a bit of blue paint to the ME/CFS samples, they turn purple, while the (untreated) healthy control samples stay red.

No matter what the explanation, in a test like this, you handle every sample the same. You don't add a substance to just one group of samples and then run a (blinded) test. Your test becomes useless because of it.

My point applies also to your suggestions here: they are just speculation and hypothesis, together with a 'who knows?' shrug of the shoulders.
Bob said:
If you can explain exactly what the sequences are, and what the source is, then I'd be interested in your explanation.

These two remarks are perfectly understandable, but I think it is unrealistic to expect all questions like these to be answered.

In general, science is much more efficient at answering general questions than at solving singular problems. Singular problems are like murder investigations: you have to depend on the available evidence, and in many cases you cannot devise (repeatable) experiments to really test the hypothesis. That's not to say it's always a problem: many murder cases are very clear cut, and many questions in science are likewise answered by convincing evidence.

If I announce tomorrow that the Laws of Newton do not apply under a certain set of conditions and get published, what would you expect to happen? My results will get retested by others. When they are unable to reproduce my results, these other scientists will just move on. Wouldn't they be interested in what actually went wrong in my lab? Sure, but first, it isn't necessary to know what went wrong to conclude that I am wrong, and second, in many cases it is just not possible to find out what really happened. What if I don't co-operate and don't let anyone check my notebooks/experimental environment? What if I just made up my results without leaving any trail of evidence?

The same applies here. In all probability nobody (outside of the original investigators themselves) will ever be able to find out how these fragments entered the Lombardi et al. and Lo et al. samples. I seriously wouldn't know how to accomplish this. If you can propose an experiment (or really anything) that could lead to finding out what really happened, I would be very interested in hearing it.
 

jace

Off the fence
Messages
856
Location
England
As others have said, there are many more researchers than Lo and Mikovits who have found gamma retroviruses of a poly- or xenotropic nature in human tissues. Positive results have been obtained in Germany, Japan, and America. We have more information from O'Keefe and from Hanson recently. In no way can it be said that the fat lady has sung her song yet, nor should it. Those of us following this fascinating scientific story, with all its twists and turns, are not ready to look away now, to move on. Sorry about that.

RRM said:
In this study, samples from 118 XMRV-positive patients were obtained, according to the authors. What's important to note, however, is that the authors didn't obtain these from a larger group of ME/CFS patients, some or most of whom were positive for XMRV. No, they actually selected the 118 ME/CFS patients for this study before their XMRV findings, checked these 118 patients later, and found all of them to be positive for XMRV (!!). You can verify this from these May 2009 slides:
I've looked at slide 3, and slides 15-23 on the link you gave, RRM, and I find no mention there of cohort selection. Indeed, there is no result in that .pdf when I search for XMRV. So I'm puzzled at your point - is it too early for me, or too late for you?

http://www.ncbi.nlm.nih.gov/pubmed/21576403
One hundred and eighteen specimens from patients who tested positive for XMRV and with a confirmed diagnosis of CFS at the time of collection were obtained from the Whittemore-Peterson Institutes’ sample repository. All specimens used in this study were heparinized plasma and represented a female to male ratio of approximately 2 to 1, consistent with previously reported CFS distributions.

The above is the paragraph on cohort selection, from Lombardi et al 2011 (the cytokine study). They are saying that, from the positive samples stored in their repository, they chose 118 samples from patients with a viral-onset diagnosis of ME/CFS.


My recollection is (very roughly) that it was something like a group of samples that had already been found positive that were treated with 5-AZA, to amplify the detail of the results for a particular image in the paper

RRM:
Yes, but they didn't treat the controls with the same substance in this particular experiment.

Because they wanted to amplify the image of the positive result that they had already found without the 5-AZA, perhaps?
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Yes, I have references.

First, it's important to note that I was not talking about the 2009 Lombardi study, but about the 2011 Lombardi "Cytokine study" (I incorrectly said 2010 earlier)
http://iv.iiarjournals.org/content/25/3/307.full.pdf+html

In this study, samples from 118 XMRV-positive patients were obtained, according to the authors. What's important to note, however, is that the authors didn't obtain these from a larger group of ME/CFS patients, some or most of whom were positive for XMRV. No, they actually selected the 118 ME/CFS patients for this study before their XMRV findings, checked these 118 patients later, and found all of them to be positive for XMRV (!!). You can verify this from these May 2009 slides:

http://www.wpinstitute.org/news/docs/Invest_in_ME_20090529_Mikovits.pdf

See slide 3 and slides 15-23. On a side note, I think it's pretty telling that in these slides Daniel Peterson was noted as a collaborator but was omitted from the final paper.

To me, this study, in combination with the slides, is extremely damaging to Mikovits's and Lombardi's image as objective, self-critical scientists. There is just no way that, given all the problems with detecting XMRV even when it is present, you can find your whole pre-selected cohort of 118 ME/CFS patients to be positive for XMRV.
To confirm what Jace said, it's quite clear from the paper that they went back to their original repository (actually Peterson's Incline repository, I think) and examined the samples of the 118 patients from that cohort who they had subsequently tested positive for XMRV. In this study they examined their cytokine and chemokine profiles to look for a signature characteristic of those patients, and from a multi-variate cluster analysis they produced a model that identified 128 of 138 controls (93% specificity) and 113 out of 118 patients (96% sensitivity). I really don't understand how you've managed to read this situation differently.
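For what it's worth, the sensitivity and specificity figures quoted from the paper are easy to check from the raw counts; a minimal Python sketch using only the 128/138 and 113/118 numbers quoted above:

```python
# Counts reported in the Lombardi 2011 cytokine paper (quoted above):
# the model identified 128 of 138 controls and 113 of 118 patients.
controls_correct, controls_total = 128, 138
patients_correct, patients_total = 113, 118

specificity = controls_correct / controls_total  # true-negative rate
sensitivity = patients_correct / patients_total  # true-positive rate

print(f"specificity = {specificity:.0%}")  # 93%
print(f"sensitivity = {sensitivity:.0%}")  # 96%
```

Both results round to the percentages stated in the paper, so at least the arithmetic is internally consistent.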

Is this "118/118" analysis based on your own reading of the paper and slides, or based on the interpretation of some other source? If it's the latter, then I suggest you reconsider the credibility and objectivity of that source, because that analysis is nonsensical, it's not justified by the data, and it's clearly coming from an extremely biased perspective. The claim is constructed entirely in the mind of the person who dreamt it up, and any allegation of damage to someone's image as objective and self-critical properly belongs to the person who said that this paper claimed to have found 118/118 samples positive for XMRV. Jace's quote from the paper makes this quite clear:

One hundred and eighteen specimens from patients who tested positive for XMRV and with a confirmed diagnosis of CFS at the time of collection were obtained from the Whittemore-Peterson Institutes’ sample repository. All specimens used in this study were heparinized plasma and represented a female to male ratio of approximately 2 to 1, consistent with previously reported CFS distributions.


The paper and the slides make no mention of further XMRV testing for this paper, and I agree with Jace, I can't find any mention of cohort selection in the slides you cite either, except that slide 3 says that they used RNA, DNA and plasma taken from about 100 (118 presumably) of the 300 Incline Village CFS cohort samples in Sep 06 and July 07.

Your interpretation, RRM, appears to assume that when slide 3 says the RNA, DNA and plasma were taken in Sep 06 and July 07, that earlier time point defined which 118 patients they would use in this 2011 study. On the contrary, it seems fairly clear to me that they are not saying that at all: they are saying that they looked at which patients from this cohort they had tested positive for XMRV, found 118 of them, then went back to the samples drawn in Sep 06 and July 07, pulled out the relevant subset of the 300 representing the 118 who they tested positive for XMRV, and analysed the cytokine and chemokine profiles of those patients.

I now recall some of the arguments made against the WPI and Dr Mikovits at the time about this and several other issues. I feel a slight sense of nausea as I recall some of the outrageous, completely misconceived and contemptuous criticisms that were bandied around on sceptics' forums back in 2009. This particular criticism about an incorrectly-alleged "118/118" finding being "extremely damaging to Mikovits's and Lombardi's image as objective, self-critical scientists" reminds me of some of those criticisms. It makes such a big leap in its obviously incorrect assumption about the interpretation of the words presented that it looks to me like evidence that whoever originally came up with this interpretation really wasn't listening to what was actually said at all. To mis-read somebody's words and make assumptions about what those words meant is one thing, but to do so in a way that artificially creates a ludicrous state of affairs, and then scoff at the scientists for saying something ridiculous when it's actually quite clear from the evidence that they never said that at all, suggests that the scientists aren't being assessed in a fair and unbiased way.

I'm reminded of several other phoney criticisms that were bandied around in 2009. One in particular always sticks in my mind: it was alleged that there was a clear contradiction between the WPI 'saying they collected samples from US patients' and then, later, asserting that they had found positives in the study from patients elsewhere in the world. The actual sentence in the original paper said (paraphrasing slightly) that the patient samples 'had been collected from practices in areas in the US where outbreaks had occurred', and the WPI subsequently clarified that they had been surprised, when they unlocked the coding, to discover that some of those samples had been from patients who had travelled from Europe to those US clinics for treatment. Thus there was no contradiction, but it was presented as such, mocked, and when the clarification was presented it was further ridiculed as if this was some kind of 'changing of the story' and 'weasel words'. But actually, the WPI's words and meaning were quite clear and accurate, and the only error was in the biased and sloppy interpretation of those words by (pseudo)sceptics. And yet all the mud from this relentless campaign of FUD seemed to stick. If there's one overriding reason why people like me continue to stick with defending Dr Mikovits and continue to keep an open mind about the HGRV science, it's because I observed in detail the outrageous inaccuracy and knee-jerk unfounded vitriol of that campaign against the WPI right from the word go - it's left me with an abiding suspicion that this science was never given a fair hearing from the outset.
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Don't believe everything you hear. First of all, the 5-AZA problems have really nothing to do with the judicial proceedings against Mikovits. It's her own choice not to have properly explained this. Second, Frank Ruscetti could provide the community with a proper explanation.
Since she was subject to legal proceedings at the time, and after all the criticism of her over the way she communicated with patients, I don't read much into Mikovits' choice to not respond to this particular allegation in detail. All I know is that we have only heard one side of this story, and I'm therefore not confident that the criticism is quite what it seems. In every other such criticism I've seen over the last few years, when the full clarification was provided (which frequently took a while) it was more than satisfactory.



First, I didn't (want to) suggest that the 5-AZA influenced the actual percentages. I thought it was pretty clear that the 67% was obtained through their PCR tests, and that the other tests were (more or less) follow-up tests to confirm those results. Nevertheless, these follow-up confirmation tests should be performed with the same scientific rigor as your initial tests.

But the context of your statement about 5-AZA was a discussion about the different results between patients and controls. The suggestion that the patient and control samples had been treated differently was clearly implied. It would have been only fair to clarify the point that I highlighted: that this issue had no bearing on the original 67% results, and occurred after that. If you don't make such clarifications when making such arguments, it's obviously subject to misinterpretation, and comes across to me as spin, because it puts doubt in the mind of the reader and fails to give the proper context. I then see plenty of misreading of such comments in a game of Chinese whispers, and I've already seen that happening again over the 5-AZA issue.

Yes, but they didn't treat the controls with the same substance in this particular experiment.
However, that seems a fair criticism if the two sets of images (patients and controls) were presented side by side without explanation that one of them had been enhanced. That would seem to be a 'creative' way of over-emphasising results obtained in graphic form. But it's not clear to me that this is really what occurred, and I'm sceptical about that until it's proven to me that it did, and until I've heard a defence of it from the researchers themselves.



It's like saying I have found a way to discriminate ME/CFS blood samples from healthy controls: when I add a bit of blue paint to the ME/CFS samples, they turn purple, while the (untreated) healthy control samples stay red.

No matter what the explanation, in a test like this, you handle every sample the same. You don't add a substance to just one group of samples and then run a (blinded) test. Your test becomes useless because of it.


As I say, I don't think we've heard the full story on the addition of 5-AZA, but it seems to me a bit more like saying: I've found a test that distinguishes this group from that group under blinded conditions, I've now picked out some of the strongest signals, and run the test again, and it then turns out that a treatment applied to the positive signals after the first test unexpectedly amplifies the signal in the images presented. I agree that what happened does sound misleading, but it's based only on the case for the prosecution, which is based on an interpretation of what the defence appears to have done, and so I reserve judgement on that matter until I've heard the case for the defence.



I think it is unrealistic to expect all questions like these to be answered.

In general, science is much more efficient at answering general questions than at solving singular problems. Singular problems are like murder investigations: you have to depend on the available evidence, and in many cases you cannot devise (repeatable) experiments to really test the hypothesis. That's not to say it's always a problem: many murder cases are very clear cut, and many questions in science are likewise answered by convincing evidence.

If I announce tomorrow that the Laws of Newton do not apply under a certain set of conditions and get published, what would you expect to happen? My results will get retested by others. When they are unable to reproduce my results, these other scientists will just move on. Wouldn't they be interested in what actually went wrong in my lab? Sure, but first, it isn't necessary to know what went wrong to conclude that I am wrong, and second, in many cases it is just not possible to find out what really happened. What if I don't co-operate and don't let anyone check my notebooks/experimental environment? What if I just made up my results without leaving any trail of evidence?

The same applies here. In all probability nobody (outside of the original investigators themselves) will ever be able to find out how these fragments entered the Lombardi et al. and Lo et al. samples. I seriously wouldn't know how to accomplish this. If you can propose an experiment (or really anything) that could lead to finding out what really happened, I would be very interested in hearing it.

I think that's a fair argument in general, but the real issue here is deeper than that. This is not really just about one specific murder investigation, it's about a string of murders going back several decades - this sort of thing keeps happening all the time, and nobody seems to quite know why. Robin Weiss had a similar case a couple of decades ago, for example, and he himself never got to the bottom of why it happened. But already we know much more about how such murders can occur because we've found out more about potential sources of contamination. That sort of investigation has the potential to improve methodology in general and prevent such confusion happening again in the future. There are lots of actual murder cases that remain open and may never be solved, but detectives in such cases keep the file open, don't consider the case solved, and periodically people do return decades later and continue to investigate the case. I'm suggesting that unless and until the questions in this case are answered, the case remains unproven.

I think the potential means to explain what happened lie in a deeper understanding of both contamination issues and of false negatives in PCR. Those questions are still being explored, and much has actually been learned already which is relevant (contaminated cell-lines, inhibition of PCR by previously positive results), but some of the 'false positives' are still unexplained and I think it's got to be worth getting to the bottom of those questions. Contamination or no, we are definitely talking about retroviruses that can infect human cells floating around in unknown ways, so it's surely got to be worth finding out more about how that is happening.
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
One further comment: I was surprised to learn in the process of following this issue that it isn't standard practice to treat patient and control samples in the same way. In particular, it isn't standard practice to draw samples from patients and controls from clinics at the same time, in the same place, using the same methods.

I would have naively assumed that would be standard scientific method, but it turns out that nobody was doing this, and even the negative studies were drawing frozen samples from banks and comparing them with fresh samples from controls. I guess this is because it is difficult and more expensive to arrange fresh cohorts for every study, but I still find this standard methodological weakness puzzling.

It's been said that retrovirologists didn't know this was important until now. Again that puzzles me because I had assumed that in science when you want to do an accurate experiment and explore the frontiers of knowledge, you try extremely hard not to make assumptions. You design your methodology to eliminate such potential confounding factors, both known and unknown.

And I really don't get why it's so prohibitively expensive to call in a cohort of patients to give blood and take samples from matched controls at the same time, when the experiments after that are using incredibly expensive cutting-edge equipment and are performed by very highly-paid professionals. With all the care that goes into processing the samples, why is it so much more expensive to get a set of patients in to give the blood used in the test under controlled conditions?

I'm sure there are reasons for this, but I'd like to know what they are because my guess is that they involve unnecessary barriers which could fairly easily be overcome. There's no shortage of patients who'd be quite happy to co-operate in making it easier to provide samples in a more controlled way, and they could even identify suitable matched controls (friends of similar age from the same area) and go in to their local surgery together to have blood drawn locally at the same time, using a kit provided...and it doesn't seem to me that this needs to be such an expensive operation if it were typical for future studies to be conducted in this fashion. Even blood banks could be stored with matched controls drawn at the same time, and coded to enable blinding.

There may be good reasons why parallel blood sampling has not been considered important enough in the past for scientific accuracy, or why this is prohibitively expensive, but I'd be interested to know what those reasons are, if anyone out there knows, and it does seem to me that an efficient solution to make this methodology standard is clearly indicated by the XMRV experience.
 

RRM

Messages
94
I find no mention there of cohort selection
Cohort selection is explained in slide 3.

Indeed, there is no result in that .pdf when I search for XMRV

This is exactly the point. These slides from May 2009 show the very same results with 118 ME/CFS patients as were later reported (in the published paper) with XMRV-positive ME/CFS patients. This (and the timeline) indicates that the 118 patients were first selected for the study and only later tested for XMRV. Rather coincidentally, all 118 were positive for XMRV, enabling the authors to rephrase their results from "ME/CFS patients have the following cytokine signature" to "XMRV-positive ME/CFS patients have the following cytokine signature".

Check from 3:18 of this video to verify that I am correct:



You can see Lombardi acknowledging that these 118 patients were actually selected for their ME/CFS status and not for their XMRV+ status, and that these 118 persons just happened to test positive for XMRV.

With false negativity apparently being a significant problem, finding 118 of 118 pre-selected patients positive for XMRV is realistically impossible.
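To put a rough number on that: if, hypothetically, each truly positive sample were detected at the 67% rate reported in the 2009 paper, and the tests were independent, the chance of all 118 coming back positive is vanishingly small. The independence assumption and the use of the 0.67 figure here are mine, purely for illustration:

```python
# Hypothetical per-sample detection rate, borrowed from the 67%
# figure reported in Lombardi et al. 2009; independent tests are
# assumed for illustration only.
p = 0.67
n = 118

# Binomial probability that every one of the n trials is positive.
prob_all_positive = p ** n
print(f"P(all {n} positive) = {prob_all_positive:.1e}")  # ~3e-21
```

Even if the true detection rate were far higher than 67%, the probability of a clean 118/118 sweep would remain tiny.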
They are saying that, from the positive samples stored in their repository, they chose 118 samples from patients with a viral-onset diagnosis of ME/CFS.
Nope. They might be implying that, but it isn't what really happened.

They obtained 118 ME/CFS patients and then performed the whole study. After the results were in, they checked these 118 patients for XMRV status and found all of them to be positive. See the Lombardi interview if you still have doubts about this.

This is really only explainable by (a terrible amount of) confirmation bias.

Because they wanted to amplify the image of the positive result that they had already found without the 5-AZA, perhaps?

That is no reason at all to not treat the controls with the same substance.

The only reason why you'd not want to do that is if you know that, then, the controls would show (about) the same amount of amplification. Which, again, is exactly the reason why you really need to handle patients and controls the same way.

Mark said:
I really don't understand how you've managed to read this situation differently.

Although I feel this should be clear from the (May 2009) slides, see the video above. Lombardi actually confirms that the research was performed before they even found out about the existence of XMRV.

Do you agree that it is quite alarming that they found 118 of these 118 patients to be positive for XMRV?

But the context of your statement about 5-AZA was a discussion about the different results between patients and controls. The suggestion that the patient and control samples had been treated differently was clearly implied.

Like I explained with the murder investigation examples, you cannot expect to solve every question. The 5-AZA debacle shows that these investigators clearly let some sort of confirmation bias enter into (at least some of) their experiments.

If Mikovits were to release (as she accidentally did with the 5-AZA stuff) some of the underlying data from their PCR experiments, it would perhaps be possible to assess what went wrong. As it is, Mikovits has never released a single PCR gel from her original experiments (the gels in the original paper were Silverman's).

If you want my guess (but I agree it's just speculation):

- It is known that Max Pfost did the first (PCR) testing on 20 samples, and found 2 XMRV positives. After several rounds of tweaking and retesting, all 20 were found to be positive.

Now, my guess is that during all this repeated tweaking, they did not really get better tests (as later blinded retests have shown), but that they just contaminated more and more of their samples. Next (and this is really unknown), my guess is that they then extended their tests to other patient samples in their repository but not to controls (which were probably purchased later, for the specific purpose of doing the study once they decided to do it). They then contaminated more samples.

After they "knew" they were really onto something (i.e. that XMRV infection is correlated with having ME/CFS), they decided to do a "real" study on this. They then got into contact with the Ruscettis, purchased control samples, mixed them up with their patient samples (many of which were contaminated by then), ran their "official study PCR test" on all samples (contaminating some more patient samples and some of their fresh control samples), and so arrived at their 67% vs 4%.

But again I agree this is mere speculation and we'll probably never know.
 

jace

Off the fence
Messages
856
Location
England
The Incline Village cohort originally was 300. They chose the ~100 that were XMRV positive and isolated RNA and DNA. That number is likely to be the 118 referred to in the Cytokine paper from 2011, but it is not explicitly stated. They took those ~100 samples forward to cytokine testing. But as Lombardi says in the video, that work was done before he knew that XMRV existed. The results, as you say, are on the slides on pages 15-23 of the .pdf. They then ran a random forest algorithm on those cytokine results, with fairly conclusive results. Subsequent to this work, the XMRV testing was done on a group of samples, which is the basis of Lombardi et al 2009. I don't see anywhere where it says that they first chose the samples before any testing and then tested for XMRV with 100% positive results. What they are saying is that of those with the cytokine profile common to people with well-defined ME/CFS, 96% are XMRV positive.

from the Cytokine paper
"The final model accurately identified 128 out of the 138 controls (93% specificity) and accurately identified 113 out of 118 patients (96% sensitivity) (Table IV)."

Below I have embedded slide 3.

[Screenshot of slide 3]

The video you embedded has to be watched on YouTube. Watching it, I realise why you've misunderstood. You are assuming that Lombardi is referring to the XMRV testing, when actually he is referring to the ability of the random forest algorithm to pick out the cytokine results of samples that were also found to be positive for retroviruses in the 2009 paper.

ETA at least, that's how I interpret it.

At this time, I am not really interested in speculation about what may or may not have happened. It's too late for that. What I want is evidence that stands up to scrutiny. So much of it does not.
 

RRM

Messages
94
No, I have not misunderstood, but perhaps I wasn't clear enough. You have to look from 3:18 onwards. At 3:18, he starts on the cytokine study. For your reference, I have transcribed the relevant part. Lombardi literally says the following about the cytokine study:
Vincent Lombardi said:
"That work was able to accurately distinguish Chronic Fatigue [sic] from healthy controls about 95% of the time, in that group we were looking at. And it happened to be that when we were doing the XMRV research they all corresponded to XMRV, so that work was published as an XMRV cohort of Chronic Fatigue Syndrome. Now in actuality that work was done before I ever knew XMRV existed, so it would really stand on its own, at least based on the cohort of patients that we have."

Finally, look at the timeline. The presentation is from May 2009 (the same month as the first submission of the original Science paper). Max Pfost finding the first positives by tweaking the PCR assay happened in November/December 2008. There is no way that this whole cytokine study was also crammed in somewhere between those months, as you seem to think must have happened, especially not since Mikovits naturally diverted all resources to the XMRV finding itself after Max Pfost's discovery.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
From the Lombardi "Cytokine" paper:
"Patients and controls. One hundred and eighteen specimens from patients who tested positive for XMRV and with a confirmed diagnosis of CFS at the time of collection were obtained from the Whittemore-Peterson Institutes’ sample repository.
...
"All patients described in this study represent well-defined cohort of CFS patients who are known to have a ‘viral or flu-like’ onset and have tested positive for XMRV by PCR and/or serology."

http://iv.iiarjournals.org/content/25/3/307.full.pdf


To watch the video that RRM posted on youtube, click here.
 

Sam Carter

Guest
Messages
435
I've tried to highlight the parts of the study that I found interesting, and also to enumerate the actual results, which I don't think were very clearly presented in the paper.

The first conclusion one can draw is that, in this instance, PCR isn't the correct tool: the results from nested PCR don't agree with those from an equally sensitive single-round alternative, and on one occasion the choice of reagent affected the detection rate by a factor of 2.

It also seems that neither mouse mtDNA nor IAP assays can conclusively rule out contamination, and this, together with the fact that no (confirmed) positives were found once better precautions were put in place, does argue in favour of the preliminary results being wrong, although I agree that they are hard to explain on the basis of chance alone.

Lyndonville (David Bell's) cohort:

Combined total of positives from single-round and nested PCR on whole blood or PBMC DNA:

- 5/10 severe CFS
- 2/10 recovered CFS
- 3/20 controls

Combined total of positives from single-round and nested PCR on gDNA from LNCaP cells incubated with plasma:

- 3/10 severe CFS
- 5/10 recovered CFS
- 1/20 controls

- all of these samples were negative in mouse mtDNA assays.
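As a rough sanity check on whether splits like the whole-blood/PBMC counts above (5/10 severe CFS vs 3/20 controls) could plausibly arise by chance, here is a one-sided Fisher-style exact test sketched with only the standard library; pooling the counts into a single 2x2 table is my own simplification, for illustration:

```python
from math import comb

def hypergeom_tail(k, n_pat, pos_total, total):
    """One-sided P(X >= k): probability of seeing k or more positives
    among n_pat patient samples if the pos_total positives were spread
    at random over all total samples (hypergeometric distribution)."""
    return sum(
        comb(pos_total, x) * comb(total - pos_total, n_pat - x)
        for x in range(k, min(n_pat, pos_total) + 1)
    ) / comb(total, n_pat)

# Counts quoted above: 5/10 severe CFS vs 3/20 controls positive.
p = hypergeom_tail(5, 10, 5 + 3, 10 + 20)
print(f"one-sided p ~ {p:.3f}")  # ~0.056
```

A tail probability of around 0.06 is suggestive rather than decisive, which seems consistent with the paper's own observation that the preliminary association failed to hold up under continued analysis.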

Ithaca control cohort, tested by single-round PCR of PBMCs

- 0/12 positive

Susan Levine's cohort, tested with single-round PCR on gDNA

- 0/20 CFS patients +ve
- 0/4 controls +ve

Contamination of the LNCaP master cell line?

- their LNCaP cell-line probably became contaminated
- they tested the uninoculated LNCaP master cell line by single-round PCR and found:
--- 8/84 replicates +ve for gag sequences (this should have been 0/84 if there was no contamination - and assuming I've understood this section correctly...)
--- 0/10 replicates +ve for IAP (ie. the IAP assay failed to flag the contamination)
--- their reagents, HotStart-IT FideliTaq master mix (USB) and NEB master mixes, were also negative for IAP

Because of this they looked at an earlier sample of the same LNCaP cell-line and found:
- 1/21 replicates +ve for gag sequences with the USB master mix
- 4/41 replicates +ve for gag sequences with the NEB master mix

ie. this was also contaminated and USB/NEB master mixes gave different results.

Eventually new LNCaP cells were bought:
- 0/21 replicates showed gag sequences with the USB master mix
- 0/41 replicates showed gag sequences with the NEB master mix

ie. the new batch was uncontaminated.

""While we were carrying out the experiments on the Western New York samples, a number of reports began appearing that indicated problems with environmental contamination with mouse DNA as well as the presence of mouse DNA in common laboratory reagents. We therefore obtained a second set of samples from the office of Susan Levine...Single-round gagL PCR was performed on gDNA, and all assays were negative. These assays were performed in a different room in the Cornell laboratory that had improved environmental isolation over the one used for the experiments performed on the initial set of samples from the Lyndonville office.[David Bell's cohort]""

""Absence of mouse DNA cannot, however, rule out the possibility of environmental amplicon contamination.""

""Nested PCR analysis of our initial batch of 30 samples resulted in a significant difference in frequency of gag PCR products between patients and controls; however, continued analysis failed to maintain this association. If sporadic contamination of reagents and/or environmental contamination was the source of the gag sequences, then a possible explanation for the initial association could be due to non-random receipt of patient and control samples...However, we did not observe any clear correlation between day of receipt of samples and whether they were positive by PCR for MLV-like sequences.""

""We chose ... to reduce the possible environmental contamination of PCR assays by developing a single-round assay that can detect spiked plasmid DNA with the same sensitivity as the previously described gagO/gagI nested PCR. When this assay, which requires less manipulation than nested PCR, was used on a new set of DNA samples, we did not detect MLV-like gag sequences.""

""As long as the IAP assay is negative, a positive gag signal with the nested gagO/gagI PCR must not be due to the presence of mouse DNA. However, the IAP assay does not allow determination of contamination due to the presence of gag RNA, nor can it reveal the presence of PCR fragment carryover between experiments.""