• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


New research suggests more problems with the Paprotka et al XMRV recombinant paper

jace

Off the fence
Messages
856
Location
England
http://www.ncbi.nlm.nih.gov/m/pubmed/22031947/
Characterization, mapping, and distribution of the two XMRV parental proviruses.

Cingöz O, et al. were unable to detect integration sites for preXMRV-1 in either Hsd or NU/NU mice. In addition, only 40% of NU/NU mice contained any evidence of preXMRV-1 itself, says a new paper published in January's Journal of Virology.

From the findings in the above paper, it appears that Paprotka et al. failed to screen any wild mice for viruses that could be the source of murine-related retroviruses. Their assumption, that any MRV provirus which does not carry the 24 nt deletion cannot be the ancestral provirus, is like saying that any HIV provirus not containing a particular deletion sequence is not HIV; yet deletion mutants of HIV are commonplace, and no one has shown that the same is not true of other retroviruses.

The failure to find any evidence that preXMRV-1 is integrated into the genome of the mice alleged to have been used in the creation of the 22Rv1 cell line, together with the small number of NU/NU mice found to contain preXMRV-1 sequences, strongly suggests that preXMRV-1 is a PCR contaminant. Dr John Coffin detected preXMRV-1 integrated into the DNA of a strain of mouse supplied by Greg Towers, who works with the Wellcome Trust. Perhaps this is the source?
 

joshualevy

Senior Member
Messages
158
Jace, I think you are confused, or commenting about one study while linking to another.
The study you linked to doesn't talk about contamination at all, and supports Paprotka in several ways. Below are the important quotes from the abstract:
  • The chromosomal loci of both proviruses were determined in the mouse genome, and integration site information was used to analyze the distribution of both proviruses in 48 laboratory mouse strains and 46 wild-derived strains.
  • The strain distributions of PreXMRV-1 and PreXMRV-2 are quite different, the former being found predominantly in Asian mice and the latter in European mice, making it unlikely that the two XMRV ancestors could have recombined independently in the wild to generate an infectious virus.
  • among the wild-derived mouse strains analyzed, not a single mouse carried both parental proviruses.
  • PreXMRV-1 and PreXMRV-2 were found together in three laboratory strains, ..., consistent with previous data that the recombination event that led to the generation of XMRV could have occurred only in the laboratory.
  • The three laboratory strains carried the Xpr1(n) receptor variant nonpermissive to XMRV and xenotropic murine leukemia virus (X-MLV) infection, suggesting that the xenografted human tumor cells were required for the resulting XMRV recombinant to infect and propagate.

So this study supports the idea that XMRV was created in a lab that was doing tumor cell research.

Joshua (not Jay) Levy
 

RRM

Messages
94
- This is not really a new paper. It was published online in October of 2011.

- These results are actually supportive of the Paprotka et al. findings. The authors themselves state that these data suggest that "the xenografted human tumor cells were required for the resulting XMRV recombinant to infect and propagate."

As to the specific points raised:

Jace wrote:
From the findings in the above paper, it appears that Paprotka et al. failed to screen any wild mice for viruses that could be the source of murine-related retroviruses. Their assumption, that any MRV provirus which does not carry the 24 nt deletion cannot be the ancestral provirus, is like saying that any HIV provirus not containing a particular deletion sequence is not HIV; yet deletion mutants of HIV are commonplace, and no one has shown that the same is not true of other retroviruses.
This is a classic strawman. The authors investigated the origin of XMRV, not the origin of MRVs in general. Besides, no 'MRV' has been fully sequenced from humans, so it would be hard to investigate the origin of these other MRVs.

The comparison with HIV fails, by the way. It's not just about sequence diversity: findings of XMRV and the other MRVs in humans are too far removed from each other, from an evolutionary standpoint, to realistically reflect the same source.

Jace wrote:
The failure to find any evidence that preXMRV-1 is integrated into the genome of the mice alleged to have been used in the creation of the 22Rv1 cell line, together with the small number of NU/NU mice found to contain preXMRV-1 sequences, strongly suggests that preXMRV-1 is a PCR contaminant.

This appears to be a very strange and inconsistent argument. If not being able to find integration sites in some of the authors' samples is so damning to their findings, then surely the findings of Lombardi et al. and Lo et al., who have both failed to obtain any integration site from any of their patients after two years, must be even more problematic?

Moreover, the distribution of preXMRV-1 (and preXMRV-2) among NU/NU mice was already partly investigated in the original Paprotka et al. paper; please check figure S6C for the results. The fact that these new results are consistent with the old data is actually strongly supportive of their findings.

On a final note: this is one of the occasions where it would be pretty easy to test (part of) the authors' data. Anyone (Sandra Ruscetti, for example) could collect a couple of NU/NU mice from different sources and check them for preXMRV-1.
 

joshualevy

Senior Member
Messages
158
Jace wrote:
The failure to find any evidence that preXMRV-1 is integrated into the genome of the mice alleged to have been used in the creation of the 22Rv1 cell line, together with the small number of NU/NU mice found to contain preXMRV-1 sequences, strongly suggests that preXMRV-1 is a PCR contaminant.

RRM wrote:
This appears to be a very strange and inconsistent argument. If not being able to find integration sites in some of the authors' samples is so damning to their findings, then surely the findings of Lombardi et al. and Lo et al., who have both failed to obtain any integration site from any of their patients after two years, must be even more problematic?

This is one of the standard end-games of bad science.
You start out with one study that shows something you like.
Then there are studies showing it isn't true, so the supporters point out "flaws" in those new studies. However, as there are more and more new studies, the proponents need to come up with more and more "flaws". Eventually, they start listing things that the supportive studies did as well. And you end up with the double standard you see here.

Just to pour some gasoline on the fire, one example of this is PACE vs. Rituximab. Many people complained that PACE did not use enough objective criteria (it only had one), so it was flawed for that reason. But the Rituximab study, which those same people loved, used only subjective data for results! It had no objective measures at all, and the paper is very clear on that.

Joshua (not Jay) Levy
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Just to pour some gasoline on the fire, one example of this is PACE vs. Rituximab. Many people complained that PACE did not use enough objective criteria (it only had one), so it was flawed for that reason. But the Rituximab study, which those same people loved, used only subjective data for results! It had no objective measures at all, and the paper is very clear on that.
There is so much that is wrong about the above comment that I have nowhere near enough time to lay it all out. So just a few pointers...

The PACE trial was a massive £5m government study, carried out over several years, consuming almost all the government funding for ME research during those years, and it was a massively flawed and frankly dishonest, borderline fraudulent study in many, many ways. Among the many complaints that many reasonable people made about it were issues concerning the criteria it used (both for the definition of the condition studied and for the assessment of outcomes). The ways that PACE used subjective criteria for assessment of outcomes were severely flawed in multiple ways, even as subjective criteria. Objective criteria (though no measures of any biomedical parameters) were included in the original design of the study, but were dropped partway through, almost certainly because the authors discovered that these objective measures would prove that the small improvements found on the subjective measures were entirely illusory. There are perfectly valid, very strong reasons for patients to be outraged by the use of outcome criteria in PACE.

But the complaint about the PACE study not using the objective criteria that it originally said it was going to use bears no comparison with the much smaller Rituximab trial, which was a completely different kind of study. The Rituximab trial was a first study, an experimental study, designed to explore whether the treatment was effective or not, whereas the PACE study was a political study designed to provide usable evidence to argue the case that CBT and GET are effective treatments (the findings showed that they aren't, so it had to be spun to say the opposite). The Rituximab study aimed to discover knowledge (whether an intervention might be effective or not) whereas the PACE trial aimed to prove, by 'evidence-based' (sic) standards, that CBT and GET - established treatments already in widespread use - were worthwhile (an aim in which it failed dismally).

So it would be entirely inappropriate to compare apples and oranges here when critiquing the criteria of these two studies in detail. Nevertheless, CBT proponent van der Meer really did apply the double standard joshua levy ascribes to the PACE critics, by publicly criticising the Rituximab trial for only using subjective measures - just like all of his own work does, and just like PACE did. Whereas the critics of PACE on this forum did, in fact, regret the failure of the Rituximab trial to include objective criteria - and we also conducted additional statistical analysis which showed that the effect shown by the Rituximab trial's SF-36-based outcomes was considerably greater than that shown by the PACE results on the same format. Here, due to the use of SF-36 by the Rituximab trial, we were able to compare like with like to some extent - which is presumably why the Rituximab trial authors used the same criteria used by the CBT proponents.

So in summary:
- the two studies are different in so many ways that the comparison Joshua makes is invalid.
- the PACE trial has no excuse for omitting the objective criteria laid out in its experimental design.
- the Rituximab trial most probably used subjective criteria because those criteria have been set out as the 'standard' by people like the PACE authors and so such criteria were necessary for comparison and acceptance.
- critics of PACE did also regret the lack of objective criteria in Rituximab, and that's well documented right here on this forum, so this (unevidenced) part of joshua levy's criticism of patients is a strawman argument.
- critics of PACE had many, many other legitimate criticisms of PACE, which are infinitely more serious than any purported flaws in the Rituximab trial.
- patients prefer the Rituximab trial to the PACE trial because the latter is attempting to provide misleading evidence, to be used politically, for a series of propositions about CBT, GET and Pacing which are patently untrue, whereas the Rituximab trial is an honest attempt to determine whether a promising real treatment is effective or not.
- nevertheless, patients did, fairly, note the lack of objective criteria in the Rituximab study, and CBT proponents - recognised researchers - were, in fact, the ones who applied precisely the double standard that Levy ascribes to patient critics of PACE.

This is Alice in Wonderland stuff, Joshua. Or perhaps 'Through the Looking Glass' is most appropriate. Neither side holds ownership of any argument; it's either a valid argument or it isn't. And even when double standards are applied, as they are by individuals on both sides of any argument, a reasonable criticism is a reasonable criticism, whoever makes it, and regardless of what they may have said in the past.

As for the XMRV example: How can it make sense to argue that critics of a negative study may not use the same argument (lack of sequence integration) that those on the other side of the argument used against the original positive study? That lack of proof of sequence integration was always accepted as a weakness (not a fatal flaw) in the original studies; that 'gold standard' was obviously desirable. And if that's a valid criticism of Lombardi et al., then it's a valid point to make about preXMRV-1 in mice as well: if that has not been shown, then that's a weakness there too, though not a fatal flaw. And if that finding in mice lacks the gold-standard confirmation, then it's perfectly reasonable to say that leaves the door open for contamination to be the explanation, just as it leaves the door open for a contamination explanation in Lombardi et al. Why on earth should Paprotka et al. not be critiqued according to the same standards applied to Lombardi et al.?

So if what you're saying boils down to: "You can't use that argument; that was our argument against your work!!!", when the same argument is valid in both cases, then surely it's obvious to anyone where the true double standards are being applied?
 

RRM

Messages
94
As for the XMRV example: How can it make sense to argue that critics of a negative study may not use the same argument (lack of sequence integration) that those on the other side of the argument used against the original positive study? That lack of proof of sequence integration was always accepted as a weakness (not a fatal flaw) in the original studies; that 'gold standard' was obviously desirable.

Of course one may *use* the same argument. However, to assert that a certain study that one doesn't like is the likely result of contamination, almost solely based on an argument that also "hinders" (and much more strongly to boot) the study that the person in question supports, is inconsistent.

As it is, the data showing that only some NU/NU mice contain preXMRV-1 are very convincing and can easily be tested by people who doubt them (which is also what happened in this case, I might add). In any case, the data in this study are supportive of, and consistent with, the data in Paprotka et al., and do not in any way 'suggest' any 'problems' with the Paprotka et al. paper.

And if that's a valid criticism of Lombardi et al., then it's a valid point to make about preXMRV-1 in mice as well: if that has not been shown, then that's a weakness there too, though not a fatal flaw.

You are right in saying that showing integration sites in ALL of the samples that contain XMRV would provide even better evidence, but there are always things that would make the evidence even stronger. As far as I know, it's perfectly normal not to obtain integration sites from all samples in a single study.

Therefore, it would go too far to regard it as a flaw or a weakness. It's more like having a murder trial with CCTV footage, DNA evidence and two witnesses, all implicating the suspect. Even then, it would be nice to have had twelve witnesses. ;)


From your line of reasoning, I find this better compared with the fact that Lombardi et al. did not find XMRV in all samples tested but in "only" 67% of them (although it was later stated that almost all of the remaining samples contained XMRV too). I wouldn't call this a flaw of the original study either, nor would I consider this particular piece of data 'strongly suggestive' of contamination in itself, and neither did anyone else. You just don't expect these experiments to be 100% perfect.

So if what you're saying boils down to: "You can't use that argument; that was our argument against your work!!!", when the same argument is valid in both cases, then surely it's obvious to anyone where the true double standards are being applied?

Nobody has ever argued that Lombardi et al. should report integration sites in all of the samples that they asserted contain XMRV (or HGRV), and that the failure to do so would mean that their findings were most likely the result of contamination.

So, yes, it is really inconsistent to use this as an argument against the Paprotka findings, when you still support the Lombardi et al. findings that have found zero integration sites.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Many people complained that PACE did not use enough objective criteria (it only had one), so it was flawed for that reason.

The problem with nonpharmacological studies is that they are not blinded and are therefore subject to many biases. Using objective outcomes is necessary for studies where blinding is not possible. But on the PACE results, let me put it this way - if those were the results of an unblinded drug trial, that drug would not be approved.


PS, the PACE trial also deviated strongly from its original (published) protocol - including the primary outcome and the thresholds for 'clinically significant improvement' - and no data were provided which met the protocol criteria for 'recovery' (the authors later claimed they had planned to publish such data, but we still haven't seen it). Given how anticipated the results were, you have to ask why.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
There is so much that is wrong about the above comment that I have nowhere near enough time to lay it all out. So just a few pointers...


Thank you for explaining everything with such patience, Mark.
Sometimes I expect everyone to know everything that we know about the PACE Trial, but it's not a reasonable assumption to make.
So the PACE Trial does need explaining every now and again.
 

RustyJ

Contaminated Cell Line 'RustyJ'
Messages
1,200
Location
Mackay, Aust
As for the XMRV example: How can it make sense to argue that critics of a negative study may not use the same argument (lack of sequence integration) that those on the other side of the argument used against the original positive study? That lack of proof of sequence integration was always accepted as a weakness (not a fatal flaw) in the original studies; that 'gold standard' was obviously desirable. And if that's a valid criticism of Lombardi et al., then it's a valid point to make about preXMRV-1 in mice as well: if that has not been shown, then that's a weakness there too, though not a fatal flaw. And if that finding in mice lacks the gold-standard confirmation, then it's perfectly reasonable to say that leaves the door open for contamination to be the explanation, just as it leaves the door open for a contamination explanation in Lombardi et al. Why on earth should Paprotka et al. not be critiqued according to the same standards applied to Lombardi et al.?

So if what you're saying boils down to: "You can't use that argument; that was our argument against your work!!!", when the same argument is valid in both cases, then surely it's obvious to anyone where the true double standards are being applied?

Mark, I lack the scientific background to analyse some of Joshua's response, but the point you raised here was one I did pick up on but couldn't be bothered responding to. Unfortunately such a fundamental flaw in reasoning undermines the credibility of the remainder of Joshua's post. Thank you for going to the trouble to respond.

I might add that, apart from Silverman's contribution, the remainder of Lombardi et al., and Lo et al., has not been disproven, despite their withdrawal; they have just not been validated. Paprotka and others have not been subject to the same levels of criticism and, let me add, have not been validated either. There is a mechanism operating here which does not rely on objective scientific validation, but on value-laden consensus, which is nothing more than a scientific hegemony of self-interest.
 

RustyJ

Contaminated Cell Line 'RustyJ'
Messages
1,200
Location
Mackay, Aust
So, yes, it is really inconsistent to use this as an argument against the Paprotka findings, when you still support the Lombardi et al. findings that have found zero integration sites.

As far as I know, no poster has made this claim. I was under the impression that Cingöz et al. made it. I don't really know what Cingöz et al. think about Lombardi.
 

Sean

Senior Member
Messages
7,378
Just to pour some gasoline on the fire, one example of this is PACE vs. Rituximab. Many people complained that PACE did not use enough objective criteria (it only had one), so it was flawed for that reason. But the Rituximab study, which those same people loved, used only subjective data for results! It had no objective measures at all, and the paper is very clear on that.

Joshua (not Jay) Levy
The Rituximab paper did not use objective (or relatively objective) outcome measures of macro function, such as the 6-minute walk test, actometers, employment status, etc. But they certainly did use an objective, laboratory-based, physiological measure throughout the whole trial:

"B-lymphocytes
Lymphocyte subpopulations, including CD19 positive B cells, were determined in EDTA anticoagulated blood samples before treatment, and during all follow-up visits (2, 3, 4, 6, 8, 10 and 12 months)."


Data from this measure show that all those in the active treatment arm experienced B-cell depletion, but none of the placebo arm did. Combine this clear (and unsurprising) result with the marked difference in clinical response rates between the active treatment v. placebo arms (67% v. 13%), and you have a good connection between active pharmacological treatment and (subjectively assessed) clinical outcome.

However, just to make things interesting...

"These data confirmed the B-cell depletion in Rituximab-treated patients but could not separate the Rituximab responders and non-responders..."
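The gap Sean cites (67% v. 13% clinical response) can be sanity-checked with a Fisher exact computation. A minimal sketch, assuming 15 patients per arm (so 10/15 vs 2/15 responders; the arm sizes are not stated in the post and are used here only as an illustrative assumption):

```python
from math import comb

def fisher_one_sided(a, n1, b, n2):
    """One-sided Fisher exact p-value: probability of seeing at least `a`
    responders in arm 1, conditioning on the margins of the 2x2 table."""
    responders = a + b
    denom = comb(n1 + n2, responders)
    hi = min(n1, responders)
    return sum(comb(n1, k) * comb(n2, responders - k)
               for k in range(a, hi + 1)) / denom

# Hypothetical counts reproducing the quoted rates: 10/15 (67%) vs 2/15 (13%).
p = fisher_one_sided(10, 15, 2, 15)
print(round(p, 4))  # 0.0039: a difference this large is unlikely by chance
```

With these assumed counts the difference is comfortably significant, which is consistent with Sean's point that the B-cell depletion data plus the response-rate gap link the drug to the (subjectively assessed) outcome.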
 

RRM

Messages
94
As far as I know, no poster has made this claim.

What specific claim are you referring to? In any case, the original post (and not Cingöz et al.) specifically states that a lack of found integration sites in this single study is indicative of contamination.

I am merely saying that it is inconsistent to genuinely believe this and genuinely believe at the same time that the Lombardi et al. findings are A-OK.
 

jace

Off the fence
Messages
856
Location
England
If XMRV was produced by recombination events that are rarer than rocking horse droppings, then the genomic RNA of preXMRV-1 and preXMRV-2 (one strand of each) must have been packaged into the same viral particle; but obviously the proviruses must be in the same cell to start with.

There is no evidence of any RNA of any kind specific to preXMRV-1 or -2. There is no evidence of the existence of any virion proteins, nor is there evidence of preXMRV-1 or -2 integrated into the genomes of either the NU/NU or Hsd mice. Plus, there is no evidence of preXMRV-2 in the xenografts before they were inserted into mouse 2152, which was the precursor of the 22Rv1 cell line.

All we have is some speculative modelling of what might have happened if all the many assumptions built into the model were valid. The precise modelling for retroviral recombination in vivo is still a matter of considerable debate.

PCR assays with an analytical sensitivity far exceeding that of the one used in Paprotka have failed to detect XMRV in prostate tissue in which it could be detected by IHC (Schlaberg 2009).

We have competing hypotheses. The first is that the VP-62 strain was created by an astronomically unlikely recombination event supported by mathematical modelling but no experimental evidence; the second is that the PCR assays had insufficient efficiency of amplification when DNA from biological samples was examined, even though the theoretical limit of detection, derived by diluting pristine DNA in the laboratory, looked reasonable but was not as sensitive as assays which had proved inadequate in the past.

The work obviously needs to be validated and should be repeated using PCR assays with a history of being able to detect XMRV when present. At present there is no experimental evidence supporting the existence of preXMRV-1 and preXMRV-2 RNA, or the existence of proteins which would indicate the presence of the virions that are essential for the crossovers to occur in reality.

In the absence of proven integration sites for preXMRV-1 or -2 in the NU/NU or Hsd mice, the results indicate that preXMRV-1 is actually a PCR contaminant.
 

RRM

Messages
94
If XMRV was produced by recombination events that are rarer than rocking horse droppings

This is the misunderstanding on which much of the skepticism regarding Paprotka et al. is based.

Fact is that you are extremely rare, I am (phew), my fingerprint is, and so is the combination of the next three Powerball lottery drawings. All are rarer than "rocking horse droppings", but each of these events has occurred, or will occur.

The same with this "rare" event, the generation of XMRV. We know it was generated, because it exists. It's just extremely unlikely to have happened twice. Just like it is extremely unlikely that "Jace" happened twice, or "RRM" happened twice, or my fingerprint happened twice.
 

Esther12

Senior Member
Messages
13,774
PACE vs. Rituxumab. Many people complained that PACE did not use enough objective criteria (it only had one), so it was flawed for that reason. But the Rituxumab study, which those same people loved, used only subjective data for results! It had no objective measures at all, and the paper is very clear on that.

I noticed in another thread that you offered a similarly empty 'defence' of PACE, and then did not reply to any of the responses. It's a bit embarrassing.

If you read the main thread on the rituximab study, you would have seen a number of people (myself included) raising concerns about the outcome measures used. But even if we had not, that would have provided little reason to defend PACE or to point to some hypocrisy amongst its critics. The PLOS rituximab study was exploratory and double-blind, and as such is a potentially interesting piece of work. PACE was presented by its proponents in a quite different way - most recently to encourage NHS commissioners to fund CBT/GET because PACE supposedly showed a recovery rate of 30-40%. If you've read the PACE paper and the works it cites, then you should know how misleading this claim is. If you want to defend PACE, then do so properly - otherwise it's just like Jace throwing petrol on the XMRV fire to try to hide the fact that there is nothing of substance there to keep the flame going.
 

jace

Off the fence
Messages
856
Location
England
Well, I'm not concerned about opinions, but the facts speak for themselves once you drill down into the data.

Absence of evidence in Paprotka et al. (2011).


Paprotka et al. (2011) argue that XMRV was created during the derivation of the 22Rv1 cell line, by recombination of the genomic RNAs of two ERVs they call PreXMRV-1 and PreXMRV-2, co-packaged into the same virion.

Paprotka et al., led by Dr John Coffin, used subjective labels to describe these viruses. We will use slightly less subjective labels: hereafter the ERVs will be called EndoERV-1 and EndoERV-2, and XMRV will be given the objective label XMRV/VP62.


XmU3f and GAGr primers

"To quantify the amount of XMRV DNA in the CWR22 xenografts, we developed a real-time PCR primer-probe set that specifically detected XMRV env and excluded murine endogenous proviruses present in BALB/c and NIH3T3 genomic DNA (Fig. 1C). We used quantitative PCR of 22Rv1 DNA to estimate 20 proviruses/cell and used the 22Rv1 DNA to generate a standard curve. The CWR22 xenografts had significantly fewer copies of XMRV env (<13 copies/100 cells) compared to the 22Rv1 cells (2000 copies/100 cells). The CWR-R1 cell line had ~3000 copies/100 cells, and the NU/NU and Hsd nude mice, thought to have been used to passage the CWR22 xenograft, had 58 and 68 copies/100 cells, respectively. (1)

The term "XMRV specific" is misleading. The primer probe set (primer 3f-8r) was complimentary to sequences in the env region of XMRV/VP62 and ERV-2. The term XMRV encompasses a wider range of variability than is allowed for here.

The real-time PCR actually established a figure of 68 copies per 100 cells for EndoERV-2 (in the NU/NU and Hsd mice)* and 3 copies of EndoERV-2 per 100 cells in the CWR22 cell line. There is no history of this primer being able to detect XMRV/VP62 env sequences at a concentration of less than 2000 copies per 100 cells. The serial dilution method used here has come in for a great deal of criticism, and the copy number estimates in mice and the early CWR22 xenografts are highly likely to be unreliable.

Primer 3f-8r using a single round PCR was unable to detect XMRV/VP62 in the later xenograft.


The following information is erroneous:

The absence of XMRV in the CWR22 tumor and early passage xenografts. Using qPCR assays, we estimated that the early xenografts contain ~1–3 XMRV env copies/100 cells (Fig. 1C), which correlated with the amount of mouse DNA in the early xenografts (0.3–1%; Fig. 1D), and the estimated 1 XMRV env copy/cell in the NU/NU and Hsd nude mice (Fig. 1C). (2)

This is incorrect from the initial paragraph. The provirus detected was EndoERV-1, not XMRV. There is no information on the copy number of XMRV below 2000 copies per 100 cells, and the proviral copy number was at most 68 copies per 100 cells, not the 1 copy per cell (100 copies per 100 cells) quoted above. The authors assume that the assay could have detected XMRV, if present, even though this primer could not detect XMRV in later xenografts when used with the single round PCR.

The following is particularly troubling.

All six of these PCR primer sets had 100% identity to the published XMRV sequences, and could amplify XMRV from control infected cells as well as PreXMRV-1. (2)

The early xenografts were not screened with XmU3f and GAGr. GAGr is the primer that could amplify only XMRV gag. This primer was not able to amplify XMRV from later xenografts either, using the single round PCR, yet it was the primer used to determine the analytical sensitivity of the single round PCR assay. Analytical sensitivity for a PCR assay using the same reagents and cycling conditions is primer specific: vary the primers and the analytical and clinical sensitivities are very likely to change.

The results in Figure 2E for XMRV are not produced using primers that can 'only' amplify XMRV (XmU3f and GAGr), because that primer could not amplify XMRV from later xenografts either, hence to have a column for XMRV alone is misleading. The later xenografts were not examined for the presence of XMRV using the quantitative PCR with the 3f-8r primers. Thus we do not know whether XMRV could be detected using that assay.


Variables governing a PCR assay

There are a number of variables that govern whether a PCR assay can detect a target template in nucleic acid extracted from a biological sample, and the primer sequence is but one of them.

Absolute and relative concentrations of reagents in the reaction tube.
Choice of annealing times and temperatures.
Concentrations of oligonucleotides, primers and magnesium, salt, buffers.
Concentration of the target template.
Quality of DNA or RNA in the biological sample.
Presence of inhibitors in the biological sample or as a result of the nucleic acid extraction process.

All of the above are crucial variables.

Therefore it is unsafe to assume that, just because a PCR reaction can amplify pristine DNA or copy DNA from serial dilutions in a lab, it can do so from nucleic acid extracted from a biological sample. Indeed, quantitative real-time PCR is very susceptible to the presence of inhibitors, which reduce the amplification efficiency of the PCR reaction.

The calculation of copy numbers by the software used to analyse quantitative real-time PCR results is based on the assumption that the amplification efficiency of PCR is the same in all reactions considered.
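For illustration only (a generic sketch of standard-curve quantification, not the actual assay or software from the paper; the function names are invented): qPCR software converts a Ct value to a copy number via a standard-curve fit, and that conversion silently assumes the amplification efficiency in the sample matches the efficiency of the dilution series used to build the curve.

```python
def copies_from_ct(ct: float, slope: float, intercept: float) -> float:
    """Estimate starting copy number from a Ct value via a standard-curve
    fit of the form: ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def amplification_efficiency(slope: float) -> float:
    """Per-cycle efficiency implied by the standard-curve slope
    (E = 10**(-1/slope) - 1; perfect doubling gives slope of about -3.32)."""
    return 10 ** (-1.0 / slope) - 1.0
```

If inhibitors in a biological sample lower the efficiency below that of the pristine dilution series, the same formula silently under-reports the true copy number.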

The results from Paprotka (2011) confirm the importance of the concentrations of reagents in the reaction tube: the qPCR reaction with one master-mix, using the 3f and 8r primers, was able to detect EndoERV-2 in NU/NU and Hsd mice, but the same primers with a different master-mix in the single round PCR were quite unable to do so.

We also have the situation where this primer was able to detect EndoERV-1 in the early xenografts, where the copy number of EndoERV-1 was much lower. This apparent paradox strongly suggests the presence of inhibitors in the DNA extracted from the mice, affecting this very short primer sequence differentially, as other, longer primers were able to detect EndoERV-2 in these mice.

There is no information regarding the sensitivity of the quantitative real-time PCR assay using the XMRV-only gag primer used to screen the lab and wild-derived mice, hence those results have little meaning. These wild-derived mice were described as wild in the paper, which is incorrect. EndoERV-1 was not isolated as a whole provirus from NU/NU or Hsd mice.

Paprotka et al. claim that their assay would have been sensitive enough to detect XMRV in prostate cancer tissue at the concentrations found by Schlaberg et al. (3) (2009) in one patient using IHC: 1 provirus per 660 cells. The assay constructed by Schlaberg et al. used different primers and different cycling conditions, and had a theoretical limit of detection of 1 XMRV copy per PCR reaction tube. This is far more sensitive than the assay used in Paprotka et al. (2011), even if the latter's lower limit really was 1 copy per 100 cells. Yet that PCR assay detected XMRV in only about 25% of the people found positive using IHC.

Analytical sensitivity determined by serial dilutions of pristine DNA in a lab is no measure of a PCR reaction's ability to detect a target sequence extracted from a biological sample, as the experience of Schlaberg et al. so graphically demonstrates.

Scientific orthodoxy would demand that the xenografts be screened using assays known to be far more sensitive than the single round assay used in Paprotka, such as nested PCR or nested reverse-transcriptase PCR, or indeed the quantitative real-time PCR devised by Schlaberg et al. These have a history of being able to detect XMRV in prostate tissue; the assays used in Paprotka do not.

Relying on probabilistic arguments and ignoring the issues inherent in detecting target sequences from biological samples is unsafe. History shows that the ability or otherwise of a PCR assay to detect XMRV in prostate tissue depends on choice of primers, cycling conditions and the master mix. These parameters need to be optimized in order to maximize the chances of detecting XMRV if present. Current evidence would also suggest that IHC is a far more sensitive technique for detecting XMRV in prostate tissue than PCR. It is therefore puzzling why it was not used here.


Poisson distribution

As copy number falls, the number of replicates needed to detect XMRV in DNA isolated from a biological sample rises dramatically, according to the dictates of the Poisson distribution. The apparent detection of EndoERV-1, but not XMRV, in the early xenografts may well result from an inadequate number of replicate PCR runs being undertaken. This would seem reasonable, as the early xenografts directly ancestral to the 22Rv1 cell line were found not to contain PreXMRV-2 (Figure 2E, Paprotka).

It is difficult to propose that XMRV was a result of an astronomically unlikely recombination event between two endogenous proviruses if one of the proviruses was not present in the xenografts in the passage immediately before the recombination event allegedly took place (see figure 1A). The experiment certainly needs to be repeated using the statistically appropriate number of replications.
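To illustrate the point about replicates (a generic sketch, not a calculation from either paper): if template copies per reaction follow a Poisson distribution with mean λ, a single reaction contains no template with probability e^(-λ), so the number of replicates required for reliable detection grows quickly as copy number falls.

```python
import math

def replicates_needed(mean_copies_per_reaction: float, detect_prob: float = 0.95) -> int:
    """Minimum number of PCR replicates so that at least one reaction
    contains a template copy with probability `detect_prob`, assuming
    copies per reaction are Poisson-distributed."""
    p_empty = math.exp(-mean_copies_per_reaction)  # P(one tube holds no template)
    # need p_empty ** n <= 1 - detect_prob
    return math.ceil(math.log(1 - detect_prob) / math.log(p_empty))
```

At 1 copy per reaction, 3 replicates give a 95% chance of at least one positive tube; at 0.1 copies per reaction, around 30 replicates are needed.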

One must now turn to the probability of such a recombination event taking place at all. The authors assume this is fact and base their calculations accordingly. They also assume extramolecular and intramolecular recombination.

"(we) assume that the number of crossovers is distributed according to a Poisson distribution"

"Assuming that crossovers can only occur in the 111 blocks of identity"

"To assign a probability of observing a given pattern we assume that the selection of template for initiation of DNA synthesis is random, the selection for the acceptor template of minus?strand DNA transfer is random, the identified ? 20?nt identity blocks share the same probability of recombination, and recombination events are independent, i.e. recombination at one block does not affect the probability of a recombination event at any other block. Given these assumptions, the probability of observing a given pattern"

"It is well established that DNA synthesis can initiate from either RNA template, and minus?strand DNA transfer can occur both inter? and intramolecularly. Hence, we hold these two variables constant, assume that the number of crossovers is distributed according to a Poisson distribution, and examine the effect of recombination frequency."

"For example, using the average of 4 crossovers described in the
literature, the probability of observing a second independently derived provirus with the same 6 crossovers is 1.3 10?12."
(The above five quotes from reference 2)​
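For a sense of how such a figure combines a Poisson term with a combinatorial term, here is a rough sketch under the quoted assumptions only (the paper's full model includes further template-selection factors, so this does not reproduce the 1.3 × 10⁻¹² figure exactly):

```python
import math

def poisson_pmf(k: int, mean: float) -> float:
    """Probability of exactly k crossovers, Poisson-distributed with the given mean."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

def same_pattern_probability(n_blocks: int, n_crossovers: int, mean_crossovers: float) -> float:
    """Chance that an independent recombinant shows the same crossover pattern:
    P(exactly n crossovers) divided by the number of ways to choose those
    crossover blocks out of n_blocks, all blocks assumed equally likely and
    independent (per the quoted assumptions)."""
    return poisson_pmf(n_crossovers, mean_crossovers) / math.comb(n_blocks, n_crossovers)
```

Even this simplified version of the calculation (111 identity blocks, 6 crossovers, mean of 4) lands in the 10⁻¹¹ range; the point is not the exact value but that the entire result stands or falls with the equal-probability and independence assumptions quoted above.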

The entire argument is based on a series of assumptions, which may or may not hold. The authors also treat their hypothesis - that XMRV was created by their proposed crossover - as fact. This has not been established, as alternative explanations for their observations are available, however unlikely the authors deem them to be. By their own figures, the likelihood of this event happening in the first place is 1.3 × 10⁻¹².

Thus we have two competing explanations: either the PCR assay had insufficient clinical sensitivity to detect XMRV at low copy number, for all the reasons discussed above, or XMRV was created by a series of crossovers with a likelihood of occurrence of 1.3 × 10⁻¹².

The accuracy of these competing hypotheses should be tested experimentally and not determined using probabilistic arguments based on the existing preconceptions of the authors. A number of points presented as factual in the paper are clearly not, and a number of diagrams and assertions are misleading. At the very least they need correction via the standard procedures.


ERV-1, ERV-2 and the mice.

The mice tested in Paprotka were NOT from the same supplier as the mice used for xenografting to create the 22Rv1 cell line. The nude mice used to create the cell line were obtained from the Athymic Animal Facility at Case Western, whereas the mice tested in Paprotka were from:

Taconic (NCR nude), Harlan Laboratories (Hsd nude), Charles River Laboratories (BALB/c nude, NIH-III nude, NIH-Swiss, and NU/NU nude), and Jackson Laboratory (2)

The nude mice Paprotka et al. claim are likely to have been used for in vivo passages of the xenograft (NU/NU and Hsd) are also different from those in which EndoERV-1 and EndoERV-2 were shown to be integrated in Cingoz (2011): EndoERV-1 was found integrated in C57L/J, and EndoERV-2 in DBA/2J and 129X1/SvJ. All of these mice are hairy and could not have been used to create the 22Rv1 cell line.

Only a small region of EndoERV-2 was found in Hsd mice from Harlan Sprague Dawley laboratory and no trace of EndoERV-2 was found in the NU/NU mice from the Charles River laboratory (Fig. S3B and S3C, Paprotka).

EndoERV-1 was also incomplete in NU/NU mice (Fig. S6A and S6B, Paprotka).

Hsd is also not a specific strain of mice but the Harlan Sprague Dawley laboratory. Therefore it is not known which strain or strains of mice were tested in Paprotka under the heading Hsd.

Mice tested in Paprotka were from a different supplier than those used to create the 22Rv1 cell line.
EndoERV-1 and EndoERV-2 are integrated into non-nude strains not used to create the 22Rv1 cell line.
No trace of EndoERV-2 found in NU/NU mice.
Incomplete EndoERV-1 found in NU/NU mice.
Only a fraction of EndoERV-2 found in Hsd mice.

There is no evidence that EndoERV-2 (PreXMRV-2) was present in the cells or the mice used for xenografting to create 22Rv1 cells. Thus, XMRV/VP62 cannot be said to have been created during the construction of the 22Rv1 cell line.


REFERENCES:

1. Paprotka T, et al. Recombinant Origin of the Retrovirus XMRV. Science 2011. http://www.sciencemag.org/content/early/...ce.1205292 doi:10.1126/science.1205292

2. Paprotka T, et al. Supporting Online Material for Recombinant Origin of the Retrovirus XMRV. www.sciencemag.org/cgi/content/full/science.1205292/DC1 doi:10.1126/science.1205292

3. Schlaberg R, Choe DJ, Brown KR, Thaker HM, Singh IR. XMRV is present in malignant prostatic epithelium and is associated with prostate cancer, especially high-grade tumors. Proc Natl Acad Sci U S A 2009, 106:16351-16356.



*added in edit, should have been in the OP. Sorry.
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
That is Gerwyn's 'facts' I believe Jace - is that not correct? Only V99/Tango posted this on the other forum saying it was Gerwyn's 'latest' email to Mr Tuller: http://www.mecfsforums.com/index.php/topic,11360.msg131603.html#new

Not that I've read it of course. Couldn't understand half of it though it could be very accurate and a worthy effort for all I know. I do wonder what Mr Tuller will make of it though... Seems an odd choice of recipient for such a thing and I hope nothing ill comes from it.
 

jace

Off the fence
Messages
856
Location
England
Hi Fire, as you know, I work with G and V on projects, I do the housekeeping and they have the brains. Gerwyn has had an extended correspondence with Tuller, I believe this is in response to Tuller's second or third reply.

Here are the basic points, to make it easier to understand:

  1. Paprotka et al used the label XMRV to mean the XMRV clone VP62, the 'Silverman Contaminant'
  2. The primers they used were set only to discover VP62 or parts thereof
  3. The primers were too specific to discover anything else
  4. They made a point of the fact that their primer sets had 100% the same identity as VP62, their 'XMRV'
  5. The early tests did not use the primer that can amplify GAG (a portion of the RNA) but the later tests were with GAGr primer and those tests were negative. Despite this they used that primer to determine how good their tests were overall. There is no evidence in the paper that XMRV could be detected at all with that assay (test)
  6. There are a lot of different things that you can alter in a PCR test. Alter one and you change a variable, and if you get a different result than others having altered your test, well then it could be just because you altered your test. Think of a chocolate cake recipe - if you leave out the baking powder, you will get a heavier cake. If you change the chocolate for vanilla, you won't get a chocolate cake at all (boo). The original recipe writer was not wrong, it is rather that you changed the recipe, and that could be why your results are different.
  7. Though they say in Paprotka et al that their PCR was the same as the PCR used by Schlaberg et al, it was not, in fact. It was far less sensitive.
  8. Paprotka et al relies on probabilities, they think something was so. That does not make it so. The evidence is not there.
  9. The group of short quotes from Paprotka use the word 'assume' five times. Assume makes an ass out of U and me
  10. Their basic theory is that XMRV was made in a lab in the early 1990s, in the cell line 22Rv1. Our piece lays out clearly that the mice used in the creation of the 22Rv1 cell line are not the same as the mice used in Paprotka et al.
 

RustyJ

Contaminated Cell Line 'RustyJ'
Messages
1,200
Location
Mackay, Aust
(RustyJ here quotes jace's "Absence of evidence in Paprotka et al. (2011)" post, above, in full.)

And yet one of our esteemed posters suggests the Paprotka study was a very good one, unlike the Lombardi study.