
Discordancy and the BWG 3

jace
Off the fence · Messages: 856 · Location: England
Another possible explanation for the discrepant BWG results (that has probably been mentioned, but I don't recall seeing) is a failure in the coding/decoding of samples. Because the entire post hoc analysis depends on accurate coding, a failure here would produce meaningless results that might appear meaningfully damaging to a hypothesis, even if the remainder of the experimental process is sound.

I've always found it odd that such a crucial element of these studies is generally glossed over. There seems to be an assumption that the coding is competent. Yet we know nothing about how it was done, who did it, or how its integrity was ensured and secured. As far as I can tell from the supplemental materials, the final coding was done at BSRI. Call me skeptical, but all the political fanfare they engaged in around publication didn't exactly inspire much trust in me.

There are a few peculiarities in the BWG data that certainly don't prove anything but seem to line up with this possibility. Imagine, for the sake of argument, that the positives found within each panel were actually from positive patients but mis-coded as coming from a random selection of patients:
Ruscetti and WPI each found 22 positives/intermediates by serology (where each had 30 positive patient samples and 72 total samples). This would represent a 73% detection rate among positives and 31% overall. Strange that whatever they were detecting was being found at the same overall percentage in both labs. Even if they weren't finding "XMRV," and even if the coding was correct, why was no concern given to the fact that they were finding something at equal percentages?

Ruscetti found 9 non-control positives by culture (where they had 15 non-control positive patient samples). This would represent a 60% detection rate among non-control positives and 30% of non-control samples overall (70% and 40% respectively if positive controls are included). So we see that the aggregate detection percentages concord roughly with serology, even though the coding-dependent source results do not. (Another tangential question here: why weren't the results of any of the positive cultures sequenced? This seems like an appalling lack of investigative curiosity.)
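
(For concreteness, here is a quick back-of-the-envelope check of that arithmetic, using only the counts quoted above; this is my own illustrative Python, not anything from the paper:)

    # Counts as quoted in this post (not taken from the published tables).
    serology_hits, serology_positives, serology_total = 22, 30, 72
    print(serology_hits / serology_positives)  # 0.733 -> ~73% among positives
    print(serology_hits / serology_total)      # 0.306 -> ~31% overall

    culture_hits, culture_positives, culture_total = 9, 15, 30
    print(culture_hits / culture_positives)    # 0.60 -> 60% among non-control positives
    print(culture_hits / culture_total)        # 0.30 -> 30% of non-control samples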

In each PCR panel, the WPI had the exact same number of non-control positives as they did control negatives. This could be explained by mis-coding.

In summary: If looked at in aggregate and without concern for the patient source of individual samples, the serology and culture findings have positivity percentages that concord with not only one another but also with what could reasonably be expected given the underlying count of positive and negative samples. It is only when these findings are mapped to the underlying sources, a process entirely dependent upon the integrity of coding, that this concordance breaks down.

It is on the basis of this latter sample-source discordance that the entirety of the paper's analysis was conducted, and upon which the results are deemed damaging to the HGRV hypothesis, yet there is not an ounce of data about the coding process, the essential part of the study that transforms potentially meaningful aggregate results into apparently meaningless source results. This hypothesis also has the advantage that the results in BWG 3 would make sense in the light of the results produced by BWG 1 and 2. Currently, the results in BWG 3 make no sense when viewed in the light of the results produced by BWG 1 and 2.
 

Esther12
Senior Member · Messages: 13,774
The coding is a pretty basic issue, and something that would take incredible incompetence or really clear-cut intentional fraud to get wrong.

Anything could go wrong... but this is no more likely than the possibility that the researchers forgot what they were doing, and ended up testing for EBV instead.

Maybe they accidentally got all the blood samples from cats? When we start reaching for those sorts of explanations to justify the results and preserve the possibility of Mikovits'/Ruscetti's testing being able to distinguish between control and CFS samples, it shows how compelling the data from the BWG study actually was.

Re the similar detection rates: we don't know what the researchers' expectations were, or how those would have affected their testing procedures. The whole point of the blinding was to remove concerns about this.
 

RRM
Messages: 94
Concerning the PCR-only results:

This cannot be reasonably explained by mis-coding. After all, all labs sequenced their positive PCR results. All positive controls (including the 3 out of 5 WPI controls that tested positive) that were spiked with "22Rv1 XMRV" exactly matched the 22Rv1 XMRV sequence. However, the two pedigreed negatives that tested positive in WPI's lab differed from "22Rv1 XMRV" by two and one base, respectively. This is consistent with getting your samples contaminated with 22Rv1 after some culturing, or with having (again) extremely bad luck.

Concerning detection rates:

Around 30% is really what one would expect in the case of contamination. After all, when you are testing your assays in-house, you cannot have a detection rate that is too high (because of the healthy controls) nor one that is too low (or you'll need too many "retests" on your patient samples). With 30%, about 65% of patients will turn up as positive after three tests, and that is about enough (of course, as long as you don't do the same with the controls).
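
(As a quick check of that 65% figure, assuming three independent tests each with a 30% chance of coming up positive; this is just my own illustration of the arithmetic:)

    # Chance of at least one "positive" in three independent tests at a 30% hit rate.
    p = 0.30
    print(1 - (1 - p) ** 3)  # 0.657 -> about 65% of patients positive after three tests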

Concerning not-sequencing culture results:

Although I was also under the impression that Lombardi et al. did originally "just" perform PCR after the culturing process, the BWG paper mentions that Ruscetti chose to use the Lombardi et al. western blot assay (now infamous for the use of 5-aza) after culturing. This means that positive results could not be sequenced.
 

asleep
Senior Member · Messages: 184
(In full disclosure, the original post was written by me on another forum).

Concerning the PCR-only results:

This cannot be reasonably explained by mis-coding. After all, all labs sequenced their positive PCR results. All positive controls (including the 3 out of 5 WPI controls that tested positive) that were spiked with "22Rv1 XMRV" exactly matched the 22Rv1 XMRV sequence. However, the two pedigreed negatives that tested positive in WPI's lab differed from "22Rv1 XMRV" by two and one base, respectively. This is consistent with getting your samples contaminated with 22Rv1 after some culturing, or with having (again) extremely bad luck.

Of course this raises a larger question: Why were positives from non-spiked samples--sequenced as distinct from "22Rv1 XMRV"--simply written off as "contamination"?

If these were sequencing errors, then the original issue of potential mis-coding still applies. If they were not sequencing errors, then we are left with the finding of non-22Rv1 HGRV sequences from human samples being outright ignored.

As for this being "consistent with getting your samples contaminated with 22Rv1 after some culturing," this very BWG study actually provides pretty strong evidence against that exact claim. From the SOM (emphasis mine):

XMRV negative non co-cultured LNCaP cells were subcultured in the same hood following the subculturing of the cells co-cultured with samples spiked with 22Rv1 cells. They became positive after 19 days confirming the possibility of false positives due to spread of 22Rv1 virus. This is not an issue in the original culture experiments as no XMRV infected cell line was cultured in the same laboratories as patient cells. The virus that spread to negative LNCaP cultures was sequenced in the gag region as above and found to be identical to the spiked virus.

Hence, they directly observed the effects of this exact rationalization you offer (contamination from 22Rv1 after culturing) and found that all such sequences were identical to 22Rv1. No variation was observed in the gag region from cultured 22Rv1 contamination, which is exactly the same region where WPI found 3 distinct and unique sequences from human samples.

If these sequences are therefore not from 22Rv1 contamination, then I'm not sure what "bad luck" you might be referring to, unless you mean being burdened with an assay capable of occasionally detecting diverse HGRVs in an oppressive political climate.

Concerning detection rates:

Around 30% is really what one would expect in the case of contamination. After all, when you are testing your assays in-house, you cannot have a detection rate that is too high (because of the healthy controls) nor one that is too low (or you'll need too many "retests" on your patient samples). With 30%, about 65% of patients will turn up as positive after three tests, and that is about enough (of course, as long as you don't do the same with the controls).

You haven't provided any evidence for your claim about 30% being expected, unfortunately. (The rest of what you say here just extrapolates from this unsupported claim.)

Consider previous "contamination" papers: Robinson et al. found purported "XMRV contamination" in 21/437 = 4.8% of samples; Oakes et al. found purported contamination in 21/148 = 14% of samples; and I recall a few 0/0 studies that found evidence of XMRV in a very small percentage (<< 30%) of samples that was then written off as contamination.

So I don't see why it's obvious that two distinct labs would have identical rates of contamination that differ from most prior observed rates of contamination (esp. when the prior rates were all distinct from one another, implying consistency in rates is unexpected).

Furthermore, this 30% was observed consistently not just across two distinct labs (WPI, Ruscetti), but also across distinct tests: serology and culture (for proteins). Even if one believes that the culture results are due to contamination with protein-producing virions, this doesn't explain the serology. Since a contaminant essentially cannot elicit a novel antibody response in extracted blood, the 30% in both serology panels would have to represent either contamination with cross-reacting antibodies (a distinct type of contamination from the culture contamination, yet at the same rate) or the prior existence of said cross-reacting antibodies in the human samples themselves (again, at a strangely consistent rate). The former explanation further stretches credulity about this consistent rate being due to contamination (so many darn coincidences!), while the latter explanation raises the question of why the other serology labs didn't detect this pre-existing cross-reaction.

Concerning not-sequencing culture results:

Although I was also under the impression that Lombardi et al. did originally "just" perform PCR after the culturing process, the BWG paper mentions that Ruscetti chose to use the Lombardi et al. western blot assay (now infamous for the use of 5-aza) after culturing. This means that positive results could not be sequenced.

This was not really one of my main points originally, more of a parenthetical observation. Nonetheless, you are right that the use of WB (which I had forgotten too) prevented this. That said, the observation about lack of investigative curiosity is still pertinent with respect to the non-22Rv1 PCR positives you bring up in your first point. Rather, it's worse than a mere lack of curiosity: they did not simply forgo sequencing; instead, they actually performed sequencing and just ignored the results (distinct gag sequences) that didn't mesh with their conclusions.
 

RRM
Messages: 94
Of course this raises a larger question: Why were positives from non-spiked samples--sequenced as distinct from "22Rv1 XMRV"--simply written off as "contamination"?

There are not one but two independent and convincing reasons for this:

1. They were not ancestral or "distinct" but phylogenetically derived from 22Rv1/VP62. Slide 27 of the webinar on the Simmons et al. paper shows this.

2. The healthy controls that were positive were positive for XMRV in plasma. These healthy controls were very well pedigreed as "plasma negative" by PCR, serology AND culture by both Mikovits and Ruscetti (as well as a couple of other labs).

As for this being "consistent with getting your samples contaminated with 22Rv1 after some culturing," this very BWG study actually provides pretty strong evidence against that exact claim. From the SOM (emphasis mine)

[...]

Hence, they directly observed the effects of this exact rationalization you offer (contamination from 22Rv1 after culturing) and found that all such sequences were identical to 22Rv1.

That quote is actually supportive of my position.

The important thing to bear in mind is that no one actually thinks these (as you call them) "distinct" PCR sequences were the result of contamination from the spiked samples that were supplied to WPI by the BWG. If the positive labs suffer from contamination, it stands to reason that this was already a problem in their lab before the BWG sent them samples. Having contamination for two years is something quite different from having a sequence in your lab for three weeks or so. Therefore, it is to be expected that XMRV contamination from WPI/Ruscetti will be close to known sequences but not necessarily exactly the same.

Now, your quote from the BWG paper actually shows that one of the original scientists (Ruscetti), both of whom had supposedly "never" suffered from XMRV contamination, contaminated his lab with XMRV, apparently THE VERY FIRST TIME that someone provided him with an independent experiment to check on this.

In short, your quote from the paper just proves that the original scientists cannot control for contamination as well as they had led us to believe.

If these sequences are therefore not from 22Rv1 contamination, then I'm not sure what "bad luck" you might be referring to,

Remember that I was responding to your "messed-up-blinding" hypothesis. Under that hypothesis, the two positive healthy controls would actually be the two spiked controls that were deemed negative by WPI.

Now, it would require a substantial amount of bad luck to then have the three exact 22Rv1 sequences ending up in the spiked control panel, and the "not-quite exact" 22Rv1 samples in the pedigreed control panel. After all, you would expect all five sequences to be exactly the same as those reported by the other labs, or to have one of the "not-quite exact" 22Rv1 sequences in the spiked control panel. As it is, the two "not-quite exact" 22Rv1 samples were the only two that were not exactly the same as all other spiked control samples (33 for plasma alone). The same applies to a possible sequencing error: all 33 other control samples were without errors, and therefore it would require a lot of bad luck if the two positive controls that were actually miscoded were also the only two that suffered from a sequencing error.
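
(To put a rough number on that "bad luck": assuming the two divergent sequences could equally well have landed on any two of the 35 sequenced spiked-control plasma samples, the chance that they land on precisely the two samples the miscoding hypothesis needs is tiny. This is only my own illustrative sketch of the argument, not a calculation from the paper:)

    from math import comb
    # 33 spiked-control plasma sequences matched 22Rv1 exactly; the 2 divergent ones
    # were exactly the 2 samples that the miscoding hypothesis would have to relabel.
    total_sequenced = 33 + 2
    print(1 / comb(total_sequenced, 2))  # ~0.0017, i.e. roughly a 1-in-600 coincidence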


Thus, it's not something that miscoding can reasonably explain.


You haven't provided any evidence for your claim about 30% being expected, unfortunately.

Well, it's funny that you say this. Actually, I have argued before about 30% being the "hotspot" for self-delusion in this context. I hope you'll take my word for it, because I am not sure where I did, but it was in the context of the VipDx lab results, which were also along the lines of 30-35%. The examples of Oakes et al. and Robinson et al. have no merit in this discussion, as those did not involve experiments where possible confirmation bias is an issue.

As it is, inference is also evidence, and I merely intended to show, without making my post excessively long, that the data does not specifically support your hypothesis of miscoding (because it equally supports the alternative hypothesis).

However, since you insist, there is other evidence in support of my hypothesis, both from the culture and serology experiments.

First, for the culture test, Ruscetti cut down his culturing protocol from 42 to 19 days (Mikovits from 42 to 21, although she didn't finish). This specifically supports my explanation that he tweaked his assay to reach a certain positivity rate. Otherwise, even if his techniques had gotten better, it would still stand to reason to keep the protocol at the full 42 days, to "catch" as many positives as possible. Therefore, whether his results are true or false, he basically chose a positivity rate beforehand.

Second, if you read the description of the serology test in the BWG paper, you'll notice that this test was not really an "absolute" test, so to speak. Positivity was determined through comparison with control samples (and much more so than in other experiments). This also lends itself perfectly to determining a "cut off" signal that lets you effectively choose the number of positives you'll detect beforehand.

After the BWG fiasco, I would argue that the positivity rate for the Lipkin study will drop somewhat. People tend to be a little more careful after such an experience, and they probably included some more controls in their pre-study calibration experiments. Anything between 20-30% would not surprise me.

Rather, it's worse than mere lack of curiosity: they did not simply forgo sequencing, instead they actually performed sequencing and just ignored the results (distinct gag sequences) that didn't mesh with their conclusions

Nope.

However, it really boils down to the first point at the beginning of my post, and as I said there, they even did an extensive phylogenetic analysis on those sequences. I might add that in the case of the Lo et al. study, for instance, this was something that other researchers had to do because Lo et al. didn't do it themselves.

On a final note: experimentally checking for miscoding in the BWG samples is actually quite simple for Mikovits/Ruscetti, if you still believe in the validity of your argument. If they still have some of the experimental material (and I am quite sure that they do), Mikovits/Ruscetti could select one, two, or all of the samples (depending on the money they want to spend on something like this) and let a lab they trust perform an STR analysis on those samples and the corresponding patient samples from their own repository. After all (and unlike the Lipkin study), WPI provided 10 of these patients themselves, and it is therefore quite easy to check these samples for matching DNA.

Would you also regard this as an "appalling lack of investigative curiosity" given your concerns?