
anyone going after the defreitas retrovirus??

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Thanks RRM, that is the only evidence I am aware of. I don't think it's accurate to say that this means the patient and control samples were "handled differently".

They were collected from different sources, at different times, but in the same states (therefore in geographically diverse locations), and the control samples were drawn in 2004-2007, so they too must have been frozen. Whether the control samples were also stored in the WPI's repository is not stated. Whether the collection procedures were identical is not stated. But they may well have been - one would presume that efforts were made to ensure that they were - so these statements in themselves do not establish as "fact" that the samples were "handled differently".

When they say in the supporting material that "Samples were prepared within 6 hours of blood draw and frozen immediately in -80°C or liquid nitrogen depending upon the sample type" that is the part of the [collection] process that might be said to constitute "handling differently", but it is not stated here that the controls and patient samples were handled any differently in this respect prior to freezing. So again, while it's possible that the samples may have been collected in a different way, before freezing, that is not stated here.

As I pointed out when I reviewed Singh's paper after Barb commented on it, Singh emphasises that the type of difference in source of samples that we are discussing here applied to all the XMRV research, including the subsequent negative studies (prior to her own), and was considered standard practice. Most of the studies I have seen (McClure's included) took the patient samples from some bank of ME/CFS patient samples (in McClure's case, interestingly, a bank of samples previously used for some previously-unknown research which tested ME/CFS samples for another retrovirus some years ago, which raises interesting questions) and compared them with banked (frozen) samples from controls. This was not considered to be an important or confounding issue; it was normal practice - so it's misleading to present it as if it were not.

I am surprised to learn that this unscientific approach, and the assumptions it entails, is not and was not unusual practice in this kind of research, at least according to what I have read. The lesson to learn, I think, is that in future, when patient samples are drawn, matched control samples should always be drawn in parallel and banked together (if they are being banked). There should be no such thing as a bank of samples that is not matched with control samples in the bank - that is potentially a worthless bank. So that ought to be standard practice: it is one of the lessons that everyone could learn from this. In future, all samples should be matched with controls right from the point of collection, and handled identically. And this is what I have been pointing out in my post above and elsewhere: learn the lessons from this. It is not good enough to use it as an excuse to discredit someone you don't like because you see them as a bit of a maverick, and not good enough to assume that this was a sloppy error rather than something systematic that everyone does routinely (including Robin Weiss, when the same thing happened to him). If there was an experimental flaw, you should figure out exactly how things went wrong, so that everyone can avoid the same mistakes in the future.
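As an aside, to make "matched right from the point of collection" concrete, here is a minimal hypothetical sketch in Python - the field names and records are invented and don't describe any real repository - of the kind of check a bank could apply: every patient sample must have a control drawn at the same site, on the same date, under the same processing protocol, or the bank fails validation:

from dataclasses import dataclass

# Hypothetical sketch only; field names and records are invented and do not
# describe any actual repository.
@dataclass
class Sample:
    sample_id: str
    group: str        # "patient" or "control"
    site: str
    draw_date: str    # ISO date of the blood draw
    protocol: str     # processing/freezing protocol identifier

def bank_is_matched(bank):
    # The bank passes only if every patient sample has at least one control
    # collected at the same site, on the same date, under the same protocol.
    controls = {(s.site, s.draw_date, s.protocol) for s in bank if s.group == "control"}
    return all((s.site, s.draw_date, s.protocol) in controls
               for s in bank if s.group == "patient")

bank = [
    Sample("P-01", "patient", "SITE-A", "2006-05-02", "freeze-80C-within-6h"),
    Sample("C-01", "control", "SITE-A", "2006-05-02", "freeze-80C-within-6h"),
    Sample("P-02", "patient", "SITE-A", "2006-08-14", "freeze-80C-within-6h"),
]
print(bank_is_matched(bank))   # False: P-02 has no contemporaneous matched control

A check like that, applied at the time of banking rather than years later, is all I mean by "no bank without matched controls".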

So I do think it is extremely important to point out that what you are describing in shorthand as samples being "handled differently" is in fact a difference that is standard in nearly all research of this kind. The bare statement "handled differently" implies bad practice, and that could be an unfortunate spin, because it is not the handling in the experiment that your quotes refer to but the collection of samples, and indeed there's no evidence here that the way those samples were handled during collection was different, other than taking place at different times.

Finally, and crucially, on careful analysis there is still no real answer in the differences you've highlighted which would give a known explanation for the kind of systematic contamination required to explain the Lombardi et al results. The WPI repository itself contained frozen samples, and these samples were frozen prior to arriving at the lab (within 6 hours of blood draw) and the samples were drawn from geographically diverse locations. Provided that the controls and patient samples were thawed at the same time and in the same place, this seems to give no opportunity, under known processes, for systematic contamination. Can those frozen samples be contaminated while frozen? Would this not require that XMRV can survive at -80°C? Are you suggesting that may be the case? But I may well be missing something: could you perhaps explain where the window for systematic contamination lies, based on the evidence you've quoted, and how the evidence you've presented proves it as a "fact" that the patient and control samples were "handled differently"?
 

barbc56

Senior Member
Messages
3,657
Whatever the explanations for the DeFreitas and Mikovits findings, one other point seems most important.

Mark, how can you say that you were only referring to the DeFreitas study when you make a statement like the above, which at the very least is confusing to readers?

I don't think it's accurate to say that this means the patient and control samples were "handled differently".

I guess it's a matter of semantics. To me, the samples were handled differently.


Can those frozen samples be contaminated while frozen?

Whether they can or not is, I would think, beside the point. The contamination would occur once the samples were thawed in a lab that is contaminated.


In the absence of such controls, all such scenarios would be trivial to manipulate, and one would not even have to be a scientist to do so.

This is getting dangerously close to an unlikely conspiracy theory, IMHO

Barb C.:>)
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Mark, how can you say that you were only referring to the DeFreitas study when you make a statement like the above, which at the very least is confusing to readers?
My mention of Mikovits that you quoted was made after you had commented and wrongly responded to my comments as if they were about Mikovits' research, when in fact my earlier comments were about DeFreitas' and Weiss' research and about the lessons we might learn from this history in relation to Mikovits. Your response failed to recognise that I was talking about all 3 (and more such cases too), and you responded referring only to Mikovits; that's what I was pointing out there.

I guess it's a matter of semantics. To me, the samples were handled differently.
OK then, you can put it that way, but in that case, all samples in all experiments are handled differently (since this aspect of the procedure Mikovits followed was standard). In your terms, all experiments are invalid because the samples are always handled differently - is that your view?

Whether they can or not is, I would think, beside the point. The contamination would occur once the samples were thawed in a lab that is contaminated.
Do you know where they were thawed? Do you know whether the patient samples were thawed alongside the control samples, which were also frozen? No. So it is not beside the point: the point is that the fact they came from the WPI repository is not, as claimed, evidence of a systematic difference in handling that could explain systematic contamination.

This is getting dangerously close to an unlikely conspiracy theory, IMHO
Speaking slightly facetiously, what I highlighted is technically not a conspiracy theory because it only requires one person to be involved.

I pointed out that it would be trivial, under the protocols used for this "blind challenge" testing, for one individual, who need not even be a scientist, to manipulate the results and produce a null result. It is entirely possible, and quite easy, for anyone who can gain access to the samples, at any point, to randomly mix up the samples in such a way that the results from patients and controls come out the same. You may think that is dangerously close to a conspiracy theory, but in point of fact what I said is true, and there's no getting away from that, I'm afraid. It's also possible, and fairly trivial, to design the protocol differently so that this cannot happen, which would be a good idea and the only way to eliminate that argument. Think of it as a sceptic's challenge to scientific method, one which has the potential to improve methodology and eliminate wiggle room for conspiracy theories.
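To make that concrete, here is a purely illustrative sketch in Python - the sample sizes and detection rates are invented, and this is not a description of any actual study's data - showing why a single person shuffling the blinding key is enough to turn a genuine patient/control difference into a null result:

import random

# Illustrative only: invented sample sizes and detection rates, not figures
# from any actual study. Suppose a hypothetical assay genuinely detects a
# marker far more often in patients than in controls.
random.seed(1)
patients = [random.random() < 0.60 for _ in range(50)]   # hypothetical 60% positive
controls = [random.random() < 0.05 for _ in range(50)]   # hypothetical 5% positive
samples = [("patient", p) for p in patients] + [("control", c) for c in controls]

def positivity_by_group(labelled):
    # Print the fraction of positives in each group under a given labelling.
    for group in ("patient", "control"):
        results = [positive for label, positive in labelled if label == group]
        print(f"  {group}: {sum(results) / len(results):.2f}")

print("Honest blinding key:")
positivity_by_group(samples)

# One person with access to the coded tubes (or to the key itself) shuffles
# the labels. The measurements are untouched, but the key no longer maps them
# back to the right groups, so the patient/control difference disappears.
labels = [label for label, _ in samples]
results = [positive for _, positive in samples]
random.shuffle(labels)
print("Shuffled (tampered) key:")
positivity_by_group(list(zip(labels, results)))

Under the honest key the two hypothetical groups differ markedly; under the shuffled key both converge on the pooled average - a null result - even though not a single measurement has been altered.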

You may think that it's a safe assumption to completely rule out the possibility that there is ever anyone in the world who would ever intentionally corrupt the work of scientists. When we're talking about a multi-trillion-dollar question, I think it's reasonable to query how safe that assumption is, and many people would agree with me.

Assumptions are the enemy of good science. Assuming that it doesn't matter that samples were collected at slightly different times, so long as they have been frozen in the same way at -80°C within 6 hours of collection in the same area, and are all treated in the same way at the same time when you start to thaw them out - that may indeed be an unsafe assumption, as you and RRM are asserting. If so, then most other scientists doing similar work will have to change their protocols, and as I've said, it's a good idea to do so and eliminate as many assumptions as possible when doing science.

Similarly, scientists used to assume that everyone would simply trust their integrity and that they didn't have to publicly provide the evidence they used to reach their conclusions. But ClimateGate has taught us all that this assumption is no longer safe. If scientific evidence is the only evidence we should rely on to make decisions in this life, and such evidence is being used to run our lives and shape our beliefs, then a sceptical public will demand that scientists show everyone all their evidence - not just their conclusions - before taking their word for anything. Otherwise, we are not a scientific and sceptical public but a religious congregation under a scientific priesthood. And if the scientists have made assumptions which may be unsafe, the public will notice and highlight them, and won't believe what the scientists say unless they prove it, with evidence and with data. If scientists can't prove that a blind test can't possibly have been manipulated by anyone, then room for reasonable doubt will unfortunately remain, because different people are sceptical about different things, and they all require proof in order to be satisfied.
 

barbc56

Senior Member
Messages
3,657
My mention of Mikovits that you quoted was made after you had commented and wrongly responded to my comments as if they were about Mikovits' research, when in fact my earlier comments were about DeFreitas' and Weiss' research and about the lessons we might learn from this history in relation to Mikovits. Your response failed to recognise that I was talking about all 3 (and more such cases too), and you responded referring only to Mikovits; that's what I was pointing out there.

You absolutely misinterpreted what I wrote.


I think the following quote applies to our posts to one another.

"I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard, is not what I meant."
- Robert McCloskey

Just trying to lighten things up a bit.;)

Barb C. :>)
 

RRM

Messages
94
Mark said:
If so, then most other scientists doing similar work will have to change their protocols, and as I've said, it's a good idea to do so and eliminate as many assumptions as possible when doing science
The point is that they have. For the BWG and Lipkin studies, a stricter collection protocol has been used than for the Lombardi and Lo studies, exactly because it was thought that different collection times, methods and/or protocols may be (partially) responsible for the discrepant results.

To reiterate what Coffin and Cingoz concluded about this:

Coffin/Cingoz said:
extreme measures are required to avoid false associations of mouse viruses with disease, including [...] use of controls that are exactly contemporaneous to the cases, and obtained by precisely the same methods using the same materials and reagents. As a few recent papers indicate [30, 32, 33], these conditions are not easy to achieve, but only laboratories that do so can make credible claims to the discovery of new human infections.

It's also worth pointing out that scientific methodology should be put into its proper context. In some cases, measuring things to one billionth of a meter can be absolutely necessary, and in other cases it would be a waste of money and resources. The same applies here. In cases where people are claiming to have found "stuff" at (about) the limit of detection and this stuff is also identical or almost identical to known lab contaminants, you will need this extreme measure. In other cases, it might not be necessary.


As far as I know, nobody (including Singh) has criticized Mikovits for the collection protocol in her study. Science is not about scientists but about science. The growing body of knowledge has put the earlier collection protocols into question. That's really all there is to it. In my view, nobody has used this as some kind of excuse because Mikovits is some kind of maverick, but it's really the other way around: anyone whose data gets rightfully criticized (or his/her supporters) can claim that some kind of persecution is going on against him or her, to deflect the discussion.

You may think that it's a safe assumption to completely rule out the possibility that there is ever anyone in the world who would ever intentionally corrupt the work of scientists. When we're talking about a multi-trillion-dollar question, I think it's reasonable to query how safe that assumption is, and many people would agree with me.

Anything is always possible. In this case, it is a) extremely unlikely and b) easily checked (by Mikovits/Lo, if someone alerted them). The ironic thing is that, because these scientists will not in a thousand years check up on this possibility (because they know it would be a waste of resources), other people can keep the story alive.

But ClimateGate has taught us all that this assumption is no longer safe.

Not to start a new discussion, but I find this example rather poorly chosen. Climategate has been investigated by eight independent committees and none of them found any evidence of scientific misconduct. Perhaps you find this a classic case of a cover-up or something like that, but that is what would make the example a circular argument.
 

floydguy

Senior Member
Messages
650
Science is not about scientists but about science.

Science is conducted by human beings who have their own experiences, biases, ambitions, etc. Moreover, "Science" is most often funded by non-scientists who have their own biases, goals and ambitions. Before one even starts an experiment there has been a tremendous amount of "non-science" that has already occurred.

Perhaps in the days of garage labs one could conduct some real science. But with today's extremely expensive equipment, labs, regulations, etc., I don't see how "Scientists" can avoid bias towards the results that the sponsor is looking for. And for the things that sponsors don't want found, they simply won't fund the research. How is that "Science"?
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
The point is that they have. For the BWG and Lipkin studies, a stricter collection protocol has been used than for the Lombardi and Lo studies, exactly because it was thought that different collection times, methods and/or protocols may be (partially) responsible for the discrepant results.
That's right, and it's a positive development; I didn't mean to imply that these lessons are not being learned and the methodology refined.


It's also worth pointing out that scientific methodology should be put into its proper context. In some cases, measuring things to one billionth of a meter can be absolutely necessary, and in other cases it would be a waste of money and resources. The same applies here. In cases where people are claiming to have found "stuff" at (about) the limit of detection and this stuff is also identical or almost identical to known lab contaminants, you will need this extreme measure. In other cases, it might not be necessary.

Fair point. But I still think that, in the case of comparisons with matched controls, it would be a wise precaution, and shouldn't be an onerous one, to collect matched controls at the same time as any banked samples are collected. That would eliminate a wide range of possible errors at a stroke.

As far as I know, nobody (including Singh) has criticized Mikovits for the collection protocol in her study. Science is not about scientists but about science. The growing body of knowledge has put the earlier collection protocols into question. That's really all there is to it. In my view, nobody has used this as some kind of excuse because Mikovits is some kind of maverick, but it's really the other way around: anyone whose data gets rightfully criticized (or his/her supporters) can claim that some kind of persecution is going on against him or her, to deflect the discussion.
It has not come across that way to me at all; we clearly see this aspect from opposite perspectives. I've seen posts on the supposed sceptic forum Bad Science where aspersions were cast, almost from day one, and Dr Mikovits was described using four-letter words. When points like this one about "handled differently" are made in raw form, they are very open to misinterpretation. I've seen a lot of comments made, throughout, that seemed to imply that the details being referred to suggested bad practice on Mikovits' part. From what I've seen, behaviour like that is what's prompted the claims of persecution. I think it's important when making the point about, for example, different collection of samples, to emphasise that this was not an unusual practice. If the people making that point don't put it in that proper context, it's fair for Mikovits' 'supporters' to defend her by pointing that out. I don't think that's a deflection from the discussion; it's typically been a reaction to misleading comments, at least so far as I have seen.


Anything is always possible. In this case, it is a) extremely unlikely and b) easily checked (by Mikovits/Lo, if someone alerted them). The ironic thing is that, because these scientists will not in a thousand years check up on this possibility (because they know it would be a waste of resources), other people can keep the story alive.
I have no real way to assess the likelihood that anybody, anywhere in the world, may have an interest in suppressing something relating to this research, and I don't see any basis for ruling out the possibility. I'm quite serious about suggesting the kind of secure protocol that others have suggested in these circumstances. There are ways to code the samples using methods analogous to private/public keys, escrow, etc., which can rule out such suspicions. The general scenario we are discussing is one where an independently blinded challenge is issued. By definition, such measures question the hidden bias, if not necessarily the integrity, of the original researcher, so by definition trust is an issue here. If trust is an issue in one direction, it inevitably and reasonably arises in the other direction as well. Therefore, methods to guarantee the trustworthiness and integrity of the blinding process are entirely appropriate in these circumstances.
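To illustrate what I mean by coding "analogous to private/public keys, escrow, etc.", here is a minimal, purely hypothetical sketch in Python - the tube IDs and groupings are invented, and no study is claimed to have used this scheme - of one such ingredient: hash commitments to the blinding key, published before testing and verifiable by anyone once the key is released:

import hashlib
import secrets

# Hypothetical sketch of the "escrow the blinding key" idea using hash
# commitments; tube IDs and groups are invented, and no real study is
# claimed to have used this exact scheme.

def commit(coded_id, true_group):
    # Return (commitment, nonce). The commitment can be published before any
    # testing starts; the nonce stays in escrow until the results are reported.
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{coded_id}|{true_group}|{nonce}".encode()).hexdigest()
    return digest, nonce

def verify(coded_id, claimed_group, nonce, commitment):
    # Anyone can later check that the unblinded key matches what was committed.
    digest = hashlib.sha256(f"{coded_id}|{claimed_group}|{nonce}".encode()).hexdigest()
    return digest == commitment

# Before testing: the coding party publishes a commitment for every coded tube.
key = {"TUBE-001": "patient", "TUBE-002": "control"}   # hypothetical blinding key
published, escrow = {}, {}
for coded_id, group in key.items():
    commitment, nonce = commit(coded_id, group)
    published[coded_id] = commitment    # made public in advance
    escrow[coded_id] = (group, nonce)   # released only after results are in

# After the results are reported: the key and nonces are released and checked.
for coded_id, (group, nonce) in escrow.items():
    assert verify(coded_id, group, nonce, published[coded_id])
print("Unblinded key matches the commitments published before testing.")

Of course, that only guarantees the written key can't be quietly changed after the results are known; it does nothing about physical swapping of tubes, which still needs chain-of-custody measures, so it's one ingredient of such a protocol rather than the whole of it.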


Not to start a new discussion, but I find this example rather poorly chosen. Climategate has been investigated by eight independent committees and none of them found any evidence of scientific misconduct. Perhaps you find this a classic case of a cover-up or something like that, but that is what would make the example a circular argument.
You have misunderstood my beliefs on this completely, and on this matter, my professional interests make me confident that my example was not poorly chosen. I am currently working on an Open Data project, and have seen presentations at a national level that cite Climategate as a turning point in the realisation of the importance of open data. The point made there, with which I entirely agree, is not that Climategate was a case of scientific misconduct - I don't believe that it was. The point is that it turned out to be impossible for those involved to prove to the public that it was not, because of the data access issues. The pressure for access to the data, and the level of mistrust, was such that servers were hacked in order to obtain data. The lesson is that such fears can be allayed by opening access to data in the first place. Information wants to be free.

Now: The issue of open data in general is not really the same issue as the question of guaranteeing trust in independently blinded research which we were discussing above, but the underlying point is the same: Research needs to be completely transparent and provide the evidence necessary to disprove public concerns, if the scientists involved are to expect that the public should trust their research. Automatic trust is an outdated assumption. This applies both to government and to science. The public demands the evidence. Scientists should understand this better than anyone, because their instinct and training is not to take things on trust, but to test them and to demand evidence. The modern evolution of that principle is to understand that those same principles apply to public trust in the results that scientists report - and if, as with this example, the scepticism of some sectors of the public about certain assumptions exceeds that of the scientists involved, that seems to me a positive thing, and an opportunity for the scientific method to be strengthened by the input from a wider range of people than has traditionally been the case.
 

barbc56

Senior Member
Messages
3,657
It's also worth pointing out that scientific methodology should be put into its proper context. In some cases, measuring things to one billionth of a meter can be absolutely necessary, and in other cases it would be a waste of money and resources. The same applies here. In cases where people are claiming to have found "stuff" at (about) the limit of detection and this stuff is also identical or almost identical to known lab contaminants, you will need this extreme measure. In other cases, it might not be necessary.
RRM, it sounds like this means the probability of finding a retrovirus/virus with the technology we have today is so low that it would be a wasted effort to continue, and that what scientists are discovering is background "noise" and not these viruses?

This is what I have gathered from what you have written above, plus other sources, but I want to make sure that is what you are saying.

Thanks
Barb C. :>)
 

jace

Off the fence
Messages
856
Location
England
I'm not sure why it's only in humans that gammaretroviruses are so hard to find - they've been found in mice and other animals for decades now, viz. Jolicoeur onwards. Perhaps the ability to easily harvest other tissues from lab animals makes the difference, though I fail to see the problem with mucosal samples from the respiratory system or the gut.