mark said:
this is an interpretation that doesn't imply any confirmation bias or dubious practices on the part of the researchers; this seems to me the most likely interpretation.
mark said:
It's an interpretation that is consistent with all of the information presented in both the paper and the slides...
It would be nice if this was true, but it isn't.
I really can't see your logic here; I'll need to see the workings of your argument. I said that Bob's and my interpretation is consistent with all the information in the paper and slides you presented. If you assert that isn't true, you'll need to present quotes from the paper and slides that are inconsistent with our interpretation of what may have happened. I've read both in full, with particular reference to the sections you quoted, and I don't see anything that contradicts what we've set out.
First of all, you are misreading the "cohort of 300" bit in the presentation. The entire Lake Tahoe cohort is about 300 (it's actually "just" 259), not the cohort that they had in their blood repository. These people were diagnosed from 1984 to 1987. Some people may have died or may have moved. Not all people want to participate. For instance, in a 2001 follow-up study by Peterson, 123 of the original cohort of 259 returned questionnaires (source: http://www.cfids-cab.org/cfs-inform/Prognosis/strickland.etal01.txt).
That's not evidence from the paper or slides, so it doesn't have any bearing on whether my statement was false as you claim. But in any case, the information you add about the size of the available cohort does not mean that the cohort they originally tested for levels of cytokines and chemokines cannot have had more than 118 people in it, which is your claim. So it's less than 300; it's 259 or fewer - so what? Did they need permission or questionnaires to run tests on the 259 blood samples they had? I haven't seen evidence that's the case. Even if they studied 123 and discarded 5 who weren't XMRV+, as Bob and I described, that would be a 96% positive rate and not the 118/118 you are asserting.
Second, although you also mention this, Lombardi states that "they all corresponded to XMRV" and I think it is very clear what he means here.
Again, this is a separate source from the paper and slides, so it does not bear on my statement which you said was false. This evidence you've introduced is a comment by Lombardi in a YouTube interview for an audience of patients, and your whole case appears to hinge on assuming that when he said "they all corresponded to XMRV" he was definitely, and correctly, asserting that 100% of the samples tested positive for XMRV. I really don't think you can assume that. If 95% of them had, might he not still say, casually in an interview, "they all corresponded to XMRV"? Note "corresponded to": he doesn't even say "they all tested positive for XMRV". This is the only piece of evidence you have produced in support of your claim that they tested 118/118 of those samples as positive, and I don't think it's strong enough to be certain that this is the case. Surely that's why scientists produce papers and scrutinise and peer-review them: to make sure that everything they said is strictly accurate? I don't see how you can rely on that "all" meaning "100% of the Lake Tahoe samples we tested".
Third, there is just no way that they completely did the XMRV testing and statistical analysis in the same time-window as they were preparing the Science study. To quote Judy Mikovits (from her interview in Nature), after the XMRV discovery "we really retooled our entire programme and did nothing but focus on that."
I really don't accept this argument either. I've set out above how it's perfectly conceivable that they had the time. Firstly, in the scenario Bob and I sketched, they had all the data collected, the analysis had been run and the paper was basically ready to go, and then they found XMRV. They tested the samples from that study for XMRV as part of their XMRV work (this testing was unpublished and presumably somewhat exploratory; perhaps this was how they refined their testing in preparation for the full study). They then decided to take the samples that tested positive and re-run the analysis on those only. As Silverman mentioned, this was now absorbed into their XMRV work, so it counts as part of focusing on XMRV. As I mentioned above, there were several names on this paper, and all that was needed was for one or two of those people to re-run the statistical analysis and use the revised numbers in place of the first set obtained on the entire cohort. Since the analysis software was presumably ready to go, this may have been a matter of editing a file to remove x samples (where x is somewhere between about 5 and 100) and hitting 'run'. It could be done in a day, with no lab testing, by one of the collaborators; all they needed was the list of sample numbers that tested positive. And finally, what this resulted in was the slides, not the paper, which wasn't published until 2011. It's perfectly conceivable that someone in the team had ample time to do this.
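(Just to illustrate how small a job that re-run would be, here is a rough sketch of the kind of script a collaborator might have used. Everything in it is hypothetical - the file names, column names and the choice of a Mann-Whitney test are my assumptions, not anything taken from the paper or slides - but it shows that restricting an already-written analysis to a list of positive samples is a few lines of editing and a re-run, not new lab work.)

```python
# Hypothetical sketch only: re-running an existing cytokine analysis on the
# XMRV-positive subset of a cohort. File names, column names and the test
# used here are illustrative assumptions, not details from Lombardi et al.
import pandas as pd
from scipy.stats import mannwhitneyu

# Full-cohort results that were already collected and analysed.
data = pd.read_csv("cytokine_levels.csv")          # columns: sample_id, group, IL6, TNFa, ...
positives = pd.read_csv("xmrv_positive_ids.csv")   # list of samples that tested positive

# The "edit a file and hit run" step: drop every sample not on the positive list.
subset = data[data["sample_id"].isin(positives["sample_id"])]

# Re-run the same patients-vs-controls comparison on the reduced cohort.
for cytokine in ["IL6", "TNFa"]:
    patients = subset.loc[subset["group"] == "CFS", cytokine]
    controls = subset.loc[subset["group"] == "control", cytokine]
    stat, p = mannwhitneyu(patients, controls)
    print(f"{cytokine}: U={stat:.1f}, p={p:.4f}")
```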
The only part of this work that you might argue was a challenge to fit into the time frame is testing the 118 samples for XMRV while they were also running the testing for the Lombardi 2009 paper. I'm not convinced of that, but of course if it were true that they didn't have time, it would nullify your claim that they got 118/118 on these samples and that this was evidence of "confirmation bias". That can't be the case if they didn't even test them.
Finally, I want to point out that even in your scenario there are some "dubious practices" going on. After all, the authors would then still have omitted from the paper that their "XMRV cytokine cohort" came from a single outbreak, making their results appear more generalizable than would be warranted by knowing all the underlying data.
On the first point: yes, the paper fails to go into detail about precisely where the cohort came from, except that it says "from the WPI's sample repository", which I think was known to be comprised of samples from Peterson's Lake Tahoe cohort. And yes, it would be ideal to go into just a little more detail and say "from the Lake Tahoe outbreak".
But then, just think about these cohort arguments in the context of the history of CFS definitions, and your criticism that this "makes them appear more generalisable than would be warranted" is seen to be completely upside-down. To suggest that it is a "dubious practice" simply to call this 'a CFS cohort' is to turn the world on its head. Just remember how and when the term "CFS" was invented: it was invented specifically as a supposed description and definition of this very disease, when it broke out in Lake Tahoe. So if you are now criticising them by saying "they are calling this CFS, but is it generalisable to all CFS? Is that being misleading?" - that's quite ridiculous when you think about it. What cohort could possibly have a better claim to be "CFS" than a cohort from the outbreak from which the name was coined? This is true "CFS".
And so if results from this cohort are not generalisable to "CFS" in general, and your concern is valid, what then does that mean? Surely it means what we all know to be true: that the invented name and definition "CFS", created to label this outbreak disease, does not describe it accurately, and instead defines a mixed cohort so much wider that it is, by definition, impossible to obtain useful scientific results from studying it! So: are you questioning them for studying the actual disease, which was labelled as "CFS", and simply calling it "CFS"? When you talk of "dubious practices", surely you should be talking mostly about the creation of the misleading and obfuscating label "CFS", which effectively rendered study of this particular disease impossible for 25 years?
The same situation applies to a criticism of Lombardi et al that came from voices such as the UK psychiatrists. They said (paraphrasing): "one problem with this paper is it uses a definition of the disease, a set of selection criteria, which are very strange, and not clear, and not internationally recognised". What anybody who knows anything about ME understands full well when they say this is that they are criticising them for studying the actual disease, using the best and most appropriate criteria possible! Lombardi et al selected people who fulfil the strictest consensus definition available - the Canadian Consensus Criteria - who also fulfil the Fukuda Criteria, and who present with profound disability... and Wessely, White and crew criticised them for not using their own made-up Oxford Criteria, which requires only prolonged unexplained fatigue! If you want to talk about "dubious practices" for not mentioning in this paper that the cohort came from Lake Tahoe, then come on: "dubious practices" does not go anywhere near far enough to describe the arguments and research practices of those who use the Oxford Criteria to study "CFS/ME". I'm not going to mince words on this: arguments like their questioning of the Lombardi et al cohort selection criteria are the arguments of scoundrels.
(and, on a side note, it lends credibility to some of the earlier criticism of the Science paper by the Dutch group, who said that Mikovits had stated in an earlier presentation that the XMRV findings were also from the single, Lake Tahoe cohort).
What?! How? The Science paper said the samples it used were collected from about 10 (?) clinics across the US in areas where outbreaks had occurred. The cytokine study was, apparently (based only on slide 3 of the presentation), a study of samples from Lake Tahoe only, and apparently those were also tested for XMRV. Mikovits is said to have presented, in an earlier presentation, the XMRV findings from the Lake Tahoe cohort. The Science paper described a different cohort. Clearly these were different cohorts.
Clearly there's no inconsistency in testing two sets of samples, giving a presentation on one and publishing a paper about the other. Unless you have further evidence you're not revealing, the argument you've presented just suggests that the Dutch group are in the habit of adding two and two to make five. They seem to have leapt to the incorrect conclusion that these two separate cohorts must both relate to the same single study, an assumption which makes a nonsense of the stated facts, and therefore they conclude that the WPI had described their work inaccurately. Since they seem to have repeated the all too familiar pattern of making a series of unjustified and incorrect assumptions about the presented evidence, highlighting inconsistencies between their own incorrect assumptions and the stated facts, and then concluding that the people who presented the evidence must be in error, perhaps they are the ones with the confirmation bias?