Here's what I see so far as potential problems with this study:
- They didn't use the same methods as the original study. Both used nested PCR, but PCR is a tool, not a test. The tests themselves -- the primers used, and so on -- are different. That may or may not make a difference in accuracy, but we can't assume that it didn't, and neither should they. It also sounds like they validated their test against only a single positive control specimen, which is problematic too.
- They didn't use the same cohort. Now, I think it's going to be very, very important as time goes on to test very different cohorts, because the cohort in the original study was very specific and very severely ill; those results may not hold up across a wider spectrum of CFS. That's not because there are "real" CFS patients and "fake" CFS patients, but because there could, for example, be multiple underlying causes producing similar sets of symptoms. Regardless, we're not at a point yet where it seems appropriate to use a totally different cohort. Scientists need to replicate first, to confirm or fail to confirm the original finding, and then start fiddling with the variables. This study fiddled with the variables first.
- They didn't have healthy controls; they ran water instead as a negative control. Had they come up with positives, they would have had no way to establish the statistical significance of XMRV in the CFS patients unless they then added in some healthy controls (there's a rough sketch of this point right after this list). That they didn't bother to find healthy controls is a bit strange to me, honestly.
- <I>PLoS ONE</I> is not a podunk publication by any means, but it ain't <I>Science</I>. It's relatively young, it's still viewed with a variable level of trust depending on who you're talking to, and its peer review process is different from that of most of the long-respected journals. Does that mean this study is lame? No, not at all. But it is worth noting that this paper went through a different kind of peer review than the original study did.
- The science aside -- and the science is the important point -- I'm having a really hard time wanting to defend these folks, not because of who's involved or anything like that, but because the arrogance in some of their quotes is frankly pretty outlandish. Scientists get nasty and swipe at each other all the time, but they usually frame it in much more careful, veiled language than some of this. That doesn't necessarily cast doubt on the science for me, but it should serve as a reminder that there is, in fact, a hugely political element to a lot of research, and especially research on this topic.
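To make the statistics point in my negative-control bullet concrete, here's a rough sketch using numbers roughly like the ones the original Science paper reported -- treat them as illustrative, not as a reanalysis. Without a control row, there is no contingency table and no p-value; a positive rate in patients means nothing on its own:

```python
# Illustrative sketch only; numbers approximate the original Science paper
# (68/101 patients positive vs 8/218 healthy controls), not the IC study.
from scipy.stats import fisher_exact

patients_pos, patients_neg = 68, 101 - 68
controls_pos, controls_neg = 8, 218 - 8

# 2x2 contingency table: rows = patients/controls, cols = positive/negative
table = [[patients_pos, patients_neg],
         [controls_pos, controls_neg]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")

# Drop the controls row and there is nothing to test against: you cannot
# compute significance for a single group's positive rate in isolation.
```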
Just my takes. Others can add whatever they might see, and if anybody sees issues with my issues, they should also chime in.
I don't really want to get in the middle of this venting thread, but I have to comment -- not because I want to defend the IC study, but to clear up some misconceptions about this type of research that are running rampant in this thread.
1. Using a different cohort is a validation tactic, and very useful for clarifying who does and who does not have a given illness. All I really care about -- all any of us should care about -- is whether WE are included in the test cohort definition. There may be many definitions of ME/CFS, but in my experience nearly all of us have the same basic illness; we are just at different stages. If WPI says XMRV is found in 98% of PWC, almost any definition should turn up something. And if they found XMRV in 20-year-old samples, so should other labs. So the cohort is almost a non-issue in this case (see the back-of-the-envelope sketch after point 7). Cohort would be a major issue if we were dealing with a small subset of PWC, certainly, but the claim being tested here is that XMRV is a superset issue for CFS.
2. There is a difference between pure replication and validation. Regardless of what people call the IC study, it was clearly an attempt to validate the general thesis of the WPI study, not a pure replication. They even took some extra steps to avoid problems they thought might have been present in the WPI study. Not many labs like pure replication studies; they want to 'do their own thing' and use their own tests, cohorts, and so on. When someone makes a bold claim, such as a retrovirus causing CFS, we need to see many different types of validation and replication attempts, coming from all angles. That is how we determine what is really happening. This IC study is useful, even if it is not a true replication effort.
3. Using different methods, different primers, different types of PCR tests -- that is all GOOD in attempts to validate a new finding. We want to see many different test designs all find the same thing. This is not rocket science: it is a virus supposedly ubiquitous in the ME/CFS population, and it should be relatively easy to find in many different ways. Based on the tolerances published in the Science article, other labs should find it easily; one lab I know was able to increase sensitivity at least 10x over the WPI study, and IC could certainly have done that as well. Most important, in fact, is to target all areas of the genome: we should see tests for gag, env and pol sequences (the sketch after point 7 touches on this too). You have to find the WHOLE bug and not just one part of it. If WPI only found part of a bug in their PCR test, it might not actually be XMRV.
4. No control was really necessary once they found nothing; most labs would stop worrying about controls once they saw the pattern of zero findings. Unless you suspect a reverse finding -- that controls will have the bug and PWC will not -- you can forget about controls.
5. If a test hits on a known positive control, it works. That pretty much ends the debate about reagents, which probes were used, and so on.
6. Some people seem to think that 'peer review' is a big deal. In reality, peer review is simply an informed review and edit of an article by an expert. The peers do not go to your lab or in any way verify that you did the experiment properly; all they have is your words on the page. So there is no 'gold standard' of peer review, and the reputation of the journal has little to do with whether a given study can be verified or not. In particular, journals like 'Science' and 'Nature' have their reputation in part because they are the top forums for DEBATE about difficult topics. Outside scientists can submit comments and rebuttals, and are probably encouraged to submit articles with conflicting viewpoints. Then the original authors answer, although they can also sit on the questions for a time (which I wonder about in this case, since we have not seen any rebuttals yet and should have by now). I have been a peer reviewer, and really, it is not very glamorous: glorified editing, simple fact checking, making a few recommendations. Something you have to find time for, because it is not paid. Researchers put their reputations on the line no matter how or where they publish their results, so I think we owe the same respect to the many authors of the IC study as to those of the WPI study, and should debate their data on their own merits rather than by disrespecting the people involved.
7. Some people seem to have forgotten the many cautionary statements made by WPI themselves: that XMRV could be a passenger virus, that this research is at an early stage, and so on.
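To put rough numbers behind points 1 and 3 (every figure below is my own assumption, not from either paper): if the virus is anywhere near as widespread as WPI claims and the assay has even mediocre sensitivity, the chance of an entire cohort coming back negative is vanishingly small, and testing several genome regions raises the effective hit rate, assuming roughly independent assays:

```python
# Back-of-the-envelope sketch; every number is an assumption for illustration.

def p_zero_hits(prevalence, sensitivity, n_samples):
    """Probability that all n samples test negative, assuming independence."""
    p_hit = prevalence * sensitivity  # chance a given sample tests positive
    return (1 - p_hit) ** n_samples

# Assume the virus is in only 50% of patients (far below WPI's claim) and
# the assay catches it just 30% of the time it is present. Across roughly
# the IC study's cohort size (~186 samples), zero positives is implausible:
print(p_zero_hits(0.50, 0.30, 186))  # ~ 7e-14

# Point 3: assays against three regions (gag, env, pol), each with an
# assumed 30% sensitivity and treated as independent, combine to:
per_region = 0.30
combined = 1 - (1 - per_region) ** 3
print(combined)  # ~ 0.66
```

The exact numbers don't matter; the point is that under a superset claim, a wall of zeros means either the claim is wrong or the assay is not detecting what WPI's assay detected.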
I think your list is right, spit, but I disagree with this. Upon publishing a paper reviewed only by an editor, and in just 72 hours, PLoS became little better than a blog. They and the paper's authors would get laughed out of the room in my profession.
8. Many respectable academic journals are only editorially reviewed, not peer reviewed. Peer review is slow, and sometimes inhibits the sharing of new ideas that the gatekeeper reviewers do not agree with. Perhaps such journals should be called magazines, though; I agree they have qualitative differences from peer-reviewed journals. But they can certainly be much better than blogs.
I think only <I>PLoS ONE</I> is relatively unreviewed. Not PLoS in general.
I don't care whether it was reviewed, really; I can review it myself. I don't know quite how review works, though -- I think sometimes more material is submitted than shows up in the paper. If such material exists, I wish I could see it all.
9. In my experience peer reviews are mostly editorial. Some peer reviewers have a hobby horse and want you to include extra references or address some pet issue of theirs, which gets annoying. Others want you to cut out vital parts to tone things down. Peer reviewers do improve the article text, but they cannot really pass judgment on whether a study was valid; as I mentioned above, they do not go to your lab and see if you did things properly. Peer review is useful, but also oversold. So I agree with Eric.