Scandal in BMJ's XMRV/CFS Research

G

Gerwyn

Guest
While everybody's rejoicing, can I point out some things (and then run for cover :))



'Well characterized' does not mean that the patients were characterized in the same way as the Science study. It simply means that they knew who those patients were. With regard to the characterization of the Science study patients - the WPI initially said that all the patients had those characteristics, then said they didn't. They went from highly disabled, immune-deficient, repeat-exercise-blasted patients to 'typical CFS patients'.

First they simply stated that they had cultured the samples to find the virus; then they stated that they actually had to look at samples taken at different times from the patients. So much for well characterized patients in the Science study.

Here's where I run for cover....::)

I thought the rest of Part I was very well done! The BMJ clearly exposed their zeal at negating XMRV in their editorial. They almost said 'we couldn't help ourselves'. Nicely written, Parvo.
No, Cort - this study claimed that the patients were a well characterised CFS cohort. They were clearly not; that is the point.

The term 'well characterised cohort' has a very specific meaning.

The authors had absolutely no idea who they were.

The WPI patients are clearly well characterised according to the CCC and Fukuda criteria.

You are confusing terminology. The CFS cohort in the Dutch study were not true CFS patients at all.

A minimum of 36% of the patients had clinical depression. It is entirely possible that the figure was as high as 68%.
The key point is that the researchers who originally recruited the patients themselves stated that the patients' symptoms were caused by psychological factors, either partially or entirely - and thus they were not CFS patients. The fatigue was of psychological origin. If you doubt that, I suggest you look at the data tables presented in Parvo's excellent critique.

To put it bluntly, if the authors of this study reported that it was a well characterised cohort, they either don't understand the disease area at all or were being disingenuous.
 

Cort

Phoenix Rising Founder
Messages
7,361
Likes
2,059
Location
Arizona in winter & W. North America otherwise
It was a little disturbing that a group that otherwise so perfectly matched the Science cohort, with reproducible immune abnormalities, would have only 26% of patients with recurrent infections. And maybe more so that patients didn't mention or even describe post-exertional malaise, the pathognomonic sign for Canadian-Criteria ME/CFS. Then again, the Vercoulen et al work predated the Canadian Criteria. Ah, but that's just one of the pitfalls when you're living in the fast lane with BMJ. Be reasonable - world-class scientists researching the next AIDS can't be expected to wait to use the Canadian Criteria when they're in a rush - and have 16-year-old Oxford Criteria blood handy!
For me the Symptom Section is the most disturbing part of the document thus far. Some of the cardinal signs of ME/CFS are not that prominent. While myalgia is high, only 50% had concentration problems and only 43% had sleep disturbances - that is a glaring difference! Only 13% had sore throat. Sore throat isn't as common as was first believed (although I had one for about 10 years), but still - only 13%.

This really seems to me to be a very good indication that these were not your typical CFS patients and that this cohort was not representative of us. In a sense this group was pretty well characterized but they don't appear to be CFS patients (!). Good stuff Parvo!

It was a little disturbing that a group that otherwise so perfectly matched the Science cohort with reproducible immune abnormalities
No one said this. How could they? RNase L wasn't even around then. Nor were repeat exercise tests or functional NK cell tests. It was described as a well characterized cohort of CFS patients, not WPI patients. It appears they weren't very 'CFS'ey' at all.

It was wonderful to see the patients given so much latitude to define their own symptoms! Far simpler too, for them to do this as a take-home questionnaire, and to self-report their medical status, than to submit to all those pesky “longitudinal measurements of clinical and laboratory abnormalities” that those Yanks used. (http://www.sciencemag.org/cgi/content/full/1179052/DC1 ).
This is kind of an unfair criticism. No one group - for or against XMRV - is going to try to redo the entire WPI Science article. They'll take it piece by piece - concentrating on the most important parts first, i.e. first do the PCR test to see if the virus is there or not.
 

Cort

I wasn't referring to the Dutch study, Gerwyn. I was referring to Parvo's statement that the Science patients were 'well characterized'. You'll see that I basically agree with you that this was not a group of CFS patients as we know them.

And together, they still only (weakly) explain 28% of the variance. Which rather nicely summarizes the perspectives of many CFS researchers 16 years ago – and even today. They don't know what they're studying. Yet this reinforces the British and indeed Dutch paradigm to deny ME/CFS patients physical testing and diagnostics. Why bother when it’s all psychosomatic?!
I agree that this is a real problem. 28% should not be a convincing number; it should be a warning sign to look elsewhere!
There was no reason for me to doubt the medical establishment – or even the Chronic Fatigue Immune Dysfunction Association of America – who had encouraged patients and the international media to relax about pesky cohort issues.
I think you're right - I think there is a cohort issue here.

But this

“Minimalized the risk of including patients with
delayed convalescence of a viral infection”.
So they had to have their symptoms for a year? Is that what they're saying? So they didn't have 6-month to 1-year CFS patients? What's the big deal? Do I have this wrong?

36% of patients who on one score may have clinical depression? This does seem high, but then again it's probably only a matter of degree. Depression is increased in every major chronic illness. What is typical of CFS? 25%? 30%? - I don't know. So it seems a bit high to me, but then again depression is going to be in there and it's not excluded by any definition. It's the symptom profile that is really disturbing to me.

Cohort, Cohort, Cohort - You started out with a question about the cohort - that is the key question here. Was that group representative? It sure doesn't look like it - which is an important finding, I think! If you don't mind I'm going to post it in the XMRV Buzz page I put out. :)
 

Cort

Whether you agree with Dr. Shepard or Dr. Vernon or Dr. Gow or Kim McCleary or even Dr. Mikovits - they are all on our side. They are all doing what they can to increase research funding and treatment for this disease. They are not the bureaucrats in the NIH or CDC who are cutting off funding for this disease every year. They're not the bureaucrats in the UK who insist that all medical research funding go to behavioral studies. They are not the doctors who slam the door in your face. They are not the researchers or public officials smirking about CFS patients behind their backs.

Dr. Vernon was not alone in her concern about that cohort. Dr. Bateman expressed concern. The BMJ authors stated that they had asked for further information on the group and didn't get any. It's a legitimate concern. One of Dr. Klimas's patients just noted her worry about XMRV and VIP Dx's 'intransparency'. That's from the most fervent supporter of XMRV in the beginning. Attacking someone for being the bearer of bad tidings is poor stuff in my opinion.

Why is the CAA not commenting on any more negative studies? It's because of comments like the one I mentioned. Why has MERUK not said anything for 3 months? Why has the IACFS/ME not said anything for three months either? I warrant it's because they know that if they say anything negative they'll get hit with stuff like that. That's what the CAA has gotten any time they've posed questions about that finding. At least the CAA had the guts to present their take on the negative studies.

I want all informed opinions. In this atmosphere we surely won't get them.
 

parvofighter

Senior Member
Messages
440
Likes
129
Location
Canada
A brief visit "in" - then "out" for a bit more

Hi Julius,
My question is about this point: the attempt to 'Minimalize the risk of including patients with delayed convalescence of a viral infection'. To do this they required that the symptoms be present for at least a year. So wouldn't that be good, in that it would allow for chronic viral infection (herpes, EBV, XMRV) but rule out a transient viral infection (flu, cold)? Like I said, I'm sure I'm just fogged out on this.
Excellent question, and I agree - this issue needs to be nailed down... whether that can happen 16 years after the fact, if background documentation has long since been shredded (those old microfiches melted, eh?), and with this cohort already positioned as "legit", is another question. Should we just take these guys' word for it... "what they meant"? Or maybe hold their feet to the fire. Cort - all I'm asking is that ALL XMRV research be subjected to the same level of "sleuthing" and scrutiny as the WPI. The imbalance in my eyes IS unprofessional, unscientific, and - as you have seen here - profoundly disturbing. But it can be redressed.

Whether the Dutch folk have robust and convincing material to support whatever they say about intentionally and openly trying to "minimalize the risk of including patients with delayed convalescence of a viral infection" is something else. After all, these guys - intentionally or not - tried to pawn off on us a severely flawed cohort. That 3 scientists cross-pollinated across the source study and the BMJ study (one would hope that they "checked" their cohort's integrity before jumping into the race) leads one to wonder whether this is systemic incompetence, wilful deceit, or just blithe indifference to "playing by the rules" - given that, after all, they are dealing with a bunch of whinging gits. This "hiccup" already crosses the UK and the Netherlands. It does tend to cast a bit of an ethical pall on the rest of the XMRV cohorts. If this can be such a debacle, what ELSE aren't they being straight - or competent - about? And that the BMJ could be so totally blindsided doesn't speak much for their ability to discriminate good XMRV research from - well, crap. Remember,

We and our reviewers also thought it was well done.

Maybe they need to raise that ethical and research quality bar a tad. What has swayed me toward the likelihood that there are VERY few (if any) patients with viral etiology in their cohort is the fact that they said,
"Minimalize the risk of including patients with delayed
convalescence of a viral infection",

NOT
"Minimalize the risk of including patients with
convalescence of a viral infection".

And it raises the question of just WHY they were so adamant to keep out the viral cases. Again, questions need to be asked, and answers weighed in the context of the integrity of previous data.

I do agree with Cort that this is a golden opportunity for the CAA (Yikes, Dr YES!) to do the right thing, and go back to square 1 in their analyses of past and future cohorts. Fight the good fight for EVERY XMRV study. Apply that sleuthing excellence (and Dr Vernon DID ask some important questions of the WPI) equally, across the board. I'm personally very optimistic that this might be a turning point for the CAA in this regard.

Cort, I want to thank you for taking the time to look over this thread. I honestly wasn't sure where to put it, and figured that it would settle ultimately where it belongs. You are very gracious to accept multiple opinions on this site, and I do appreciate that. It builds a richer community. You always have something constructive to add on the science, and I will get back to you on some more thoughts after my next retreat in bed...

I should add however that I terrorized our dogs when I broke into hysterical (happy) laughter when I read Dr Yes' satire. I didn't take ANY of it personally, nor did I interpret it as anything other than an extremely witty - and very pleasantly warped - opportunity to pop the bubble of a moment that was maybe getting too officious. If you read it as satire playfully directed at me (which is exactly what I did), then I personally found it to be enormously funny. I've re-read it a couple of times, and honestly my first instinct was (Yikes - differences aside, I wasn't intentionally abusing the CAA's name, and I WILL TRY HARDER!); and then (GEEZ, that was FUNNY - in a self-deprecating way). Honest, I think it was very harmless (other than to our dogs) - with just a gentle but humorous poke at me that if I'm going to be THIS anal, I should get ALL my facts straight!

As to anyone reposting, cutting/pasting, or writing letters from it - GO FOR IT! My only request is that you give attribution - just copy and paste the web link http://www.forums.aboutmecfs.org/showthread.php?3860-Scandal-in-BMJ-s-XMRV-CFS-Research , so that both Cort's host forum and me, the Parvowriter, get acknowledged.

Be back soon...

Parvo (furiously removing skin cells and paw marks from my keyboard):Retro smile:
 

Cort

leads one to wonder whether this is systemic incompetence, wilful deceit, or just blithe indifference to "playing by the rules", given that after all, they are dealing with a bunch of whinging gits. This "hiccup" already crosses the UK and Netherlands.
I think you made a great point about how these things can spread. The samples from this cohort were used in several studies. This kind of thing can have a real impact. I imagine that it happens frequently - on both sides of the aisle.

This is one reason to put in a plug for the CFIDS Association, actually - and their proposed research network and the patient BioBank and database. They will rigorously characterize the people in that biobank. They'll do a rigorous enough analysis that hopefully they can start to tease out the different subsets in this disorder; maybe there is a group of patients that looks like the Dutch group (high muscle pain, low rate of infections, high rates of gastrointestinal problems - in some ways they sound like FM patients more than CFS patients). Then there's gonna be a group with high rates of flu-like symptoms, problems sleeping, sore throats, plus probably muscle and joint pain, etc. That you can find out with a BioBank - when you throw gene expression and lab tests in there, if you have a pretty good sample size, you should pretty quickly be able to tease out subsets simply by mining the data.
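The subset-mining idea can be sketched in a few lines. This is a toy illustration with invented symptom scores - not the CAA's actual biobank, fields, or methods: two seeded subgroups (pain-dominant vs flu-like) are recovered from the data alone by a bare-bones 2-means clustering.

```python
import random

random.seed(1)

# Hypothetical data: each "patient" is a vector of symptom severities
# [muscle_pain, flu_like]; the two subgroups are deliberately well separated.
def make_patient(pain_mu, flu_mu):
    return [random.gauss(pain_mu, 0.5), random.gauss(flu_mu, 0.5)]

patients = [make_patient(4, 1) for _ in range(30)]   # pain-dominant, FM-like
patients += [make_patient(1, 4) for _ in range(30)]  # flu-like, infection-onset

def dist2(a, b):
    # squared Euclidean distance between two symptom vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

centers = [patients[0], patients[-1]]                # crude initialisation
for _ in range(10):                                  # Lloyd's k-means iterations
    groups = [[], []]
    for p in patients:
        groups[0 if dist2(p, centers[0]) < dist2(p, centers[1]) else 1].append(p)
    # recompute each center as the coordinate-wise mean of its group
    centers = [[sum(c) / len(g) for c in zip(*g)] for g in groups]

print([len(g) for g in groups])                      # two subsets of roughly 30 each
```

A real analysis would of course use many more features (gene expression, lab values), a principled choice of cluster count, and validation against clinical outcomes - but this is the core of "tease out subsets simply by mining the data".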


With regard to the 'satire': yes, it could have been worse, but the reason I found out about it all was a PM that got to me pretty quickly from someone who was pretty dismayed by it. This isn't the first time we've gotten complaints about a double standard.
 

Cort

Here's my take from the XMRV Buzz page:

http://www.aboutmecfs.org/Rsrch/XMRVBuzz.aspx

The Cohort Question AGAIN! - We've spent a lot of time on that original Science cohort, but they're not the only cohort involved. After digging deeper into the cohort from the last Dutch study, Parvofighter on the Phoenix Rising Forums has uncovered some disquieting facts about them. We knew that they were defined using the Oxford definition - which is not optimal, for sure, but not necessarily a game changer - but what Parvofighter uncovered goes beyond the use of a poor definition.

The samples from that study were gathered a long time ago - prior to the creation of the standard Fukuda criteria that researchers use today. We're talking early in the recent history of CFS when things were even foggier than they are today. Their symptom profile appears to be an odd one; they were clearly ill but in significant ways they just don't look like CFS patients. This is what they looked like.

Seventy-one percent had muscle pain - fine - but only 51% said they had difficulty concentrating. Headache, gastrointestinal complaints and dizziness were pretty common (around 45%), but only 43% had sleep disturbances. Both sleep problems and trouble concentrating seem to be present at substantially lower percentages than are usually found in CFS. Recurrent infections came in at a low 26%, and sore throat, one of the cardinal symptoms of the Fukuda definition, is down at 13%. Sore throat isn't believed to be as common in CFS as was once thought, but it's still far more common than 13% in this cohort. Unfortunately post-exertional malaise was not part of the symptom picture at that point, so we don't know about that.

In short, while the group does have some similarities to CFS as we know it, in several significant ways this group does not seem very "CFS'ey" - a fact that could certainly complicate a researcher's ability to find XMRV in it. This was a study that appeared to almost immediately put the nail in the coffin of XMRV for some researchers, but one wonders just how likely it ever was to find much XMRV at all.
 

bel canto

Senior Member
Messages
246
Likes
467
I read the post from Dr. Yes as SOLELY meant to make us laugh - which it did - and as having nothing to do with criticism of the CAA! It sounds like someone quickly read it, totally misinterpreted it, and fired off a premature PM to complain.
 

parvofighter

Let's move on, eh?

OK folks, let's move it back to the science. For what it's worth, before all the open discussion of Dr Yes's wonderful satire (that I thought had wonderful comedic timing, and from my perspective was humorously and warmly directed @ me solely) I sent him a PM with the following:

Terrified canines
Dr Yes. I scared all of our 3 dogs, I was laughing so hard. So THIS is what New Yorker humor is like! Aaaach - that was one of the funniest posts I have ever read. Thanks for making my day!

Now I REALLY gotta go. More later. Be scared. Be very scared.

Cheers, Parvo

Now back to the science, OK? How do we get this scandal to the senior BMJ editorial board? To the press? How do we enlist the CAA to help fight the good fight? BTW Cort, I totally agree that the CAA's cohort definition & biomedical research is extremely timely to help sort this out. We desperately need a completely new (and rigorous) lens through which to view all the seriously flawed cohorts of the past.

I take bad science on ME/CFS VERY personally because it's robbing me of time as our kids are growing up. How do we get this factual scandal "out there"!

:Retro smile:
 

Dr. Yes

Shame on You
Messages
868
Likes
45
Back To You, Parvo! And to Stuff We Can All Agree On!

OK folks, let's move it back to the science. How do we get this scandal to the senior BMJ editorial board? To the press? How do we enlist the CAA to help fight the good fight? BTW Cort, I totally agree that the CAA's cohort definition & biomedical research is extremely timely to help sort this out. We desperately need a completely new (and rigorous) lens through which to view all the seriously flawed cohorts of the past.

I take bad science on ME/CFS VERY personally because it's robbing me of time as our kids are growing up. How do we get this factual scandal "out there"!

:Retro smile:
-----------
 
Gerwyn
I wasn't referring to the Dutch study, Gerwyn. I was referring to Parvo's statement that the Science patients were 'well characterized'. You'll see that I basically agree with you that this was not a group of CFS patients as we know them.



I agree that this is a real problem. 28% should not be a convincing number; it should be a warning sign to look elsewhere!


I think you're right - I think there is a cohort issue here.

But this



So they had to have their symptoms for a year? Is that what they're saying? So they didn't have 6-month to 1-year CFS patients? What's the big deal? Do I have this wrong?

36% of patients who on one score may have clinical depression? This does seem high, but then again it's probably only a matter of degree. Depression is increased in every major chronic illness. What is typical of CFS? 25%? 30%? - I don't know. So it seems a bit high to me, but then again depression is going to be in there and it's not excluded by any definition. It's the symptom profile that is really disturbing to me.

Cohort, Cohort, Cohort - You started out with a question about the cohort - that is the key question here. Was that group representative? It sure doesn't look like it -which is an important finding I think! If you don't mind I'm going to post it in the XMRV Buzz page I put out. :)
If you look at the authors' work, they claimed a well characterised CFS cohort when it was not one - either deliberate or accidental misrepresentation. Either way the paper is invalid. It is not a question of whether it looks like there is a problem. The authors themselves state that the fatigue involved had psychological causes, directly or indirectly. There is no need to make a subjective judgement here.

You are wrong, Cort.

You need a minimum score on their assessment tool to be diagnosed with clinical depression. This does not mean comorbid depression as sometimes found in patients with ME; that is measured differently - assuming the psychiatrists involved play by the rules, of course!

The latter appears to occur disproportionately in the bedbound, who would not have self-referred for CBT.

The Fukuda criteria attempted to exclude patients with depression caused by psychiatric OR medical causes; Oxford excludes only medical causes, so depression specifically is not excluded. Because of this, patients diagnosed according to Oxford can masquerade as meeting the Fukuda criteria.

The Canadian criteria exclude both, and in addition make post-exertional worsening mandatory. This is what the CAA should be pushing for.

From a CBT proponent's perspective it is actually advantageous to include patients with milder forms of depression. The fatigue associated with this category responds disproportionately well to CBT, because that kind of depression responds disproportionately well to CBT. Including even a small cohort of this nature will make it appear as if the entire CFS population has shown appreciable benefit from CBT, especially if the results are quoted as percentages and individual patient responses are not included. The way to avoid displaying the results in detail is to use the phrase "They were a well characterised cohort of CFS patients". They of course never add "according to our mickey mouse selection criteria".
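The arithmetic behind this argument can be shown with a toy simulation (all numbers invented, not drawn from any actual trial): mix a minority of depression-driven strong CBT responders into a cohort with no true response, then report only the aggregate figures.

```python
import random

random.seed(2)

# Hypothetical numbers sketching the dilution argument, not any real trial:
# 80 patients with no true CBT effect, 20 strong responders (mild depression).
n_me, n_dep = 80, 20
me_change = [random.gauss(0, 5) for _ in range(n_me)]    # change score: pure noise
dep_change = [random.gauss(30, 5) for _ in range(n_dep)] # change score: large real response

cohort = me_change + dep_change
mean_improvement = sum(cohort) / len(cohort)
responders = sum(1 for c in cohort if c > 10) / len(cohort)

# Cohort-level summaries hide that improvement is confined to one subgroup.
print(f"mean improvement: {mean_improvement:.1f} points")
print(f"'responders': {responders:.0%}")
```

The headline "mean improvement" is positive and the "responder rate" looks respectable, yet four-fifths of the cohort experienced no true benefit - which is why individual patient responses, not just percentages, matter.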
 
Gerwyn
if you have a pretty good sample size, you should pretty quickly be able to tease out subsets simply by mining the data.

That is of course what the WPI did - used blood from a biobank, not a blood bank.

The concept of subsets is at the moment an unproven hypothesis. If anyone is undertaking work assuming it to be fact then they are immediately departing from the scientific method, and will interpret any data according to their theoretical preconceptions. That is entirely unscientific. Retrospectively trawling for data is not science either; the psychos would simply rip it apart.

I think you made a great point about how these things can spread. The samples from this cohort were used in several studies. This kind of thing can have a real impact. I imagine that it happens frequently - on both sides of the aisle.

I can't understand you imagining it happening on both sides of the aisle. There is no evidence of this problem in any research carried out by proponents of biomedical causation.

The research carried out by proponents of psychological causation, on the other hand, has produced copious quantities of evidence of results corrupted by poor cohort diagnosis - especially the deliberate inclusion of depression and the deliberate exclusion of the majority of symptoms reported by patients with ME.
 
Messages
4
Likes
0
Location
Australia, NSW, Mid North Coast
We could try contacting the editor of The Lancet. They are very much in competition with the BMJ.
The Lancet may even have a conscience re misleading "science":


"A British medical journal has retracted a controversial study which suggested a link between autism and the measles-mumps-rubella vaccine known as MMR.

The now-discredited research by Doctor Andrew Wakefield caused vaccination rates to plummet in Britain.

Six years ago The Lancet admitted it was wrong to publish the research, but it has taken the journal 12 years to retract the Wakefield paper.

It emerged that Dr Wakefield failed to disclose that he was being paid to investigate claims that children had been damaged by the MMR vaccine.

It took an in-depth investigation by the General Medical Council to finally convince the journal to formally retract the paper.

The council found Dr Wakefield had been dishonest and irresponsible in his research methods.

-BBC"

The concern is how much damage may have been caused by such widely repeated but unfounded assertions in the 12 years between print and retraction.

Maybe The Lancet could have something to teach the BMJ!
 

julius

Watchoo lookin' at?
Messages
785
Likes
5
Location
Canada
For 12 years Dr Wakefield's opponents were unable to refute his science, even with the heroically corrupt Poul Thorsen on their side. Finally they resorted to nitpicking technicalities, which led to the ridiculous retraction of that paper.

Truly a black mark on the history of medical science.

Discredited....please! What researcher is not getting paid? The WPI study is certainly discredited because Dr. Mikovits gets a salary.
 
Gerwyn
For 12 years Dr Wakefield's opponents were unable to refute his science, even with the heroically corrupt Poul Thorsen on their side. Finally they resorted to nitpicking technicalities, which led to the ridiculous retraction of that paper.

Truly a black mark on the history of medical science.

Discredited....please! What researcher is not getting paid? The WPI study is certainly discredited because Dr. Mikovits gets a salary.
Totally agree - especially when the association between vaccines and the activation of preexisting latent infections is so well known.
 

Dx Revision Watch

Suzy Chapman Owner of Dx Revision Watch
Messages
3,045
Likes
6,039
Location
UK
Ed: Note Richard Smith is a former editor of the BMJ


http://blogs.bmj.com/bmj/2010/03/22/richard-smith-scrap-peer-review-and-beware-of-“top-journals”/


Richard Smith: Scrap peer review and beware of "top journals"

22 Mar, 10 | by julietwalker

Richard Smith

The neurologist and epidemiologist Cathie Sudlow has written a highly
readable and important piece in the BMJ exposing Science magazine's poor
reporting of a paper on chronic fatigue syndrome, (1) but she reaches the
wrong conclusions on how scientific publishing should change.

For those of you who have missed the story, Science published a case
control study in September that showed a strong link between chronic
fatigue syndrome and xenotropic murine leukaemia virus-related virus
(XMRV). (2)

The study got wide publicity and was very encouraging to the many people
who believe passionately that chronic fatigue syndrome has an infectious
cause.

Unfortunately, as Sudlow describes, the study lacked basic information on
the selection of cases and controls, and, worse, Science has failed to
publish E-letters from Sudlow and others asking for more information.

In the meantime, three other studies have not found an association between
chronic fatigue syndrome and XMRV. (3-5)

To avoid such poor reporting in the future Sudlow urges strengthening the
status quo - more and better prepublication peer review.

Not only is she trying to close the stable door after the horse has bolted,
she has also failed to recognise the possibilities of the new Web 2.0
world.

The time has come to move from a world of "filter then publish" to one of
"publish then filter" - and it's happening.

Prepublication peer review is faith based not evidence based, and Sudlow's
story shows how it failed badly at Science.

Her anecdote joins a mountain of evidence of the failures of peer review:
it is slow, expensive, largely a lottery, poor at detecting errors and
fraud, anti-innovatory, biased, and prone to abuse. (6 7)


As two Cochrane reviews have shown, the upside is hard to demonstrate. (8
9) Yet people like Sudlow who are devotees of evidence persist in belief in
peer review. Why?

The world also seems unaware that it is scientifically dangerous to read
only the "top journals".

As Neal Young and others have argued, the "top journals" publish the sexy
stuff. (10)


The unglamorous is published elsewhere or not at all, and yet the evidence
comprises both the glamorous and the unglamorous.

The naïve concept that the "top journals" publish the important stuff and
the lesser journals the unimportant is simply false.

People who do systematic reviews know this well.

Anybody reading only the "top journals" receives a distorted view of the
world - as this Science story illustrates.

Unfortunately many people, including most journalists, do pay most
attention to the "top journals."

So rather than bolster traditional peer review at "top journals," we should
abandon prepublication review and paying excessive attention to "top
journals."

Instead, let people publish and let the world decide.

This is ultimately what happens anyway in that what is published is
digested with some of it absorbed into "what we know" and much of it never
being cited and simply disappearing.

Such a process would have worked better with the story that Sudlow tells.

The initial study would have appeared - perhaps to a fanfare of publicity (as
happened) or perhaps not.

Critics would have immediately asked the questions that Sudlow asks.

Instead of hiding behind Science's skirts as has happened, the authors
would have been obliged to provide answers.

If they couldn't, then the wise would disregard their work.

Then follow up studies could be published rapidly.

Unfortunately, unlike physicists, astronomers, and mathematicians, all of
whom have long published in this way, biomedical researchers seem reluctant
to publish without traditional prepublication peer review.

In reality this is probably because of innate conservatism and the grip of
the "top journals," which insist on prepublication review. But biomedical
researchers often say: "Our stuff is different from that of physicists in
that it may scare ordinary people. A false story, for example 'Porridge
causes cancer', can create havoc."

My answer to this objection is that this happens now.

Much of what is published in journals is scientifically poor, as the Science
article shows.

Moreover, many studies are presented at scientific meetings without peer
review, and scientists and their employers are increasingly likely to
report their results through the mass media.

In a world of "publish then filter" we would at least have the full paper
to dissect, whereas reports in the media, even if derived from scientific
meetings, include insufficient information for critical appraisal.

So I urge Sudlow, a thinking woman, to reflect further and begin to argue
for something radical and new rather than more of the same.

1. Sudlow C. Science, chronic fatigue syndrome, and me. BMJ 2010;340:c1260

2. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL,
et al. Detection of an infectious retrovirus, XMRV, in blood cells of
patients with chronic fatigue syndrome. Science 2009;326:585-9.

3. Van Kuppeveld FJM, de Jong AS, Lanke KH, Verhaegh GW, Melchers WJG,
Swanink CMA, et al. Prevalence of xenotropic murine leukaemia virus-related
virus in patients with chronic fatigue syndrome in the Netherlands:
retrospective analysis of samples from an established cohort. BMJ
2010;340:c1018.

4. Erlwein O, Kaye S, McClure MO, Weber J, Willis G, Collier D, et al.
Failure to detect the novel retrovirus XMRV in chronic fatigue syndrome.
PLoS One 2010;5:e8519.

5. Groom HC, Boucherit VC, Makinson K, Randal E, Baptista S, Hagan S, et
al. Absence of xenotropic murine leukaemia virus-related virus in UK
patients with chronic fatigue syndrome. Retrovirology 2010;7:10.

6. Godlee F, Jefferson T. Peer Review in Health Sciences. 2nd ed. London:
BMJ Books; 2003.

7. Smith R. Peer review: A flawed process at the heart of science and
journals. J R Soc Med 2006;99:178-182.

8. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review
for improving the quality of reports of biomedical studies. Cochrane
Database of Systematic Reviews 2007, Issue 1. Art. No.: MR000016. DOI:
10.1002/14651858.MR000016.pub3

9. Demicheli V, Di Pietrantonj C. Peer review for improving the quality of
grant applications. Cochrane Database of Systematic Reviews 2007, Issue 1.
Art. No.: MR000003. DOI: 10.1002/14651858.MR000003.pub2

10. Young NS, Ioannidis JPA, Al-Ubaydli O. Why current publication
practices may distort science. PLoS Med 2008;5(10):e201.
doi:10.1371/journal.pmed.0050201

Competing interest: RS is on the board of the Public Library of Science and
an enthusiast for open access publishing, but he isn't paid and doesn't
benefit financially from open access publishing.
 

oerganix

Senior Member
Messages
611
Likes
9
For 12 years Dr Wakefield's opponents were unable to refute his science, even with the heroically corrupt Poul Thorsen on their side. Finally they resorted to nitpicking technicalities, which led to the ridiculous retraction of that paper.

Truly a black mark on the history of medical science.

Discredited... please! What researcher is not getting paid? The WPI study is certainly discredited because Dr. Mikovits gets a salary.
Yes, another hatchet job on good science by BigPharma. Of course patients' parents paid for the research! No one else had the ovaries to do it and defy all the corporate propaganda. So, will Science eventually retract the WPI research paper because Andrea Whittemore's parents paid to get the Institute up and running? Geeez, I hope Science has more ethics and economic independence than either the BMJ or the Lancet.
 

Esther12

Senior Member
Messages
13,774
Likes
28,350
Ed: Note Richard Smith is a former editor of the BMJ


http://blogs.bmj.com/bmj/2010/03/22/richard-smith-scrap-peer-review-and-beware-of-“top-journals”/


Richard Smith: Scrap peer review and beware of "top journals"
Sounds totally dismissive of the Science paper, but I'm really not too sure why. It sounds like people are rubbishing it primarily because of the failed replication attempts, but then acting as if it was clear from the initial publication that it was poor science. The problems with the initial paper don't seem terribly significant, and if it does turn out to have been totally wrong, I don't think we know why or how yet.

re Wakefield: Surely his undeclared interests were bad? I really don't know much about this, but it does seem that there were problems with his piece.