Discussion in 'General ME/CFS News' started by Firestormm, May 2, 2014.
US Department of Health and Human Services
AHRQ Agency for Healthcare Research and Quality
From a quick scan through this, what strikes me most is an apparent contradiction - and some confusion - between the Background and Objectives section and the PICOTS for the Key Questions. From the Background section (with my emphasis added):
Despite this statement, the list of interventions to be analyzed appears to include only CBT, GET, and alternative therapies, plus 'symptom-based management'. Although it doesn't exclude treatments 'intended to treat the underlying cause of the disease' (which the intro summarizes as including 'immune modulators...and antiviral and antibiotic medications'), neither does it explicitly include them:
But confusingly, immune modulators are listed in the 'Includes' under 'symptom-based medication management', whereas in the 'background and objectives' section they are classed under the category of 'intended to treat the underlying cause of the disease' as contrasted with 'those targeting specific symptoms'.
So it isn't clear to me whether they will even assess evidence regarding treatments 'intended to treat the underlying cause', such as antivirals and antibiotics, and I don't understand why they have changed immune modulators from 'intended to treat the underlying cause of the disease' to 'symptom-based medication management'.
I'm also puzzled by the CFSAC advisory about this which I received in my email today:
As far as I can recall the entire process for establishing these questions was entirely closed, and we weren't even able to discover who the people were who were nominated by the department to come up with this protocol. There has been no opportunity for public comment on, or input into, the protocol, as far as I am aware, so the claim that "The patient perspective has also been represented and provided input throughout these processes" strikes me as misleading at best. I'm curious to know just how the 'patient perspective' was supposedly represented and provided input. Am I missing something here?
Jeanette Burmeister's "P2P review protocol: still no transparency"
I have started another thread on this here. @Kina or @Sushi, can you please merge it here? Thank you.
Jennie Spotila's analysis: "Protocol for Disaster" http://www.occupycfs.com/2014/05/02/protocol-for-disaster/
This is a great analysis and worth reading in full!
Did they really find a patient willing to sign off on that? Where did they get them from?
The patient and expert representatives did not sign off on this (except maybe the questions). They were supposed to have some kind of input at the beginning, then the Workshop Panelists at OHSU took it from there.
ETA: they were supposed to get to sign off on the questions, but apparently not
Looks like madness to me. Repeating the same mistakes and expecting a different outcome.
The overall review etc is not a bad idea, it is just that their consistent unwillingness to listen to patients or even comprehend the importance of transparency suggests that the process will have unrepresentative results.
Why, I am surprised at you Esther12! [Joking, sarcasm.] There are lots of chronic fatigue patients around ... lots of depressed people, unfit people, anemic people etc. Easy to find.
More seriously, I suspect that participation in some of the process is being misrepresented to suggest more in-depth participation.
We have had research railroaded by government incompetence how many times? There are reasons we insist on transparency and participation.
You could well be right... one would hope that this would prompt those involved to speak out publicly about it.
Yes, I suspect this is the case as well.
In addition to which, having exactly one patient as "patient representation" is insufficient. They should have a senate of patients.
Jennie Spotila has given permission to repost her assessment of the P2P study protocol here in its entirety. (Thank you Ms. Spotila.)
Please post comments on her blog http://www.occupycfs.com/2014/05/02/protocol-for-disaster/
As concerned as we are about IOM, P2P looks far, far worse!
Protocol for Disaster?
May 2nd, 2014, Jennie Spotila
The study protocol for the systematic review of ME/CFS was posted by the Agency for Healthcare Research and Quality yesterday. It’s a recipe for disaster on its own, and within the broader context of the NIH P2P Workshop it’s even worse. Let me show you some of the reasons why.
Remind Me What This Is
The systematic evidence review is the cornerstone of the P2P process. The P2P meeting on ME/CFS will feature a panel of non-ME/CFS experts who will produce a set of recommendations on diagnosis, treatment, and research.
Because the P2P Panel members are not ME/CFS experts, they need background information to do their job. This systematic evidence review done by the Oregon Health & Science University under contract to AHRQ will be that background information. The systematic evidence report will be presented to the Panel in advance of the public P2P meeting, and will be used to establish the structure of the meeting as well.
The systematic review is the foundation. If done correctly, it would be a strong basis for a meaningful workshop. If done poorly, then everything that follows – the workshop and the resulting recommendations – will crumble. Based on the protocol published yesterday, I think “crumble” is putting it mildly.
The Key Questions
You can’t get the right answer if you don’t ask the right questions. (Dr. Beth Collins-Sharp, CFSAC Minutes, May 23, 2013, p. 12)
As I wrote in January, the original draft questions for the evidence review included whether CFS and ME were separate diseases. That question is GONE, my friends. Now the review is only looking at two things:
What methods are available to clinicians to diagnose ME/CFS and how do the use of these methods vary by patient subgroups?
What are the benefits and harms of therapeutic interventions for patients with ME/CFS and how do they vary by patient subgroups?
These questions are based upon a single and critical assumption: ME and CFS are the same disease. Differences among patient groups represent subtypes, not separate diseases. The first and most important question is whether the ME and CFS case definitions all describe one disease. But they’re not asking that question; they have already decided the answer is yes.
The study protocol and other communications from HHS (including today’s CFSAC listserv message) state that the P2P Working Group refined these study questions. The implication is that since ME/CFS experts and one patient served on the Working Group, we should be satisfied that these questions were appropriately refined. But what I’m piecing together from various sources indicates that the Working Group did not sign off on these questions as stated in the protocol.
Regardless of who drafted these questions, they cannot lead to the right answers because they are not the right questions. And when you examine the protocol of how the evidence review will be conducted, these questions get even worse.
The real danger signals come from the description of how this evidence review will be done. The issue is what research will be included and assessed in the review. For example, when asking about diagnostic methods, what definitions will be considered?
This evidence review will include studies using “Fukada [sic], Canadian, International, and others“, and the Oxford definition is listed in the table of definitions on page 2 of the protocol. That’s right, the Oxford definition. Oxford requires only one thing for a CFS diagnosis: six months of fatigue. So studies done on people with long-lasting fatigue are potentially eligible for inclusion in this review.
The description of the population to be covered in the review makes that abundantly clear. For the key question on diagnostic methods, the study population will be: “Symptomatic adults (aged 18 years or older) with fatigue.” There’s not even a time limit there. Three months fatigue? Four? Six? Presence of other symptoms? Nope, fatigue is enough.
There is a specific exclusion: “Patients with other underlying diagnosis,” but which conditions are exclusionary is not specified. So will they exclude studies of patients with depression? Because the Oxford definition does not exclude people with depression and anxiety. We’ve seen this language about excluding people with other underlying diagnosis before – and it results in lumping everyone with medically “unexplained” fatigue into one group. This protocol is set up to result in exactly that. It erases the lines between people with idiopathic chronic fatigue and people with ME, and it puts us all in the same bucket for analysis.
And what about the key question on treatment? What studies will be included there? All of them. CBT, GET, complementary/alternative medicine, and symptom-based medication management. It’s not even restricted to placebo trials; trials with no treatment, usual care, and head-to-head trials are all included.
Let’s do the math. Anyone with unexplained fatigue, diagnosed using Oxford or any other definition, and any form of treatment. This adds up to the PACE trial, and studies like that.
But it’s even worse. The review will look at studies published since January 1988 because that was the year “the first set of clinical criteria defining CFS were published.” (page 6) Again, let’s do the math: everything published on ME prior to 1988 will be excluded.
Finally, notice the stated focus of the review: “This report focuses on the clinical outcomes surrounding the attributes of fatigue, especially post-exertional malaise and persistent fatigue, and its impact on overall function and quality of life because these are unifying features of ME/CFS that impact patients.” (page 2) In other words, PEM = fatigue. And fatigue is a unifying concept in ME/CFS. Did anyone involved in drafting this protocol actually listen to anything we said at last year’s FDA meeting?
Maybe you’re thinking it’s better for this review to cast a broad net. Capture as much science as possible and then examine it to answer the key questions. But that’s not going to help us in this case.
This review will include Oxford studies. It will take studies that only require patients to have fatigue and consider them as equivalent to studies that require PEM (or even just fatigue plus other symptoms). In other words, the review will include studies like PACE, and compare them to studies like the rituximab and antiviral trials, as if both patient cohorts were the same.
That assumption – that patients with fatigue are the same as patients with PEM and cognitive dysfunction – is where this whole thing falls apart. That assumption contaminates the entire evidence base of the study.
In fact, this review protocol makes an assumption about how the Institute of Medicine study will answer the same question. It is possible (though not assured) that IOM will design diagnostic criteria for the disease characterized by PEM and cognitive dysfunction. But this evidence review is based on an entirely different patient population that includes people with just fatigue. The conclusions of this evidence review may or may not apply to the population defined by the IOM. It’s ridiculous!
But it’s the end use that really scares me. Remember that this systematic evidence review report will be provided to that P2P Panel of non-ME/CFS experts. The Panel will not be familiar with the ME/CFS literature before they get this review. And the review will conflate all these definitions and patient populations together as if they are equivalent. I think it’s obvious what conclusion the P2P Panel is likely to draw from this report.
I would love to be wrong about this. I would love for someone to show me how this protocol will result in GOOD science, and how it will give the P2P Panel the right background and foundation for the recommendations they will draft. Please, scientists and policy makers who read this blog – can you show me how this protocol will produce good science? Because I am just not seeing it.
What Do We Do?
This protocol is bad news but it is by no means the last word. Plans are already in motion for how the advocacy community can respond. I will keep you posted as those plans are finalized.
Make no mistake, this evidence review and P2P process are worse than the IOM study. We must respond. We must insist on good science. We must insist that our disease be appropriately defined and studied.
I just added this comment on the blog:
I don't think you are wrong about this in general. The specifics will elude us until we can see it in 20-20 hindsight though.
One huge issue is that evidence based medicine and review is not science. It's an attempt to translate science into clinical practice using managerial processes. It's rubber stamping and bean counting. So when a review is formed it's based on technical requirements, and typically on an assumption that the underlying science is sound ... it passed peer review, didn't it? It's often an RCT, which is the highest standard of evidence, is it not?
Yet we know RCTs can be very wrong, and further that meta-studies can suffer from GIGO: garbage in, garbage out. If the scientific methodology is not being examined, and evidence based reviews simply don't do this as a rule, then it's all accepted at face value.
Any time you see a committee it's translational medicine, or translational science, not really science. It thus has criteria other than scientific ones directing the outcome.
Psychogenic medicine, including the PACE trial, has so many flaws I cannot credit it as being science. So what is an RCT for non-science? Does it have the same credibility as a quality scientific study? I don't think so.
Rather than go into the huge range of flaws in psychogenic medicine including PACE, about which I am writing a book, I just want to focus on one single thing here: subjective versus objective evidence. The arguments from psychogenic medicine usually come down to subjective evidence, evidence that is inherently less reliable and more subject to bias than objective evidence.
Every time they have engaged in obtaining objective evidence it has run counter to their hypotheses. They usually avoid it like the plague, though it is likely this is just an entrenched flaw in methodology in this area. Yet this evidence, and the tiny effect sizes that may be entirely due to bias, are judged alongside highly objective studies that have hard evidence to back them.
It gets worse. Let's take, for example, Lerner's work on antivirals. It's a case series, not an RCT. So the quality is low, right? It should be ranked lower than RCTs? Yet mechanisms exist within EBM to allow for upgrading of evidence under certain conditions, including large effect sizes. So a case series could easily be ranked as high as an RCT. Yet that is not all.
RCTs that are of lower reliability can be downgraded. I would automatically downgrade any RCT based on psychogenic hypotheses by one or two rankings, simply due to methodological issues, and then examine them for reasons to downgrade further.
In cursory bureaucratic reviews, nobody is going to look too hard at subjective versus objective, or methodological flaws, or upgrading or downgrading evidence rankings, especially on a tight budget. So the evidence base will be rubber stamp evidence, based on technical criteria, unless highly developed mechanisms are in place to prevent this. Somehow I doubt this is the case.
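The up/downgrading mechanism described above can be sketched in a few lines. This is purely illustrative: the level names, starting levels, and adjustment sizes are my assumptions for the sake of the example, not the review's (or GRADE's) actual rules.

```python
# Illustrative sketch of GRADE-style evidence up/downgrading.
# Starting levels and adjustment sizes are assumptions for illustration only.

LEVELS = ["very low", "low", "moderate", "high"]  # indices 0..3

def rate(start, upgrades=0, downgrades=0):
    """Clamp a starting evidence level after upgrades and downgrades."""
    return LEVELS[max(0, min(3, start + upgrades - downgrades))]

RCT, CASE_SERIES = 3, 1  # an RCT starts "high"; a case series starts "low"

# A case series upgraded twice (e.g. for a very large effect size)
# can end up rated above an RCT downgraded twice for methodological flaws:
print(rate(CASE_SERIES, upgrades=2))   # -> high
print(rate(RCT, downgrades=2))         # -> low
```

This is the point being made: whether these adjustments are actually applied depends entirely on how carefully the reviewers scrutinize each study, which a cursory review is unlikely to do.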
I'm not at all surprised at this, as it is what I was expecting would happen. I note too that Florinef isn't even mentioned among the symptomatic treatments. I guess they want to just focus on "fatigue" and not the things we get with ME, such as POTS.
I do think that things are heading for the worst with the current review being done... they can use it as an excuse not to do another for a long time... it helps to keep us buried for longer.
I guess this is all one way to hide the "harms" of certain therapeutic interventions, as the "fatigue" group generally won't get sicker with GET, so that will greatly decrease the number of harms when they look at the studies of that group combined with the rest. There haven't been equivalent studies done on ME people (actually, has there even been an ME and GET study done at all?).
If they had any common sense they'd play it safe and not make huge, possibly problematic "assumptions" by throwing them together as one. It's an assumption, as they certainly do not know it to be the case.
I'm wondering about that too. Have they once again "twisted" things to look like something they actually are not?
Yeah, we are so screwed.
I guess some want those outbreaks in history to vanish, as they complicate the ME/CFS stuff for them. This is one way of doing that: don't look at anything on the ME outbreaks; out of sight, out of mind, and hence no need to even consider whether anything being written now makes sense in regard to those too.
The ME group already has a very weak stance due to being outnumbered by the fatigue group in studies (some studies suggest that only 1 in 6 of those with CFS may have ME, if we are looking at the stats). So any ME evidence is already greatly weakened, even before making the past history of ME outbreaks, and anything written around these or studies done on this patient group, vanish.
Maybe the IOM was set up from the start to have a certain outcome, hence they do not expect any other. They made sure there are more non-ME experts on the panel than experts, etc. I personally don't trust that the IOM panel hasn't been rigged so that the ME experts get outvoted on things; if everyone has to agree on something, there may be one or two on that panel put there to make sure certain ME things don't end up in the outcome. So yeah, I do think the DHHS already knows the outcome of the IOM.
I don't trust anything the governments, especially the American and English ones, do around the subject of ME (or CFS). They have never wanted to take this illness seriously, and I can only wonder why.
This appears to be the principal investigator for the systematic evidence review by the Oregon Health & Science University.
PI: Beth Smith, DO
Submitted through EPC (The Pacific Northwest Evidence-based Practice Center)
AHRQ (Agency for Healthcare Research and Quality)
Title: Diagnosis and Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome
The purpose of this project is to conduct a systematic review of the scientific literature on the diagnosis and treatment of ME/CFS, answering specific key questions for the NIH Office of Disease Prevention.