There have been many models of ME and CFS over the years. There are a number of issues that seem underemphasised that I wish to make explicit.
This is not a criticism of any of these models; it's an analysis of issues common to many of them.
My first model was made about 1998, involving hypercitricemia. I presented it at the 1999 Sydney CFS conference as a poster. By 2000 it was apparent it was wrong, but by that point I had already moved on to the second version of my model. In 2002 I came up with a third version. Both the second and third versions remain unproven, but are also not disproved.
My later models involved intracellular ionized calcium, so it is still relevant to current models involving acetylcholine, nitric oxide, mitochondrial dysfunction, cellular hormone balance and vitamin D. This blog is not about my old models however. I just wanted to give some perspective on where I was coming from.
The first and most obvious issue is with definitions. This is so obvious to most that it's almost not worth mentioning, but I include it here for completeness.
When developing models it's important to be aware of which definition was used in the research, particularly if it's from a CFS paper. The Oxford definition is so vague that none of the data is reliable. The CDC Empiric or Reeves definition is not much better. Fukuda was the international standard for a decade and a half, and may still be, but Fukuda himself stressed that it was only a temporary definition and that subgrouping would be required.
The Canadian Consensus Criteria (and the revised version) are the best definitions for CFS. The International Consensus Criteria for Myalgic Encephalomyelitis is the latest ME definition, and appears to be received well by CFS and ME researchers.
Data from under one definition is not necessarily relevant to patient groups defined under another.
Ladder of Evidence
The ladder of evidence is an analogy I developed about five years ago while reviewing one of today's popular models. Essentially there are different standards of evidence for the claims made about a model. I am only presenting three rungs here; the ladder can have many more, but to show that I would have to write about a specific model, and that is not the point of this blog.
The first rung is basic: are the chemical pathways involved in the model real? Without this it's just a speculative hypothesis - which does not make it wrong, it just means it needs more evidence to be taken seriously. Most models meet this criterion. Many modellers have taken existing biochemical pathways and existing pathogenesis, and developed their models accordingly.
The second rung on the simplified ladder is relevance. Is there enough data to show the model involves pathways that are relevant to ME or CFS pathology? In most cases these models do indeed show that. There is not much doubt that oxidative stress, glutathione, NO, H2S, some B vitamins and their precursors, cytokines and natural killer cell function are involved in ME or CFS. This is important because it verifies that the model is worthy of further study, and worth considering as an alternative explanation when new data becomes available.
The third rung is the one they all fail on. Is there sufficient evidence to show the model involves not just pathology but causation? This is hard to do. It is not sufficient that treatment based on the model causes some improvement in some or many patients. It's not enough that some patients have full symptom remission. To show causation, a series of studies would have to show both that the postulated critical pathways are driving the pathology, and that highly specific intervention results in recovery. In the case of a causal pathogen, this would involve demonstrating that pathogen clearance leads to full remission.
Subgroups are important here. One or more models might be applicable only to a specific subgroup. Identifying that subgroup, and finding critical markers for it, is essential to dealing with the model. To date no one treatment works on everyone, so it is very important to find out who it works on and what markers they have.
Let me give you a taste of the range of possibilities that we have to deal with:
1. None of the models are causal; they are all about peripheral pathophysiology, or are simply wrong.
2. None of the models are causal, but several describe important pathophysiological mechanisms that offer intervention opportunities as therapy not cure.
3. Several of the models show partial mechanisms that cause ME or CFS, but they are incomplete and require other models, other mechanisms, or as yet to be discovered mechanisms to be complete.
4. Some of the models deal with specific subgroups, and there is different causation in each of these subgroups, so multiple models address causation, just in different people.
5. One model is fully causal. Correcting key pathways in this model results in full remission. No model is at this point yet.
Again this is a spectrum, and there are many intermediary stages that could be listed.
Issues with Causation: Subgroups
The biggest issue is subgrouping. There are a range of possibilities.
For example, broad CFS could have multiple variations: it's all the same disorder/disease but with varying complications and severity. (This does look unlikely though, as it does not allow for numerous rare or undiscovered genetic diseases, as just one example, and high levels of misdiagnosis are almost certainly occurring under the weaker CFS definitions.)
At the other end of the spectrum, I wonder whether ME is one disease or two. The Rituximab study on CFS (CCC) shows that two-thirds are responders and one-third non-responders. The Lights' research shows at least two different kinds of response post exercise. So it is possible we are talking about two different but very similar diseases, or again it could be one disease with two different complications, or even a spectrum of issues.
Broad inclusive studies can be beneficial if they use adequate methodologies. In principle this even includes studies using the Oxford definition. What is required is that every single patient is classified multiple ways, under multiple definitions. The study would then analyse its results separately for each group. This way it can validate or invalidate specific definitions, optimize research toward relevant subgroups, and so on. Unfortunately, nearly all studies using loose definitions of CFS fail to even slightly address this opportunity, perhaps because of resource issues or limited numbers of patients, or perhaps because it would open their study up to complete refutation and they are not willing to have their model discredited. Studies using more specific definitions also fail to capitalize on this; it's not just studies using the loose definitions.
Let me put it another way: researchers tend to be either lumpers or splitters - group everyone together, or isolate more specific groups. My view is that we need both lumpers and splitters, although the quality of the research to date appears to be much higher from the splitters, as they have a less heterogeneous group to study.
Another issue with subgroups concerns illness severity and duration. Too many studies use only mild or moderate patients who have been sick for only a few years. These patients are easy to enrol and easy to study, but this misses a very important opportunity. Long-term patients and severely disabled patients are likely to show the core pathophysiology more clearly. Not studying them is easy and cheap, but likely to produce substandard results.
On this point I would like to emphasize that I think the trend toward post-activity studies, and in particular exercise test-retest studies, is an important step toward improving ME and CFS research.
Issues with Causation: Therapy
One of the tendencies over the years has been to consider improvement from treatment based on a specific model as evidence of the correctness of the model. It might indeed be that, but it might also be evidence only that the model addresses some secondary pathophysiology.
One treatment protocol I looked at some years ago had a large number of adherents. I won't mention the name of the protocol; this is only used as an example, and I do not want to start criticising specific models. A good number of patients improved, some a lot, and a good number failed to improve. The improving patients became convinced the model was correct. I was invited to examine the biochemistry, and when I did I found a glaring hole at the core of the model. It was wrong. This did not mean the treatment was a fraud; it meant we did not understand why it was working, and therefore could not reliably identify who would benefit and who would be non-responders. This lack of understanding also meant that the therapy could not be improved - if you don't know why something is working, tinkering with it is hit or miss.
Another example involved a model of CFS causation and cure: yes, that's right, it was claimed to be a cure. This followed the complete recovery, over several years, of someone who had been diagnosed with CFS. They developed their own theory and their own treatment plan, and they recovered. To them it meant they had THE cure. First, it was one individual. We have no idea what was really wrong with them. Second, their model chemistry was flawed. A key component of their model was chemically impossible - it just looked good if you didn't know anything about chemical bonds. There was another hypothesis from some years prior that would have plugged this gap (so it wasn't really a new model), but the author of this model didn't want to know. This model involved sulphur chemistry and B vitamins, back in the 90s. More recent models involving sulphur and B vitamins are much more sophisticated and more in touch with the chemistry than this one was, but I have to wonder if the treatment protocol had some value even if the model was wrong.
Pragmatic Issues - Treatment versus Understanding
Now a big issue is this: if the model is flawed but the treatment works, that's good enough to start with. We all want a cure, or failing that a fantastic treatment. If the underlying model is flawed but the treatment works, then follow-up study will eventually reveal the real explanation, and second-generation treatments can be developed. A model being flawed is not the end of the story. Give us all remission and we can figure out the details later!
In this case it is much more likely that the model is incomplete rather than flawed, although components of the model could be flawed without invalidating the entire model.
Why is modelling important?
I deliberately did not use names of models here, I don't want to get drawn into specifics at this time. I personally think this kind of modelling is important for several reasons, if I didn't I wouldn't have made my own models.
A model is about possibilities, and possibilities are hard to prove. Getting the possibilities out there means that researchers can become aware of them. As evidence accumulates, certain models will rise in importance. Researchers can then design studies to take those models into account, or use them in discussing the relevance of their data.
Models are empowering. They offer potential therapeutic interventions. Many of these are a colossal waste of money, but they give us hope. I can survive not being well, but being unwell and without hope is not a place I want to be.
Models can grow and integrate with other models. Many of the existing models have overlapping chemistry. As we understand more, these models will grow and merge, giving us more detailed, testable models.
Models can show pathophysiology. Even if a model is not about causal mechanisms, it can highlight abnormal chemistry and chemical regulation that can be a target of intervention. This can lead to successful treatment, as many patients are aware already.
One or more models of causation will eventually be shown to be correct. How many depends on how many types of ME and CFS there are. Identification of specific causal mechanisms allows targeted therapy, and attracts serious research dollars.
Modellers and research scientists interact; indeed, many of the modellers are research scientists themselves. Models become a sounding board for possibilities, stimulate lateral thinking, and offer opportunities that might otherwise be missed.
Models are important, but they are only a part of the entire effort.
One last comment: I would like to say thank you to everyone who has put time and effort into modelling aspects of ME and CFS pathology or causation. Your work is appreciated, at least by me.
Issues with ME Models
Blog entry posted by alex3619, Dec 14, 2011.
About the Author
I am a long-term ME patient with many complications. While I have pushed research advocacy since 1993, I became political around 2009. My current project is a book called "Embracing Uncertainty". Uncertainty in medical science seems anathema to too many doctors. "I do not know" is something more doctors should be honest about.