
Cochrane Review protocol: Exercise therapy for chronic fatigue syndrome (individual patient data)

Bob

Senior Member
Messages
16,455
Location
England (south coast)
So they're nearly all heavily involved in related research. Some or most of them have professional interests in exercise therapy because they are paid to provide services, and some/many of them have built a career on exercise therapy and related therapies. Two (?) of them coauthored the Oxford criteria that they'll be using for the review.

Next, I think we should have a look through their most recent research papers or any online profiles and find any declared or undeclared professional conflicts of interest such as consultancy contracts and employment with the medical insurance industry etc. (And, of course, we need to list the source of any info that we find.)

Later today I could set up a Google Docs page where we can start systematically listing the details for each author, if anyone thinks that's a good idea. Once we have a file with COI information on all the authors, we can submit it to Cochrane. Perhaps it might also help to try to get some high-profile academic names signed up to a letter to submit to Cochrane.
 

A.B.

Senior Member
Messages
3,780
The strongest point we can make is that objective evidence for underlying pathology exists, in particular PEM as shown by the Stevens protocol. So it's a physical illness, and therefore research needs to be held to certain standards. Until the GET/CBT crowd can prove that their approach can correct those pathologies they have absolutely nothing, and we need to stress this point.

Another important point is that according to other posters here, the CBT/GET crowd made some earlier studies where actometers or similar devices were used but showed no improvement.
 

biophile

Places I'd rather be.
Messages
8,977
@Bob. It goes further than being the authors of many of the reviewed studies and declared COI. It may be worth adding that several of the authors are currently embroiled in controversy over GET and have a stake in the outcome.
 
Messages
15,786
@Bob. It goes further than being the authors of many of the reviewed studies. It may be worth adding that several of the authors are currently embroiled in controversy over the review therapy (GET) and have a stake in the outcome.
It's not even a matter of having a stake - their research shows that nearly all of them have already made up their minds regarding GET. They already have an answer to the question which they are supposed to be investigating, before they even get started.

Aside from one member, they are not independent judges of the data. They are essentially lawyers who are all arguing the same side of a case. And as such, they'll collect data which supports their stance, and focus on creating arguments which discount any opposing evidence. And of course they will judge their own beliefs to be the correct ones!

Their pre-existing strongly-held opinions, as documented in their own research, are overwhelming proof that they are not unbiased regarding GET for CFS. Affiliations and profits might be useful in some situations for proving bias, but bias is already proven here even without those things.

Not that I think we shouldn't look for further indications of bias (we should!) but it shouldn't even be necessary. What on earth was the Cochrane Collaboration thinking in letting this group take on this project in the first place?!?
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
OK, I've set up an editable Google Docs page. Please 'like' this post or PM me if you want to help create the document, and I'll send you a link.

Do you think this development - and your work here - might result in an article? I think there is a lot of interest in this review, but its significance is hard to understand, and an article that explains matters succinctly would be of benefit to patients. If someone were willing/able to write something, that is. Thanks.
 

biophile

Places I'd rather be.
Messages
8,977
http://www.bmj.com/content/315/7109/672

Several tips on where things can go wrong, including:

Experts, who have been steeped in a subject for years and know what the answer "ought" to be, are less able to produce an objective review of the literature in their subject than non-experts.[5,6] This would be of little consequence if experts' opinions could be relied on to be congruent with the results of independent systematic reviews, but they cannot.[7]

full-text for those without BMJ access:

http://www.vhpharmsci.com/decisionm...-systematic reviews and meta-analyses-BMJ.pdf
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
The strongest point we can make is that objective evidence for underlying pathology exists, in particular PEM as shown by the Stevens protocol. So it's a physical illness, and therefore research needs to be held to certain standards. Until the GET/CBT crowd can prove that their approach can correct those pathologies they have absolutely nothing, and we need to stress this point.

Another important point is that according to other posters here, the CBT/GET crowd made some earlier studies where actometers or similar devices were used but showed no improvement.

Yes, this is about pushing for objective outcome measures. They exist, but until now very few have used them. These need to become standard research practice in intervention studies. However, the two-day CPET will add thousands of dollars in cost for each patient, considerably increasing the cost of studies, or limiting their size even further. This is about funding too. Those applying for grants need to stress that they will be using objective outcome measures, not the subjective ones from past studies. However, there have only been a small handful of non-psychiatric intervention studies to which this would apply.
 
Messages
32
My apologies if this has already been covered.

The review fits perfectly into plans laid some 9 years ago. Three (at least) of the authors were present at a meeting described by Williams and Marshall in Proof Positive: http://www.meactionuk.org.uk/proof_positive.htm

I recommend reading the whole document and others prepared by Williams/Marshall/Hooper, especially 'Magical Medicine'. Proof Positive includes:

"Wessely said: “Mansel Aylward, you are involved with policy definitions. What have you heard here that might influence your Secretary of State?”

Aylward said: “I have been given a lot of information that reinforces some of the messages that I have passed on to decision makers. We had some great difficulty last year persuading certain people that the way forward in the more effective assessment of disability and its management in people on State benefits lay more with a biopsychosocial approach. There seems to be an antipathy in some parts of Government towards anything without a hard evidence base. If the biopsychosocial approach is perceived in (such a) way, it is very difficult to get the Department of Health, amongst others in Government, to favour interventions and rehabilitation adopting the biopsychosocial approach. But in recent months I’m beginning to see a change”.

Wessely: “What made some of the policy makers change their views?”

Aylward: “Systematic reviews of the literature garnering evidence to support the biopsychosocial concept. Recent meetings of focus groups of key opinion makers (now) support ---with authoritative and expert opinion --- the value of biopsychosocial approaches. There are going to be some developments soon. The key aspect has been effectively communicating this in a far more robust and authoritative way”.

It is noted here that Aylward used the words expert “opinion”, not expert “evidence”."
 

Sidereal

Senior Member
Messages
4,856
This study will end up being a big problem for the ME community in the future, worse than the Lancet PACE paper. I apologise in advance for the long post.

Systematic reviews and meta-analyses of the literature are considered the gold standard of "evidence based medicine" by doctors and policymakers. If this meta-analysis shows that CBT/GET are effective treatments for CFS - and it will most certainly show that, for reasons I will explain below - it will be used to further enforce the pragmatic rehabilitative "treatment" approaches based on the biopsychosocial model. No one will read the actual paper to discover that these studies used laughable inclusion criteria and dodgy self-reported outcome measures (questionnaires of subjective "fatigue"), and that where objective outcomes were used, there was no improvement in patients' actual physical activity and no improvement (or indeed a worsening) in reliance on state disability payments. The efficacy of GET will be enshrined in what we will be told in doctors' offices is the highest level of evidence in the hierarchy of evidence based medicine - a meta-analysis of individual patient data.

In normal circumstances, a meta-analysis of individual patient data (IPD) is the highest quality type of meta-analysis because it collates the actual raw data for each individual patient who took part in the primary studies and includes those patient-level data in one big analysis. This technique increases statistical power and improves the reliability of the findings. You might be surprised to hear that IPD meta-analyses are infrequently done. Most meta-analyses rely on the published reports to extract data from, and they use statistical techniques to pool the group means and standard deviations from the various studies (in other words, most meta-analyses collate statistical averages from the published studies). This approach has a number of limitations, of course, and various statistical techniques have been developed to deal with them. An individual patient data meta-analysis avoids many of these pitfalls, but it is not normally done because it requires obtaining actual raw datasets from the researchers who conducted the primary studies, and researchers generally do not want to share their raw data with people doing meta-analyses. This group, however, is in a position to do an IPD meta-analysis since they carried out some (most?) of the studies that will be included, so of course they won't need to hand the datasets over to an independent group doing the meta-analysis or rely on many other groups to send them the data.
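For anyone curious about the mechanics, here is a minimal sketch (in Python, with entirely made-up numbers) of the conventional aggregate-data approach described above, where each study contributes only a summary estimate that gets pooled with inverse-variance weights; an IPD analysis would instead re-analyse the raw per-patient values from every trial.

import math

# (mean difference, standard error) for each study -- illustrative figures only
study_summaries = [(0.10, 0.22), (0.25, 0.18), (-0.05, 0.30), (0.18, 0.25)]

# Fixed-effect inverse-variance pooling of the study-level summaries
weights = [1 / se ** 2 for _, se in study_summaries]
pooled_diff = sum(w * d for (d, _), w in zip(study_summaries, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled mean difference: {pooled_diff:.3f} (standard error {pooled_se:.3f})")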

If IPD meta-analysis is the gold standard, why am I so worried about this being done on CFS? Well, this technique increases the statistical power to detect very small effect sizes, and when you collate data from something like 1,000 patients, even trivial effect sizes reach statistical significance. This can turn a bunch of negative primary studies and a handful of weakly positive studies into a statistically significant pooled effect.
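To make that concrete, here is a rough simulation with purely illustrative assumptions (a small standardised effect of 0.2, thirteen trials of 40 patients per arm, roughly 1,000 patients in total; none of these figures come from the actual CFS trials). With these assumptions most individual studies will usually look "negative" while the pooled analysis will usually reach statistical significance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2               # assumed small standardised effect (Cohen's d)
n_per_arm, n_studies = 40, 13   # ~1,040 patients in total across both arms

pooled_treatment, pooled_control = [], []
significant_on_their_own = 0
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    if stats.ttest_ind(treatment, control).pvalue < 0.05:
        significant_on_their_own += 1
    pooled_treatment.extend(treatment)
    pooled_control.extend(control)

pooled_p = stats.ttest_ind(pooled_treatment, pooled_control).pvalue
print(f"studies significant on their own: {significant_on_their_own}/{n_studies}")
print(f"p-value when all patients are pooled: {pooled_p:.4f}")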

There are published examples of this in other fields. I remember reading this meta-analysis a few years ago and shaking my head. :rolleyes: Basically, lamotrigine (an anticonvulsant drug), which a bunch of negative studies had shown to have no effect on depression in bipolar disorder, suddenly becomes an effective treatment according to an individual patient data meta-analysis. How many individuals in possession of a prescription pad are capable of appraising this paper, or even know what a relative risk of 1.27 means? Needless to say, in the real world, this treatment has no efficacy for depression for most people.
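For a sense of scale, here is some back-of-the-envelope arithmetic for a relative risk of 1.27; the 35% control-group response rate is an assumption picked purely for illustration, not a figure from that meta-analysis.

# 35% control response rate is an illustrative assumption only
control_response = 0.35
relative_risk = 1.27

treated_response = control_response * relative_risk        # ~0.44
absolute_difference = treated_response - control_response  # ~0.09
number_needed_to_treat = 1 / absolute_difference           # ~11

print(f"treated response rate: {treated_response:.1%}")
print(f"absolute risk difference: {absolute_difference:.1%}")
print(f"number needed to treat: about {number_needed_to_treat:.0f}")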

Same with CBT/GET for "CFS". Some studies show a modest effect; some find it utterly useless, like the FINE trial, where (if I recall correctly) not even statistical significance could be detected, let alone clinical significance. This meta-analysis will collect all that trash in one heap and get an effect out of it thanks to statistical magic.

Cui bono?

We live in very strange times. Just the other day I watched the presentation Dr Van Ness did in the UK recently that justy posted on another thread. We now have objective physical evidence that aerobic exercise is useless or harmful for bona fide ME patients. In severe ME patients, as all of us who have been there know, exercise is quite literally torture. Yet we have these two paradigms coexisting at the same time. One, supported by psychiatry, psychology, the state and vested financial interests, is telling nurses and doctors to put their ME patients on exercise machines on the basis of some poor quality psychiatric studies. Meanwhile, there is this research going on in parallel that is showing with objective tests like CPET that the aerobic system is literally broken in this illness. It's just surreal.

By the way. Patients and doctors (doctors are not scientists contrary to what many people think) often say how we need big studies with lots of patients etc. Actually, big studies are only needed where the treatment is not very good and produces a small effect which is difficult to detect in a small sample. (Treatment harms are a different matter. You may not know a treatment is harmful until it's unleashed on the general population, i.e. large numbers of people, SSRI antidepressants for example, because if the frequency of adverse events is low, a randomised controlled trial will often not be able to detect those harms.) However, when it comes to the efficacy of treatments that obviously work, like parachutes, which are known to prevent death and injury when jumping from heights (great parody in BMJ here), doing big multicentre randomised controlled trials is stupid, harmful and unethical.
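To illustrate the point about effect size and trial size, here is a quick sketch using the standard normal-approximation sample-size formula for a two-arm trial at 80% power and alpha = 0.05; the effect sizes are generic Cohen's d values, not figures from any CFS study.

from scipy.stats import norm

alpha, power = 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_power = norm.ppf(power)           # ~0.84

for d in (1.2, 0.8, 0.5, 0.2):      # very large, large, medium, small effects (Cohen's d)
    n_per_group = 2 * (z_alpha + z_power) ** 2 / d ** 2
    print(f"effect size d = {d}: roughly {n_per_group:.0f} patients per group")

A treatment with a parachute-sized effect needs only a handful of patients per group; only the small effects down at d = 0.2 need the hundreds of patients per arm that these trials recruit.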

A treatment as worthless as CBT/GET needs a meta-analysis like this to show an effect.

"If a treatment has an effect so recondite and obscure as to require meta-analysis to establish it I would not be happy to have it used on me." - H.J. Eysenck :rofl:
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Let me stress again, any statistical analysis is about determining the probability that something is due to chance. Results that are biased, or due to poor methodology, do not have to be due to chance. The results can be anything up to highly significant, but the effect sizes are more likely to be low. Outcome measures are just one critical factor that can influence bias.

A meta-analysis of a whole lot of highly biased studies can just confirm it's not due to chance ... but it does not rule out bias or poor methodology. Combining studies reduces one risk factor: small sample size. It does not reduce others, and may introduce new risks.
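As a rough illustration of this point (made-up numbers, not real trial data): if every study shares the same systematic bias, say unblinded self-report inflating the treatment arm's scores, then pooling the studies only makes the biased effect look more "significant"; it does nothing to remove the bias.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, shared_bias = 0.0, 0.3   # assumed: no real effect, constant reporting bias
n_per_arm, n_studies = 40, 13

treatment_all, control_all = [], []
for _ in range(n_studies):
    treatment_all.extend(rng.normal(true_effect + shared_bias, 1.0, n_per_arm))
    control_all.extend(rng.normal(0.0, 1.0, n_per_arm))

pooled = stats.ttest_ind(treatment_all, control_all)
# The pooled p-value will typically be tiny even though the true effect is zero
print(f"pooled p-value with zero true effect but shared bias: {pooled.pvalue:.2e}")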

These results are then translated to advice for clinical practice. That translation process can also be flawed.
 

user9876

Senior Member
Messages
4,556
By the way. Patients and doctors (doctors are not scientists contrary to what many people think) often say how we need big studies with lots of patients etc. Actually, big studies are only needed where the treatment is not very good and produces a small effect which is difficult to detect in a small sample.

I can see larger trials can be useful for ill-defined illnesses where there may be multiple different biological pathways at work, all causing similar clusters of symptoms. Say there are 13 different pathways and a treatment only works for 1; then in a small trial you stand a good chance of not getting anyone with that particular pathway, thereby discounting a treatment that would work well for some people. But this is also where stats tend to fall down, in that the mean effect would be small even though it is big for a few people.
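As a back-of-the-envelope sketch of that (all numbers made up): with 13 equally common pathways and a treatment that fully works for just one of them, the average effect gets diluted roughly 13-fold, and a small trial arm can easily contain nobody from the responsive subgroup.

subgroups = 13              # assumed number of distinct pathways
responder_effect = 1.0      # assumed effect size in the one responsive subgroup
arm_size = 20               # assumed size of a small trial arm

# The average effect across all patients is diluted by the subgroup structure
diluted_mean_effect = responder_effect / subgroups
print(f"diluted mean effect size: {diluted_mean_effect:.2f}")

# Chance that a small trial arm contains nobody from the responsive subgroup
p_no_responders = (1 - 1 / subgroups) ** arm_size
print(f"chance of zero responders in a {arm_size}-patient arm: {p_no_responders:.0%}")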
 

biophile

Places I'd rather be.
Messages
8,977
IIRC, none of the GET studies so far have used objective outcomes to confirm the assumed increases in activity. This is astounding considering that the main and most controversial premise of GET is to increase total activity levels. At least 4 studies which combined GET with CBT did use actometers and found that there were no such increases. Previous reviews have excluded such GET studies from analysis because they were combined with CBT and therefore not pure GET. The CBT/GET study by Nunez et al published in 2011 (the one which found CBT/GET worsened SF-36 physical function and bodily pain scores in the intervention group) has been excluded from reviews for the same reason. 3 CBT/GET trials which used actometers did not publish that data until years after the first paper, so perhaps other trials have unpublished data too.

CBT/GET are commonly presented as therapies which challenge patients' supposedly irrational avoidance of activities by gradually increasing activity levels. We are told that this is safe because post-exertional symptomatology is primarily due to deconditioning rather than a disease process which contraindicates such exercise. However, current research demonstrates that CBT/GET does not overcome the activity ceiling, and other research is showing that there are abnormal biological responses to exercise which cannot be explained by deconditioning. I'm concerned that the misleading presentation of GET is going to be perpetuated with a review conducted by those who did the primary studies and started the misrepresentation.

Not to mention the lax case definitions, small effect sizes possibly being presented as more clinically significant than they really are, and conflicts of interest of the reviewers (almost all are GET proponents, some stauncher than others).
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I can see larger trials can be useful for ill-defined illnesses where there may be multiple different biological pathways at work, all causing similar clusters of symptoms. Say there are 13 different pathways and a treatment only works for 1; then in a small trial you stand a good chance of not getting anyone with that particular pathway, thereby discounting a treatment that would work well for some people. But this is also where stats tend to fall down, in that the mean effect would be small even though it is big for a few people.

This is an argument for large trials with subgroup analysis. I have written about this before. It's a good idea, but the cost means it's unlikely to happen, particularly on our stupidly low budgets due to research funding disinterest. What might happen is the identification of subgroup markers from broader research, and then the use of those markers in smaller trials. Otherwise it's more likely to be chance that a proper subgroup is identified. It was chance that led to Rituximab being identified as a possible ME or CFS treatment.

Once we get a better grasp on the pathophysiology, hypothetical subgroups are likely to be specifically targeted for treatment, probably starting with small pilot studies.