• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

NICE guideline for ME/CFS is unethical – Dr Diane O’Leary, Kennedy Institute of Ethics | 23 August 2

Barry53

Senior Member
Messages
2,391
Location
UK
The test of materiality is whether, in the circumstances of the particular case, a reasonable person in the patient’s position would be likely to attach significance to the risk, or the doctor is or should reasonably be aware that the particular patient would be likely to attach significance to it
[My underline]
Patients fully informed would hopefully not choose to undertake CBT/GET. And medics fully informed of the facts would hopefully not recommend their patients do CBT/GET so in effect you are undermining the whole basis of CBT/GET and the NICE guidelines as they stand.
An additional point worth clarifying I think, is that the test of materiality excludes a doctor's unawareness as a reason for not adequately informing a patient, if the doctor should reasonably be expected to have that awareness.
 

Barry53

Senior Member
Messages
2,391
Location
UK
The NICE guidelines are not so much unethical as unjustified on medical evidence grounds.
But if the NICE guidelines are unjustified on medical evidence grounds, isn't that itself unethical? To "recommend" (in effect to enforce) medical treatments that have no justification, feels highly unethical to me.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Now, many here have posted their complaints about some of the studies showing that CBT/GET is the best treatment, but those kinds of complaints assume that there are significant studies on each side of the issue: that there is a real scientific controversy.

Do you understand why blinding is so important in pharmacological trials?

You seem to be unaware of how unreliable unblinded studies without objective outcomes are and why they are not considered evidence of efficacy by the FDA. (for approval of a pharmacological therapy)

These CBT and GET studies aren't measuring anything meaningful, merely a bias in questionnaire answering behaviour (because there is no objective evidence of increased activity or neuropsychological functioning). Double blinding is used to control for this bias in pharmacological trials.

If these studies were drug trials, the drug would not be approved due to lack of evidence. The real scientific controversy is why there is such a big double standard: methods that are not considered trustworthy for pharmacological trials or "alt-med" are considered trustworthy in psychology.
Stop pretending that unblinded studies (without meaningful objective outcomes) provide high quality evidence and answer the question we keep asking you: Why are rigorous scientific standards not applied to psychological therapies?
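To illustrate the point about blinding (with entirely invented numbers, not data from any trial discussed here): under a treatment that does nothing physiologically, an unblinded trial with a subjective questionnaire outcome can still show an apparent benefit purely from reporting bias.

```python
import random

random.seed(1)
N = 100

TRUE_EFFECT = 0.0      # by construction, the therapy does nothing physiologically
REPORTING_BIAS = 8.0   # invented value: unblinded participants rate themselves better

def trial(blinded):
    """Mean change on a hypothetical 0-100 fatigue questionnaire."""
    changes = []
    for _ in range(N):
        change = TRUE_EFFECT + random.gauss(0, 5)  # measurement noise
        if not blinded:
            change += REPORTING_BIAS  # expectation / social-desirability bias
        changes.append(change)
    return sum(changes) / N

print(f"blinded trial, mean change:   {trial(blinded=True):+.1f}")
print(f"unblinded trial, mean change: {trial(blinded=False):+.1f}")
```

The "improvement" in the unblinded arm is entirely the bias term; double blinding removes it by making the bias equal in both arms.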
 

Jonathan Edwards

"Gibberish"
Messages
5,256
But if the NICE guidelines are unjustified on medical evidence grounds, isn't that itself unethical? To "recommend" (in effect to enforce) medical treatments that have no justification, feels highly unethical to me.

I think you can argue this both ways but my point was that it is a distraction to suggest that there is necessarily some unethical starting position involved in NICE's decision. They have failed to judge the evidence appropriately and that should be corrected, but that is not necessarily unethical.

As an example, when I was a junior doctor digitalis was given as standard treatment for heart failure. Around 1975 studies showed that digitalis only improves outcome if the heart failure is due to atrial fibrillation. In other cases it is more likely to worsen outcome. Things may have changed again since. Looking back, the evidence for digitalis being useful in general heart failure was probably based on weak evidence. But I do not think anyone would claim that its use in 1970 was unethical, just ill-informed. In the case of recommending CBT for ME/CFS one might argue that it would be unethical for NICE not to adequately inform itself. But NICE is not a person and if 'experts to hand' indicate that the evidence for CBT is good then it is hard to argue that whoever at NICE adjudicates is acting unethically.
 

Valentijn

Senior Member
Messages
15,786
They have failed to judge the evidence appropriately and that should be corrected, but that is not necessarily unethical.
Doing a poor job is unethical. At least, that's what we learned in law school ethics classes. It's not just taking the client's money which is earmarked for other things, or stealing documents from the opposition. It can also be a failure to put into the job the care that it requires for it to be done properly.
 

Cheshire

Senior Member
Messages
1,129
I think you can argue this both ways but my point was that it is a distraction to suggest that there is necessarily some unethical starting position involved in NICE's decision. They have failed to judge the evidence appropriately and that should be corrected, but that is not necessarily unethical.

As an example, when I was a junior doctor digitalis was given as standard treatment for heart failure. Around 1975 studies showed that digitalis only improves outcome if the heart failure is due to atrial fibrillation. In other cases it is more likely to worsen outcome. Things may have changed again since. Looking back, the evidence for digitalis being useful in general heart failure was probably based on weak evidence. But I do not think anyone would claim that its use in 1970 was unethical, just ill-informed. In the case of recommending CBT for ME/CFS one might argue that it would be unethical for NICE not to adequately inform itself. But NICE is not a person and if 'experts to hand' indicate that the evidence for CBT is good then it is hard to argue that whoever at NICE adjudicates is acting unethically.

I think there's a difference between a doctor applying bad guidelines without searching more evidence and a group of people writing the guidelines only checking abstracts.
 

user9876

Senior Member
Messages
4,556
I think you can argue this both ways but my point was that it is a distraction to suggest that there is necessarily some unethical starting position involved in NICE's decision.

I think the way NICE are handling issues with a lack of transparency is unethical. They are basically saying patients are too violent to be allowed to know who is assessing data. They are avoiding scrutiny through the mechanisms that parliament has set up.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Having read the full thread it seems that joshualevy is suggesting that some adequate studies with objective outcomes have come out.

None of the studies with comparison groups have shown any worthwhile findings at long term followup.

(the following is a summary from my notes, if anything notable is left out, I'd appreciate if someone would mention it!)

But short term findings have been noted in Graded Exercise Therapy trials:

PACE trial (Oxford criteria) showed a trivial mean increase in distance walked on the 6 minute walking test for graded exercise therapy (67m increase to 379m) (I have discussed elsewhere why I think the 6MWD is unreliable). The PACE trial found no difference in fitness as measured by the step test.

The following pilot studies of GET:

Moss Morris (1994 CDC criteria) 2005, baseline to post intervention mean VO2Peak dropped from 31.99 to 27.21 (ml/kg/min) in the exercise group, 31.02 to 25.08 (ml/kg/min) in the standard medical care group.

Wallman (1994 CDC criteria) 2004, baseline to post intervention found an increase in mean VO2Peak, 15.6 to 17.1 (ml/kg/min) in the exercise group, 15.8 to 14.4 (ml/kg/min) in the 'relaxation' group. (note that the RER and blood lactate were higher, reflecting that the patients in the exercise group simply worked harder on the test due to higher motivation).
Wallman also found an improvement on the 95-question Stroop test; however, this was not significant for the 83-question version of the test (curious!?!).

Fulcher & White (Oxford Criteria) 1998 (baseline to post intervention) found an increase in mean VO2Peak 31.8 to 35.8 (ml/kg/min) in the exercise group and 28.2 to 29.8 (ml/kg/min) in the 'flexibility' group. (both groups had increased blood lactate reflecting that they both worked harder)

Morriss & Wearden 1998 (Oxford criteria) which compared various combinations of Fluoxetine (double blinded) & exercise, found modest increases in VO2 Peak, but final results were still lower than the non-exercise & placebo group! (all of the following values are (ml/kg/min)):
Baseline results: Exercise+Fluoxetine 23.1, Exercise+Placebo 19.9, appointments+Fluoxetine 22.7, appointments+Placebo 26.0
Post intervention: Exercise+Fluoxetine 25.1, Exercise+Placebo 22.7, appointments+Fluoxetine 20.9, appointments+Placebo 25.9
This study also noted high dropout rates in the exercise groups, suggesting bias.

__________________________________________________________

The variable results on fitness/exercise testing (and the given RERs, peak heart rates etc.) suggest biases in motivation, and thus are not reflections of true performance, nor of any real reduction or increase in exercise capacity. Exercise therapy trials also have substantial participation biases: only those who are motivated and capable of exercising choose to participate and avoid dropping out.
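The inconsistency is easy to see if the figures quoted above are tabulated as baseline-to-post changes (a quick sketch using only the numbers already listed in this post):

```python
# VO2Peak (ml/kg/min) as (baseline, post-intervention), figures quoted above.
studies = {
    "Moss-Morris 2005":   {"exercise": (31.99, 27.21), "control": (31.02, 25.08)},
    "Wallman 2004":       {"exercise": (15.6, 17.1),   "control": (15.8, 14.4)},
    "Fulcher/White 1998": {"exercise": (31.8, 35.8),   "control": (28.2, 29.8)},
}

for study, groups in studies.items():
    for group, (pre, post) in groups.items():
        print(f"{study:18s} {group:8s} change: {post - pre:+.2f} ml/kg/min")
```

The exercise groups go down in one study and up in two others, which is hard to square with any consistent training effect.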

Also note that the Wallman study (from Australia) described the therapy as "Graded exercise with Pacing".
Wallman 2004 said:
Subjects were instructed to exercise every second day, unless they had a
relapse. If this occurred, or if symptoms became worse, the next exercise session was shortened or cancelled. Subsequent exercise sessions were reduced to a length that the subject felt was manageable. This form of exercise, which allows for flexibility in exercise routines, is known as pacing.

The Dutch groups performed two meta analyses and found no difference in activity levels and neuropsychological testing across the Dutch CBT trials. (Wiborg 2010, Goedendorp 2013)

In terms of employment outcomes at long term followups, inconsistent reporting (eg not reporting the same measurement at baseline and followup) was common. Of those reporting consistent outcomes at long term followup, almost none reported improvements in employment outcomes between groups. (Huibers et al., PACE Trial, Deale et al., Bazelmans et al., Sharpe et al., Akagi et al. and special note of therapy in practice: Belgian clinical audit)
 

Valentijn

Senior Member
Messages
15,786
Moss Morris (1994 CDC criteria) 2005, baseline to post intervention mean VO2Peak dropped from 31.99 to 27.21 (ml/kg/min) in the exercise group, 31.02 to 25.08 (ml/kg/min) in the standard medical care group.

Fulcher & White (Oxford) 1998 (baseline to post intervention) found an increase in mean VO2Peak 31.8 to 35.8 (ml/kg/min) in the exercise group and 28.2 to 29.8 (ml/kg/min) in the 'flexibility' group. (both groups had increased blood lactate reflecting that they both worked harder)

Note that CPET scores in the above studies were normal at baseline, meaning they were not deconditioned in the slightest. These were likely physically healthy subjects, and the Fulcher study featured a lot of patients with psychiatric diagnoses and who were taking psychiatric medications during the trial. Hence the increase in scores was likely reflective of physically healthy people (not ME/CFS patients) doing an exercise program.
 

Sean

Senior Member
Messages
7,378
PACE trial (Oxford criteria) showed a trivial mean increase in distance walked on the 6 minute walking test for graded exercise therapy (67m increase to 379m)
From memory (quote at your peril, supply your own references, and corrections welcome):

Subtracting the contribution from the SMC comparison arm = 35m gain that can be attributed to GET on the 6MWT.

This very modest result was only for one arm (GET), was the only positive result on all objective measures in PACE, and was not tested let alone confirmed at long-term follow-up (2.5 years).

The result did not reach PACE's own definition of clinically significant, and still left patients in that arm (working age range, average age 40) scoring very poorly in comparison to the healthy working age population, with a performance level below Class III heart failure and struggling to even match the average for the retired age population.

That was after a year of therapy. Yet studies on re-conditioning in other diseases rarely (if ever?) show more than a few weeks of exercise therapy are required to restore normal healthy levels of basic conditioning (once their underlying primary pathology is dealt with).

As PACE only supplied the group mean figures without the data plots we don't know if that very modest result is biased due to a small number of patients making substantial gains that are not generalisable to all patients (i.e. we don't know the median effect size). The lack of the data plots also means we don't know the correlation status between 6MWT and other outcomes, objective or subjective.
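The point about group means hiding a skewed distribution can be illustrated with invented numbers (nothing here comes from PACE itself): a handful of large responders can pull the mean well above what the typical patient experienced.

```python
import statistics

# Hypothetical 6MWT change scores (metres) for 20 patients: most barely
# change, three improve a lot. All values are invented for illustration.
changes = [0, -5, 3, 2, -2, 1, 0, 4, -3, 2, 1, 0, -1, 2, 3, 150, 180, 120, 2, 1]

print("mean change:  ", statistics.mean(changes))    # 23.0, pulled up by 3 outliers
print("median change:", statistics.median(changes))  # 1.5, the typical patient
```

This is why the missing data plots (and the median) matter: the same mean is compatible with "everyone improved a little" and "a few improved a lot, most not at all".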

The 6MWT result could be down to nothing more than chance, given 1) it is the only objective measure to report benefit, 2) only does so for one arm, 3) which has the highest rate of drop-outs for the 6MWT, 4) is a small effect size, 5) below clinical significance, 6) and other trials of GET (in various forms) have failed to report long-term benefit on any objective measure (nor any subjective measure either, I think).

All the above being further confounded by the over-inclusive and low-specificity patient selection criteria (Oxford, retired).

Add all that up, and by any possible real-world interpretation the 6MWT result for GET is of no practical benefit to patients, and offers no support for the psycho-behavioural causal model being tested by PACE, not even at the secondary level, let alone primary.

(Worth noting that the PACE authors have persistently downplayed or ignored their own objective results and the implications, explicitly including the 6MWT. Which begs the question of why they bothered using them in the first place?)
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I think there's a difference between a doctor applying bad guidelines without searching more evidence and a group of people writing the guidelines only checking abstracts.

I think it is likely to be a myth that just reading abstracts is the problem. You can tell that PACE provides no useful evidence from the abstract anyway. Abstracts are a pretty reliable guide to quality if you are familiar with how to judge them. Trawling for abstracts that suggest evidence would be entirely reasonable. If the abstract looked to provide evidence but you wanted to check detail then it would be fair to expect to read the whole paper but in this case there is no need.

The problem seems to be a different one and that is that the people making the assessment are not competent to do so. That is pretty standard at NICE in my experience but I am still not convinced that the people making the decisions are necessarily acting unethically. An incompetent person given a task that they do badly is not necessarily acting unethically. For sure there looks to be an ethical problem somewhere in the system, and I am sure there is, but I am not convinced that it is going to be helpful to point that out to the people on the ground at NICE, particularly if it is couched in terms of rights to biological treatments that do not actually exist.

I certainly agree that there is a lack of transparency that appears unethical.
 
Messages
80
The problem seems to be a different one and that is that the people making the assessment are not competent to do so. That is pretty standard at NICE in my experience but I am still not convinced that the people making the decisions are necessarily acting unethically. An incompetent person given a task that they do badly is not necessarily acting unethically.

I think this right here is the unethical part. If you have to make a decision affecting the health of a lot of people in (potentially very) dire situations and you do not bring in someone with the necessary expertise or say 'I cannot in good conscience make an informed decision here' (to your boss, if necessary), then your actions are very dubious from an ethical standpoint.

If I applied to be a brain surgeon tomorrow, even if I had the necessary credentials, and botched every single assignment because my hands shake a lot under pressure, I should not have taken that job, or at the very least should realize that I am incapable of doing what is asked of me before I endanger someone. I realize that this is not how government agencies go about their business a lot of the time, but that does not make it any less wrong - it makes matters worse, if anything.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I think this right here is the unethical part. If you have to make a decision affecting the health of a lot of people in (potentially very) dire situations and you do not bring in someone with the necessary expertise or say 'I cannot in good conscience make an informed decision here' (to your boss, if necessary), then your actions are very dubious from an ethical standpoint.

But if the person concerned does not have insight into their incompetence - which presumably applies here - then they cannot be expected to call someone else in. They think they have already called in the right people - who say the evidence for CBT is good. People are remarkably dumb and lacking in insight very often. I suspect nobody at NICE quite realises that they have double standards - one for drugs and one for therapists that is very different.
 

A.B.

Senior Member
Messages
3,780
...good point. I guess they do not have the instant feedback a neurosurgeon would have either.

They do, but there has been a campaign to discredit the critics. First by claiming that patients are delusional and suffer from distorted perception of reality, second by claiming that PACE critics are violent and irrational. The Wesselites prepared this well in advance.
 

Valentijn

Senior Member
Messages
15,786
(Worth noting that the PACE authors have persistently downplayed or ignored their own objective results and the implications, explicitly including the 6MWT. Which begs the question of why they bothered using them in the first place?)
Because they were forced to put together a decent protocol to sell the trial to patients and at least some of the funders. They dealt with the expected null objective outcomes later by gutting them. Perhaps funding should be dependent on a contractual protocol, where quacks have to give the money back if they deviate from it :nerd: It really is a bait-and-switch, and suggests there was likely dishonesty involved when they presented the original protocol.

An incompetent person given a task that they do badly is not necessarily acting unethically.
Even (especially!) if they don't know they're incompetent, they still get punished with removal of their license to practice, or other restrictions and requirements.
 

Barry53

Senior Member
Messages
2,391
Location
UK
PACE trial (Oxford criteria) showed a trivial mean increase in distance walked on the 6 minute walking test for graded exercise therapy (67m increase to 379m) (I have discussed elsewhere why I think the 6MWD is unreliable). The PACE trial found no difference in fitness as measured by the step test.
There is another issue here I wonder about. Across all arms of the PACE trial there were a lot of dropouts from the 6mwt, and we have no available data clarifying why. Various possibilities come to mind:-
  • People may have felt unable to do the final 6mwt.
  • Investigators may have been "less than encouraging" to people they thought might produce poor results. Not an option I would once have even thought of, but now for PACE seems a worryingly non-trivial possibility.
  • People may have felt too demotivated to do it.
  • Other things I haven't thought of?
My concern is that some (maybe many) of these dropout participants might effectively constitute "negative improvement" values in the 6mwt data. If you sum a set of numbers but omit some of the negative values, then you would clearly bias the result upwards. I imagine this is a potential issue with many trials, and strategies must have been developed to manage such bias, but what was the situation with PACE?
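The dropout concern can be sketched with invented numbers (purely illustrative, not PACE data): if patients whose walking distance got worse are more likely to skip the final 6MWT, the mean among completers overstates the true mean, even when the therapy has no effect at all.

```python
import random
import statistics

random.seed(7)

# Hypothetical change scores (metres) for 100 patients, centred on zero:
# by construction the "true" effect of the therapy is nil.
true_changes = [random.gauss(0, 40) for _ in range(100)]

# Assumption for illustration: patients who got worse are much more
# likely to drop out before the final walking test.
def completed_final_walk(change):
    p_dropout = 0.6 if change < 0 else 0.1
    return random.random() > p_dropout

observed = [c for c in true_changes if completed_final_walk(c)]

print(f"true mean change, all patients:    {statistics.mean(true_changes):+.1f} m")
print(f"mean change, completers only:      {statistics.mean(observed):+.1f} m")
```

Standard ways to handle this (intention-to-treat analysis, imputation of missing outcomes, sensitivity analyses) exist precisely because the completers-only mean is biased upwards under this kind of informative dropout.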

Being as there were many dropouts from all the trial arms for the 6mwt, it is possible there would still be a minor positive GET effect compared to the other arms. But it is also possible the unbiased result would be less favourable to GET.

I appreciate you cannot use results that were not collected, possibly for unavoidable reasons. But that still doesn't mean any potential bias from uncollected results should not be addressed.

To clarify: I don't actually know whether PACE factored these dropouts into the calculations correctly, but I'd be interested to hear the opinions of people better qualified than I am. And does the reanalysis fully account for it?
 