• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Jonathan Edwards: PACE team response shows a disregard for the principles of science

user9876

Senior Member
Messages
4,556
In ME/CFS the psychiatrists/psychologists of the BPS school start with the axioms

I understand what you are saying, but I don't think they have the formality of thought to lay out an argument in that way. So the beliefs are not axioms that they build on, but just biases that creep into their thinking.

The nice thing about maths is that you can (OK, Gödel permitting) prove a theorem by building a sequence of steps, each of which is demonstrated.

Reading things like Wessely's papers, there are not small steps in reasoning but massive leaps and over-generalizations (by this I mean things like: statement X was tested over Y, and hence the conclusion that there does not exist an A which is satisfied within the set B, where X is a member of A and Y is a member of B). In other words, it's not just the axioms but the whole reasoning system.
 
Messages
2,391
Location
UK
It is very clear to me that the psychiatry/clinical psychology world has not faced up to this problem. For reasons I will not elaborate, relating to my article, it became clear that even those who might criticise PACE from within psychology would prefer not to admit that 'not good enough is just not good enough' when it comes to psychological trials.
Quite so. The spurious argument that something must be right simply because it is the best there is (even when the best is still rubbish, possibly unsafe) itself epitomises reliance on emotive language and the downplaying of objectivity.

As has been discussed by PR members on a previous thread there may actually be something psychiatrists can learn directly from rheumatology in terms of outcome measures. In rheumatoid arthritis (RA) the American College of Rheumatology devised a very clever system of scoring that gives one nearly the best of both worlds.
Agreed. Genuine collaboration between different mind sets can often result in step changes of knowledge and understanding. But of course it cannot happen without the willingness of all parties. And as you also intimate, I think there would be a great fear of bringing their house of cards crashing down, given that so much of their clinical foundations could be built on sand.

These scoring systems are not the sort of bogus pseudo numerical scores you get with things like SF36 where you add up unrelated marks. There is no arithmetic involved. You just have to pass a chosen threshold twice (or in fact five times for ACR20). The relevant maths is the maths of confidence in the validity of the result, not the pseudo maths of additive scoring.
Interesting. Like an illusionist's trick: adding things to get an impressive-looking result where there is no real underlying additive relationship between them. The thing that is not being summed is confidence (or maybe they are effectively adding lots of zero-confidences).
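To make the contrast concrete, here is a minimal sketch (in Python, with invented patient numbers) of an ACR20-style responder criterion. The thresholds follow the published ACR20 definition: at least 20% improvement in both joint counts, plus at least 20% improvement in at least three of five further core-set measures. The data values are made up purely for illustration.

```python
# Sketch of a threshold/responder criterion, as opposed to an additive
# score like SF36. The patient data below are invented for illustration.

def improved_20pct(baseline, followup):
    """True if the measure improved by at least 20% from baseline."""
    return baseline > 0 and (baseline - followup) / baseline >= 0.20

def acr20_responder(baseline, followup):
    """ACR20: >=20% improvement in tender AND swollen joint counts,
    plus >=20% improvement in at least 3 of 5 further measures.
    The components are never added into a composite score; we only
    count threshold passes."""
    core = ["tender_joints", "swollen_joints"]
    extras = ["patient_pain", "patient_global", "physician_global",
              "disability", "acute_phase_reactant"]
    if not all(improved_20pct(baseline[m], followup[m]) for m in core):
        return False
    passes = sum(improved_20pct(baseline[m], followup[m]) for m in extras)
    return passes >= 3

baseline = {"tender_joints": 20, "swollen_joints": 18, "patient_pain": 60,
            "patient_global": 70, "physician_global": 65, "disability": 1.5,
            "acute_phase_reactant": 30}
followup = {"tender_joints": 10, "swollen_joints": 12, "patient_pain": 55,
            "patient_global": 50, "physician_global": 45, "disability": 1.0,
            "acute_phase_reactant": 29}

print(acr20_responder(baseline, followup))  # prints True
```

The output is a single pass/fail judgement per patient; at no point are unrelated marks summed into a pseudo-numerical total.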
 
Last edited:

ladycatlover

Senior Member
Messages
203
Location
Liverpool, UK
This may not be the place to put this, but when did that stop me? ;) :)

When I was still well enough to be able to get to my Applied Psychology course (part time, each level over two years) I had a work placement with the HSE. It was really interesting, especially since the chap I was working with had a particular interest in ME, fibro and MCS - he was an Occupational Psychologist, so was interested in the effect of the workplace on illness and vice versa.

During that time I was also working on a (very small, only 10 patients and 10 controls) piece of research on ME/CFS as part of the Level 2 course - I had intended to use it as a pilot study for my dissertation, but gave up long before that. The Occ Psych suggested I look at the Borg Fatigue Scale. I've just been looking at it again, and in fact it's the Borg Scale of Perceived Exertion. On looking at it again I wonder if it might be a useful scale to use in ME/CFS research.

I'm ashamed to say I can't remember if I tried to use it or not. :eek: :oops: :rolleyes: My work now resides in a box in the loft somewhere, or on floppy discs in a box in the loft, so I can't find it readily to check. The laptop I was using at the time failed massively after my husband knocked a glass of gin and tonic over it! :mad: And I'm afraid I wasn't terribly good at doing back-ups back then, because everything important was on floppy discs. :rolleyes:

The advantage of the Borg Scale is that it's very patient friendly - it's used, for example, to measure breathlessness in emergencies when the patient can't really talk due to breathlessness. Anything has to be better than the Chalder Fatigue Scale!

https://en.wikipedia.org/wiki/Rating_of_perceived_exertion

https://web.archive.org/web/20080131172946/http://www2.psychology.su.se/staff/gbg/index.html

https://scholar.google.co.uk/scholar?q=gunnar borg fatigue&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0ahUKEwju4fXi2oXTAhUoD8AKHeNhCs8QgQMIGDAA
 

user9876

Senior Member
Messages
4,556
The advantage of the Borg Scale is that it's very patient friendly - it's used, for example, to measure breathlessness in emergencies when the patient can't really talk due to breathlessness. Anything has to be better than the Chalder Fatigue Scale!

https://en.wikipedia.org/wiki/Rating_of_perceived_exertion

https://web.archive.org/web/20080131172946/http://www2.psychology.su.se/staff/gbg/index.html

https://scholar.google.co.uk/scholar?q=gunnar borg fatigue&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0ahUKEwju4fXi2oXTAhUoD8AKHeNhCs8QgQMIGDAA

The Borg scale is one of the secondary outcomes of PACE. From the protocol:
http://bmcneurol.biomedcentral.com/articles/10.1186/1471-2377-7-6
10.
The Borg Scale of perceived physical exertion [44], to measure effort with exercise and completed immediately after the step test.

But I'm not sure if I have seen it published.

They kind of leaked out the step test (which is also a secondary outcome)
9. The self-paced step test of fitness [43].
in a graph in one paper, but said it was vexatious to ask for the actual average values which were drawn on the graph. Of course, when in reply to Keith's editorial they claim they met the CONSORT guidelines, the non-reporting of secondary outcomes demonstrates that they have not (and a graph is not sufficient). This is defined as item 17a on the CONSORT checklist (http://www.consort-statement.org/checklists/view/32-consort/111-outcomes-and-estimation)

[EDIT]
They seem to have removed these as secondary measures within the statistical analysis plan, which just makes the plan look dodgy - as if they had seen, or got hints, that the data was not good. So I guess they would claim to be compliant with CONSORT because they dropped these from the stats plan.
 

RogerBlack

Senior Member
Messages
902
At some level, the 'proper' way to treat disease is to first fully determine the biological cause and process of the illness, and then develop some intervention which will certainly reverse this.

In the case of parasites, this is easy - you remove the parasite, whether by surgery, drugs or sprinkling salt on it.

In many fields, the precise mechanism of action is not understood when a treatment is administered - for example, Jenner's smallpox vaccination had no sane provable mechanism of action, and was only really provable some time shortly after 1950.

Statistics, however, were a simple, powerful, and inarguable proof - a tiny fraction of treated people died compared to the untreated group.

https://soundcloud.com/bmjpodcasts/...cognising-depersonalisation-and-derealisation was a good recent podcast that illustrates some of the problems with applying this approach to the brain.

It discusses a condition where, though you recognise that it is in fact you doing things, it doesn't feel like you.

The right measure for that at this time is subjective, and there is pretty much no alternative.

It's quite plausible that a full understanding of this condition will not occur even when we can fully and deeply simulate an entire brain.

Ask the patients what their primary problem is.

Asking them how much it's affecting them, and trying to come up with questionnaires asking how they feel about it, is a good step.

A graph from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3154208/ was recently posted on asthma and placebos, where subjective improvements with sham and placebo treatment were equal to those with the real drug, but objective improvement for the sham and placebo arms was zero.

This shows the problem with only using questionnaires about feelings about your condition - they need to be anchored in reality, or in simple concrete questions.
Not 'how do you feel about your maths ability' but 'what is 37*2'. Not 'what tasks can you do', but 'what did you do yesterday'. Not 'do you feel fatigue makes other people like you less'...

This is hard.
Arthritis was brought up earlier, and has lessons. Clinicians' assessment of x-ray/MRI/ultrasound images of joints might be seen as the gold standard.
At least in some cases, though, as I understand it, the apparent condition of the joints does not map well to functional changes.

I think it comes down to this: you've got to ask the patient what's bothering them and how it's affecting their life, and then come up with measures that correlate well with how it's affecting them.

If they say they can't walk far, that's easy to test.
 

adreno

PR activist
Messages
4,841
So the real difference is that psychiatry still relies very heavily on unblindable treatments i.e. psychotherapy and does not want to face up to the fact that if it wants to study conditions where the right endpoint is subjective then some clever methodology is needed to stand in for blinding. There are ways of doing this but they are complicated.
So are there any valid ways to test psychotherapy interventions? Could PACE have been set up properly? I suppose neither the subjective outcomes nor the unblinding could be avoided.
 

RogerBlack

Senior Member
Messages
902
So are there any valid ways to test psychotherapy interventions? Could PACE have been set up properly? I suppose neither the subjective outcomes nor the unblinding could be avoided.

PACE could have been set up properly, even without objective measures like actometers or stress tests, and could at least have answered many of the criticisms levelled at it.
(Though it would then have come up with a null result, so it wouldn't have been criticised.)

For example, with CFS, you start out with a sane measure of disability and reasonably well-chosen limits for it.
Explore health and activities at the start of the intervention, and provide similar positive (or neutral: 'your participation is important') messages about whichever trial arm the patients are in.
Ask about employment, the help they get, and the activities they are unable to do at the beginning, the end, and a year after, along with scales like SF36 and friends.

Perhaps even ask: 'If I were to give you 1000 pounds to run a marathon in 6 weeks, what would you do?'

PACE could have been a good trial, even with only carefully selected self-reported outcomes.
Edit: 'only properly selected self-reported' ...
I'm sure the existing outcomes were very carefully selected.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
BPS Axiom 1. that ME/CFS is a creation of the mind, possibly triggered by something physical, but perpetuated by false illness beliefs and incorrect behaviour (inactivity) causing deconditioning.

BPS Axiom 2. that questionnaire data is as real and reliable as biological data.

Everything they deduce in their research stems logically from these 2 axioms/assumptions.
See my blog: http://forums.phoenixrising.me/index.php?entries/the-witch-the-python-the-siren-and-the-bunny.1149/
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Fatigue, pain and depression are the three big symptom clusters that are not really fit to be used directly for diagnosis by themselves. They can be part of diagnostic criteria, but the other parts are just as important. All are so broad that you might as well be saying the patient has a symptom, without specifying which, and diagnosing them with Symptom Disorder. Identifying the underlying mechanisms and measuring those is critical.

All areas of medicine have problems. Yet to the extent we can use good scientific methodology we can limit or at least understand the limits of those problems. To do that you first have to use good methodology.

For example, I think Pain Disorder, Fatigue Disorder, and Depressive Disorder are just labels for clusters of very serious symptoms, but they cannot be used to identify any specific diagnostic entity unless it's a made-up or inferred entity.

Depression in the DSM-5 is a case in point. Nobody doubts the symptoms are serious and real, but the disorder is essentially fictitious, decided on by a group of people reaching partial or full consensus.

So when you want to study depression it makes very little sense not to subgroup according to all the various symptoms and findings, and to try to identify a distinct disorder within the larger one. For a start you have to account for all the patients cured (and I mean cured, not just having the depression alleviated) by antivirals, or by correcting nutritional deficiencies, and so on.

The way things are now, if cancer were not understood to be biological and given as much attention in research, hypothetically we might find, using BPS methods, that patients are given an antidepressant or psychotherapy, plus a painkiller (perhaps upgraded to an opioid at some point), and told to either rest or exercise. This kind of problem may apply to every disorder or disease that is not currently understood and is treated as psychiatric.

There are of course caveats relating to neuropsychiatry and other psychiatric hybrids, but then the focus of psychiatry is on managing, not curing, behavioural symptoms, while the other disciplines try to deal with the cause.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
So are there any valid ways to test psychotherapy interventions? Could PACE have been set up properly? I suppose neither the subjective outcomes nor the unblinding could be avoided.

Clinical pharmacologists put a lot of emphasis on dose response curves. We do not really believe a result with a drug unless we can show the difference between an inadequate dose, a just adequate dose and a dose that is more than adequate (no better than the adequate dose). You want all three results before you are confident.

The equivalent for psychotherapy can be done with dosing - for instance, seeing if the number of sessions makes a difference. But that alone would not be adequate. More useful would be to break the psychotherapy intervention down into its components and see which had the effect. For instance, you could get someone to video a series of sessions with a set of patients and extract what appear to be the important messages from the therapist. You then see if these messages, in booklet form, discussed with other patients by someone with no psychotherapy training, are as good as having someone 'trained' doing it. And so on. There was, I think, quite an interesting study comparing a UK tertiary centre (probably King's) with a Dutch centre, and the Dutch centre seemed to get better results. So maybe PACE failed because the therapists weren't actually very good at CBT!

Validating therapies that cannot be blinded in this sort of way would be a long hard process but sometimes studies show important things up rather simply. If a sympathetic person with a booklet is as good as a therapist then things are much easier because there is no need to train therapists. If the sympathetic person without the booklet is also as good then clearly CBT has no specific value. Etc....

And as Roger Black says, PACE could have been a darn sight better just by tidying up some simple issues.
 
Messages
2,391
Location
UK
We do not really believe a result with a drug unless we can show the difference between an inadequate dose, a just adequate dose and a dose that is more than adequate (no better than the adequate dose). You want all three results before you are confident.
This notion of bracketing is something I can readily identify with, and is a concept that is almost second nature in so many things, from the quite complex to the very simple. For many analogue adjustments (such as tuning a radio for instance), it is often more intuitive and gives higher confidence to tweak it just past where it seemed best, then back the other way similarly just past ideal, then set it in between; the bracketed values either side offering confirmation the setting is good.
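For the programmatically minded, the bracketing procedure described above can be sketched in Python. The "signal quality" function and all numbers are invented stand-ins for the radio being tuned (here the true best setting is 5.0): sweep past where it seemed best in each direction until quality has clearly dropped, then settle midway between the two overshoot points.

```python
# Sketch of bracketing an analogue adjustment. Everything here is an
# invented illustration, not taken from the thread.

def signal_quality(setting):
    """Invented unimodal response curve: best reception at setting = 5.0."""
    return -(setting - 5.0) ** 2

def bracket_best(lo, hi, step=0.01, drop=0.5):
    """Bracket the peak from both sides and return the midpoint."""
    def first_overshoot(start, direction):
        # Walk from 'start', tracking the best quality seen; stop at the
        # first setting clearly 'drop' below that best, i.e. past the peak.
        best = float("-inf")
        x = start
        while lo <= x <= hi:
            q = signal_quality(x)
            best = max(best, q)
            if best - q >= drop:
                return x
            x += direction * step
        return x - direction * step  # never overshot: return last point

    upper = first_overshoot(lo, +1)  # tweak up until just past ideal
    lower = first_overshoot(hi, -1)  # tweak down until just past ideal
    return (lower + upper) / 2       # set it in between

print(round(bracket_best(0.0, 10.0), 2))  # very close to the peak at 5.0
```

The two just-past-ideal points on either side confirm the setting between them is good - the same confirmation the bracketed doses give in a dose-response curve.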
 

Jonathan Edwards

"Gibberish"
Messages
5,256
This notion of bracketing is something I can readily identify with, and is a concept that is almost second nature in so many things, from the quite complex to the very simple. For many analogue adjustments (such as tuning a radio for instance), it is often more intuitive and gives higher confidence to tweak it just past where it seemed best, then back the other way similarly just past ideal, then set it in between; the bracketed values either side offering confirmation the setting is good.

Yes, and at a deeper level it confirms that you are actually turning the right button! The music might have appeared because you were turning the volume button, not the tuning button. So a dose response curve is a way of checking that the cause and effect relationship you think you are observing is not some spurious artefact nothing to do with the drug.
 

A.B.

Senior Member
Messages
3,780
I'm not sure how a dose response curve could allow one to distinguish between a placebo effect and a real effect on the illness [looking at it from a CFS angle].

Maybe over time the positive spin and hopefulness will wear off despite receiving CBT, exposing a lack of effect. In my experience unrealistically positive spin and hopefulness can persist for a long time. Patients generally don't want to face the reality of not having a treatment.

It seems like there isn't really an alternative to using some objective marker of daily life functioning, or blinding.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I'm not sure how a dose response curve could allow one to distinguish between a placebo effect and a real effect on the illness [looking at it from a CFS angle].

Maybe over time the positive spin and hopefulness will wear off despite receiving CBT, exposing a lack of effect. In my experience unrealistically positive spin and hopefulness can persist for a long time. Patients generally don't want to face the reality of not having a treatment.

It seems like there isn't really an alternative to using some objective marker of daily life functioning, or blinding.

I agree. I think I said that a traditional dose-response approach is unlikely to be the answer for unblindable therapies. But the principle of the dose-response curve is that, whereas it is very easy to get spurious effects from placebos and other such things, those effects are pretty unlikely to show the shape of dose-response curve that one would predict from knowing the mechanism of action of a drug. Drugs produce very predictable sigmoid curves that are hard to get with non-pharmacological effects. The same applies for an experiment on cells in the lab. Your theory should not only say that X will work but that it will show a dose response of such and such a form. I would be prepared to bet that if a study were done using the number of therapy sessions as the 'dose' you would get a curve that looked worryingly like the patients were just ticking boxes to please the therapist. At least if you put your treatment to that test you can claim some sort of evidence for it not being purely a placebo effect.

Of course if like Dr Knoop you believe that CBT is a placebo anyway, then things get sticky!
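For illustration only, here is a small Python sketch of the sigmoid dose-response shape described above, using the classic Hill equation. All parameter values (emax, ec50, the Hill coefficient n) are invented; nothing here comes from PACE or any real drug.

```python
# The Hill equation produces the predictable sigmoid dose-response
# curve a real drug effect shows. Parameters are invented.

def hill_response(dose, emax=100.0, ec50=10.0, n=2.0):
    """Response rises sigmoidally with dose and plateaus at emax."""
    return emax * dose ** n / (ec50 ** n + dose ** n)

# The three doses a pharmacologist wants: inadequate, just adequate,
# more than adequate. A genuine drug effect gives near-zero response,
# then roughly half-maximal, then a plateau no better than adequate.
inadequate, adequate, excess = 1.0, 10.0, 100.0
curve = [hill_response(d) for d in (inadequate, adequate, excess)]
# curve is roughly [1.0, 50.0, 99.0]

# A placebo-like artefact typically shows no such dose dependence -
# here modelled as the same response at every dose:
flat = [hill_response(d, emax=30.0, ec50=1e-6) for d in (1.0, 10.0, 100.0)]
```

If "number of therapy sessions" were the dose, one could ask whether the observed responses look more like `curve` or more like `flat`.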
 
Messages
2,391
Location
UK
I'd trust distance walked more than any questionnaire as a measure of illness/disability in ME/CFS. It's not perfect, but so much better than subjective measures.
I think it would need to be distance walked factored in with speed, given how ME affects people so differently. My wife can walk sometimes distances that I suspect exceed what many people in PR can manage, but she is only able to do it oh-so-very slowly ... and it varies. Some days she can walk maybe a mile or two, with lots of little stops along the way (taking photographs, which makes for a more natural "workflow"), walking at 1 or maybe 2 mph. At other times, like now, it is a few hundred yards, at probably less than 1 mph.
 

user9876

Senior Member
Messages
4,556
I think it would need to be distance walked factored in with speed, given how ME affects people so differently. My wife can walk sometimes distances that I suspect exceed what many people in PR can manage, but she is only able to do it oh-so-very slowly ... and it varies. Some days she can walk maybe a mile or two, with lots of little stops along the way (taking photographs, which makes for a more natural "workflow"), walking at 1 or maybe 2 mph. At other times, like now, it is a few hundred yards, at probably less than 1 mph.

There is a fluctuation issue, as you say, but also: could someone walk two days in a row, or would they need a month to recover from a walking test? Which is probably why the 6MWT was not done at the end by roughly a quarter of all patients.
 
Messages
2,391
Location
UK
Yes, and at a deeper level it confirms that you are actually turning the right button! The music might have appeared because you were turning the volume button, not the tuning button. So a dose response curve is a way of checking that the cause and effect relationship you think you are observing is not some spurious artefact nothing to do with the drug.
I do like that, because it is so, so true. What in engineering (and maybe other disciplines) we would call a sanity check.
 

JohnCB

Immoderate
Messages
351
Location
England
@trishrhymes . It seems to me that the real difference between the BPS axioms in your analogy and the mathematical axioms (and also the physical laws) is the test of time. Mathematical axioms and physical laws are not proven in a formal sense. They are accepted because they do work. They have been tested over time. The mathematical axioms have been tested to destruction over the millennia since Archimedes and his chums were scratching triangles on their slates, and the physical laws have resisted falsification over the centuries since Kepler and Newton were busy.

These rules have been really hammered over time and tested to destruction. Sadly the BPS crew have taken a set of assumptions and hardly tested these assumptions at all.
 

CFS_for_19_years

Hoarder of biscuits
Messages
2,396
Location
USA
I think it would need to be distance walked factored in with speed, given how ME affects people so differently. My wife can walk sometimes distances that I suspect exceed what many people in PR can manage, but she is only able to do it oh-so-very slowly ... and it varies. Some days she can walk maybe a mile or two, with lots of little stops along the way (taking photographs, which makes for a more natural "workflow"), walking at 1 or maybe 2 mph. At other times, like now, it is a few hundred yards, at probably less than 1 mph.
The 6-minute Walk Test is administered with the directions "Walk as far and as fast as you can in 6 minutes." It's a standardized test that was used in PACE.