
Jonathan Edwards: PACE team response shows a disregard for the principles of science

adreno

PR activist
Messages
4,841
The more I read, the more I think that they are just clueless. All I hear is "we did what everyone else did", and "we went through this and that committee", and "we went through peer review" – "and everyone said it was okay – SO WHY ARE YOU SAYING IT'S NOT GOOD ENOUGH???" I think they really believe they did a marvellous job on the PACE trial, and genuinely do not understand all the criticism thrown at them.
 

Esther12

Senior Member
Messages
13,774
The more I read, the more I think that they are just clueless. All I hear is "we did what everyone else did", and "we went through this and that committee", and "we went through peer review" – "and everyone said it was okay – SO WHY ARE YOU SAYING IT'S NOT GOOD ENOUGH???" I think they really believe they did a marvellous job on the PACE trial, and genuinely do not understand all the criticism thrown at them.

I think that some of them are that way. I'd be surprised if none of them were brighter than that.
 

Barry53

Senior Member
Messages
2,391
Location
UK
The more I read, the more I think that they are just clueless. All I hear is "we did what everyone else did", and "we went through this and that committee", and "we went through peer review" – "and everyone said it was okay – SO WHY ARE YOU SAYING IT'S NOT GOOD ENOUGH???" I think they really believe they did a marvellous job on the PACE trial, and genuinely do not understand all the criticism thrown at them.
When people like that don't have anything of substance left to say, they just fall back to spouting emotive distractions, to try and obfuscate the fact ... they have nothing of substance to say. Politicians do it all the time - win people over emotionally, blinding them to the lack of factual substance.
 

Barry53

Senior Member
Messages
2,391
Location
UK
As someone unfamiliar with clinical trials, other than what I have gleaned since joining PR last year, there seems to me a fundamental disconnect between the psychiatric and biophysical approaches to them. To me as an engineer, the notion of running a trial that does not strive for objectivity, be it achieved directly or indirectly, seems bonkers. Reading @Jonathan Edwards' paper made me wonder if (psychiatric joking aside) there really could be a fundamental reason why psychiatry seems to have such an alien-seeming mindset about clinical trials.

As Jonathan points out, in a clinical trial you ideally want truly objective outcome measures, because then the trial results themselves are going to be objective. No ambiguity. If however the outcome measures cannot themselves be objective, but are instead subjective, then steps are taken to null out the subjectivity as much as possible by blinding, and so - I think I am right here - strive for a fair approximation of objectivity in the end results, despite the subjective measures. So my point here is that no matter what the trial methodology is, the aim is objectivity in the trial results, even if that involves having to null out some subjectivity where needed. Objectivity is king.
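To make this concrete, here is a minimal simulation sketch, with entirely invented numbers (my own illustration, not anyone's actual trial data), of why blinding matters when outcomes are subjective: give a null treatment to both arms, add a small reporting bias to the unblinded subjective measure, and the subjective comparison can look "significant" while the objective one correctly does not.

```python
# Minimal simulation: a null treatment plus unblinded subjective reporting.
# All numbers are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100  # participants per arm

# True underlying severity: identical distributions in both arms (null effect).
control_true = rng.normal(50, 10, n)
treated_true = rng.normal(50, 10, n)

# Objective outcome: measured with noise but no reporting bias.
control_obj = control_true + rng.normal(0, 5, n)
treated_obj = treated_true + rng.normal(0, 5, n)

# Subjective outcome, unblinded: treated patients know they got "the therapy"
# and shift their self-ratings by a few points (politeness / expectation bias).
reporting_bias = 5.0
control_subj = control_true + rng.normal(0, 5, n)
treated_subj = treated_true - reporting_bias + rng.normal(0, 5, n)  # lower = less fatigue

print("objective  p =", stats.ttest_ind(control_obj, treated_obj).pvalue)
print("subjective p =", stats.ttest_ind(control_subj, treated_subj).pvalue)
# Typical run: the objective comparison is non-significant, while the biased
# subjective comparison looks like a convincing treatment effect.
```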

But if I try and put myself into a psychiatrist's shoes, investigating depression for instance ... well surely the overwhelming facets of depression are highly subjective. Yes I appreciate depression may eventually prove to have chemical/physical underpinnings, but if a depression patient comes into a psychiatrist's surgery one day and says they feel fantastic and life is great, and they continue to report that in the coming days/weeks/months/years, then they could reasonably be deemed recovered. Yet the outcome measures the patient is reporting are highly subjective, the actual symptoms of depression being in many ways highly subjective. (And before anyone objects, a close relative of mine suffered from very severe depression, so I do have some insight.)

So I cannot help wondering if this leads psychiatrists to come at things with a completely different mindset. Is it their norm to treat subjective measures as "hard evidence" in their world, making it so difficult for them to comprehend why others think differently? It feels like two worlds colliding, and I am just trying to understand whether there might be more to it than we think.

Please do not misunderstand me. I cannot abide the sloppy methodology of PACE at all. But I do think we have a duty to try and understand why things may be, because how else can we truly make things better in the future? A tutor of mine was once asked what he thought was the most difficult part of solving real-world problems, and I always remember (and can confirm) his answer: "The most difficult part of solving any problem is, invariably, trying to properly understand the problem in the first place". So I think we must explore all possibilities in our efforts to understand.

Is it that psychiatry is so immersed/contented in its world of subjective outcome measures when dealing with highly subjective conditions, that they lose the plot and simply cannot see that their rationale is not always valid?

Does it mean that in future, where conditions are disputed as being psychiatric or biophysical in nature, there must be strong safeguards to ensure that psychiatric trialling methodology is not applied unconditionally?
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Is it that psychiatry is so immersed/contented in its world of subjective outcome measures when dealing with highly subjective conditions, that they lose the plot and simply cannot see that their rationale is not always valid?
It's worse than that. There is a long history of researchers getting grants, prestige and promotions based on these methods. It's an entire industry that has embraced poor science. It occurs with the DSM as well ... how many of its diagnoses can be objectively confirmed? It seems that as soon as something has a hard biomarker it goes to another specialty, which is often neurology.

The issue here is that psychiatry is claiming symptoms. Their diagnoses are based on symptoms. There is very little in the way of hard biomedical findings, though there are attempts to get there, such as with depression. Subgroup analysis is often lacking.

Sadly a lot of CFS and even ME research has the same problems, but those researchers rarely make overblown claims (rarely, not never), and are basing their decisions to a large extent on objective biomedical findings, not unproven theory.

Psychiatry needs to embrace modern scientific methodology. That is when we see the issues, though: in trying to do that you find:

1. A paucity of good medical technology to identify issues in the brain.
2. The diagnostic categories being researched are unstable and highly inclusive - you can never be sure what group you are studying.
3. Many studies cannot be blinded.
4. The use of subjective outcomes has become normalized; the usual caveats you see in sociology, for example, are rarely mentioned in psychiatric research. Sociology is a better match - they should be taking their lead from it, not pretending psychiatry is on the same footing as, for example, physics.

Not all psychiatric research is bad or even poor, but bad practice is so common it's considered normal. The rewarding of this kind of science reinforces it, as do issues with peer review and funding, and outside financial rewards such as from insurers. When it aligns with current political ideology it can be particularly pernicious.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
https://www.facebook.com/david.tuller.1/posts/10154194468391829?pnref=story

In plain English: Dr. White knew that the method they used to create the "normal ranges" for fatigue and physical function yielded distorted "normal ranges" that included many more people than a standard "normal range." He and his colleagues warned about this in their 2007 paper. However, no such warning was contained in the PACE papers. In other words, they understood that the method they used yielded distorted "normal ranges" yet chose not to mention this when writing up their PACE results. This is very, very deceptive research behavior.

... it would be very, very difficult to omit something like that unintentionally. They have known all along that their "normal ranges" were no such thing.
 

Snowdrop

Rebel without a biscuit
Messages
2,933
@Barry53

I understand your thinking: come at a problem by first trying to understand it.
Unfortunately I believe that this particular problem - understanding how they could have been so sloppy - is not amenable to solving, because we are being rational and the motivations are political, IMO.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
The issues with psychogenic research, including PACE, are not about rational choice. They are indeed a better fit for cultural, ideological, political and cult-like thinking. So anyone trying to explain it using just rational thinking can get befuddled by the whole thing. It's taken me a long time to learn this, and I am not convinced I am quite there yet.

You see this most clearly in economics, where many of the theories are based on the notion of rational consumers and investors, when the evidence shows they behave far from rationally. However the issues here include limitations in how the brain works. I think that is an unexplored area for explaining why psychobabble is persuasive.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
...
Is it that psychiatry is so immersed/contented in its world of subjective outcome measures when dealing with highly subjective conditions, that they lose the plot and simply cannot see that their rationale is not always valid?

Does it mean that in future, where conditions are disputed as being psychiatric or biophysical in nature, there must be strong safeguards to ensure that psychiatric trialling methodology is not applied unconditionally?

Dear Barry,
Your post is an excellent summary of how a person from another field should get to grips with the central problem of PACE. This is what I am trying to point out to Wessely and he is pretending not to follow.

We are all agreed that the key outcomes in many diseases are genuinely subjective. Mood in depression is an obvious one. Fatigue (or whatever it really is) for ME is another. And the other big one is pain. Rheumatologists like me spend their lives handling mostly pain, so the most valid endpoint in rheumatology is just as subjective as in psychiatry and we acknowledge that.

The difference is that around 1980 all rheumatologists came to realise that where one was using subjective outcomes as the primary endpoint, unblinded trials were useless. We had lots of physiotherapists claiming they made people better on subjective outcomes, but all that was happening was that the patients were being kind and saying they were better.

And there have been double-blind trials of drugs and other physical modalities in psychiatry with subjective outcome measures that are perfectly satisfactory. A lot have given negative results but some have established valid treatments.

So the real difference is that psychiatry still relies very heavily on unblindable treatments, i.e. psychotherapy, and does not want to face up to the fact that if it wants to study conditions where the right endpoint is subjective then some clever methodology is needed to stand in for blinding. There are ways of doing this but they are complicated.

It is very clear to me that the psychiatry/clinical psychology world has not faced up to this problem. For reasons I will not elaborate, relating to my article, it became clear that even those who might criticise PACE from within psychology would prefer not to admit that 'not good enough is just not good enough' when it comes to psychological trials.

As has been discussed by PR members on a previous thread there may actually be something psychiatrists can learn directly from rheumatology in terms of outcome measures. In rheumatoid arthritis (RA) the American College of Rheumatology devised a very clever system of scoring that gives one nearly the best of both worlds.

To show a grade of improvement, say ACR20, the patient has to be 20% better on the subjective measures that really matter to them. However, they also have to be 20% better on at least some other more objective measures - to confirm that the 20% on the first measures is backed up by what you would expect. So in ME it would be reasonable to require a 20% improvement in fatigue and a 20% reduction in walking time or a 20% increase in actometer readings. These scoring systems are not the sort of bogus pseudo-numerical scores you get with things like the SF-36, where you add up unrelated marks. There is no arithmetic involved. You just have to pass a chosen threshold twice (or in fact five times for ACR20). The relevant maths is the maths of confidence in the validity of the result, not the pseudo-maths of additive scoring.
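For the curious, here is a toy sketch of the ACR20-style rule described above, transplanted to hypothetical ME measures. The measure names and the 20% threshold applied to them are illustrative only, not a validated instrument; the point is that nothing is added up - each measure must clear the threshold on its own.

```python
# Toy ACR20-style responder rule for hypothetical ME/CFS measures.
# Measure names and thresholds are illustrative, not a validated instrument.

def pct_improvement(baseline: float, followup: float) -> float:
    """Percentage improvement, where lower scores mean less impairment."""
    return 100.0 * (baseline - followup) / baseline

def me20_responder(fatigue_base, fatigue_follow,
                   walk_time_base, walk_time_follow,
                   threshold=20.0) -> bool:
    """A patient 'responds' only if BOTH the subjective measure (fatigue
    rating) and an objective one (timed walk, in seconds) improve by at
    least 20%. No scores are added together; each must clear the bar."""
    return (pct_improvement(fatigue_base, fatigue_follow) >= threshold and
            pct_improvement(walk_time_base, walk_time_follow) >= threshold)

# Patient A: reports feeling 30% better AND walks 25% faster -> responder.
print(me20_responder(30, 21, 600, 450))   # True
# Patient B: reports feeling 30% better but walks no faster -> not a responder.
print(me20_responder(30, 21, 600, 600))   # False
```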
 

trishrhymes

Senior Member
Messages
2,158
Warning: incoherent ramblings of a former mathematician... Feel free to ignore.

Edit: @Jonathan Edwards post above was being written while I was writing this. Read him in preference to my ramblings! Thank you @Jonathan Edwards. Admirably clear as ever.
.....

This situation reminds me a bit of studying maths; in particular I'm thinking of geometry. As a child I learned Euclidean geometry, where a certain set of axioms is accepted as true, for example that parallel lines never meet. All the rest of the geometry can then be proved from those axioms by deductive reasoning.

Then at university I studied other geometries, like projective geometry, where one axiom is changed - in this case parallel lines meet at a point at infinity (don't ask). This gave rise to a whole set of new and beautiful theorems (think art with and without perspective).

Then there's topology, where distance doesn't matter, it's all about whether two points can be joined by an unbroken line along a surface (think Moebius band).

To a topologist, a teacup is the same as a donut, because each is a solid shape with one hole.

The point is, completely different views of the world are developed, perfectly logically, using deductive reasoning, with just a single axiom (assumption) changed.

In ME/CFS the psychiatrists/psychologists of the BPS school start with the axioms

BPS Axiom 1. that ME/CFS is a creation of the mind, possibly triggered by something physical, but perpetuated by false illness beliefs and incorrect behaviour (inactivity) causing deconditioning.

BPS Axiom 2. that questionnaire data is as real and reliable as biological data.

Everything they deduce in their research stems logically from these 2 axioms/assumptions.

In their world they are being perfectly logical when they conclude, using axiom 1, that for a patient who is both fatigued and inactive, the causal direction is that inactivity causes fatigue.

Using axiom 2, they deduce that questionnaire data is not subjective; that it is measured on linear scales, fits a normal distribution (or can be treated as if it does), and is reliable, repeatable and not subject to influence; and that it can therefore be analysed using the same statistical methods as linear biological/physical data.

The biological school starts from the axiom that there is an ongoing biological cause of fatigue and the other symptoms, and the causal direction is from biochemistry to fatigue to inactivity.

Each school of thought has its own internal logic. The problem for us is that the axioms on which the BPS school is built are false in the real world, however 'beautiful' it may be in their imaginary world.

They therefore treat their questionnaire data the same as linear physical data, use statistical tests that are inappropriate, see no need for blinding in trials, and make deductions that fit the internal logic, but are wrong because the axioms are false.
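To make the "inappropriate statistics" point concrete, here is a small sketch with invented numbers. A t-test treats questionnaire categories as if the distances between them were real, so a perfectly legitimate monotonic relabelling of the scale points shifts its p-value; a rank-based test, which uses only the order that ordinal data actually carries, is unaffected.

```python
# Invented Likert-style data: a t-test treats the category labels as real
# distances, so a monotonic relabelling of the scale changes its p-value;
# a rank-based test (Mann-Whitney U) only uses order and is unaffected.
import numpy as np
from scipy import stats

group_a = np.array([1, 1, 2, 2, 2, 3, 3, 3, 3, 4])
group_b = np.array([2, 3, 3, 3, 4, 4, 4, 4, 5, 5])

# A monotonic re-scoring of the same five categories (order preserved).
relabel = {1: 1, 2: 2, 3: 3, 4: 10, 5: 30}
a2 = np.array([relabel[x] for x in group_a])
b2 = np.array([relabel[x] for x in group_b])

print("t-test, original scoring :", stats.ttest_ind(group_a, group_b).pvalue)
print("t-test, re-scored        :", stats.ttest_ind(a2, b2).pvalue)
print("Mann-Whitney, original   :", stats.mannwhitneyu(group_a, group_b).pvalue)
print("Mann-Whitney, re-scored  :", stats.mannwhitneyu(a2, b2).pvalue)
# The two Mann-Whitney p-values are identical; the two t-test p-values differ,
# because the t-test is using distances the scale never actually measured.
```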

By applying these false axioms to the real world they are doing immeasurable harm, just as if we tried to apply topology to the real world and defined teacups as the same as donuts we might have a problem or two.

End of ramble.

Edit: I should have added the obvious chain of deduction that is so dangerous in BPS (starting point in this logical pathway depends on whether CBT or GET is used):

BPS model logical deductions:
Changed beliefs leads to increased activity leads to increased fitness and decreased fatigue leads to recovery.

Biological model, the logical deductions:
Changed beliefs leads to false confidence leads to increased activity leads to increased biochemical problems leads to increased fatigue and other symptoms (i.e. PEM) leads to relapse.

Each of these is a perfectly logical chain of reasoning. But one of them starts with a false axiom.
 

A.B.

Senior Member
Messages
3,780
BPS Axiom 2. that questionnaire data is as real and reliable as biological data.

I would say axiom 2 is that improvement on subjective measures means that the patient is being helped in some way, even if there is no objective improvement.

Which seems to boil down to:

1. The patient is being helped even if they can't prove it (and we are asked to trust them).
2. The psychological dimension is separate from the body (if a set of psychological parameters can change without a change in a set of physical parameters, then the psychological parameters are not controlling the physical ones - and they are clearly arguing that some real change, as opposed to a placebo effect, is taking place at the psychological level).

Ironically they also like to talk about overcoming dualism and how the mind and emotions affect the body so they are clearly also arguing that the psychological dimension is not separate from the body. So which is it?
 

arewenearlythereyet

Senior Member
Messages
1,478
Food rambling this time. Please feel free to ignore.

For assessment of foods (which is highly subjective) we use rating scales and methodology that have been standardised and tested to eliminate as much subjective bias as possible. This includes the specific wording of the scale points, the number of points on the scale, the number of respondents needed to achieve statistical significance, and, in some cases, using an odd number of points on the scale rather than an even one. These are standard across the food industry around the world. This allows you to set a hurdle or threshold that everyone understands and can relate to.

They are not perfect, and if used inappropriately or under sloppy testing conditions they can lead to misinterpretation, but they don't need a lot of interpretation or statistical validation when used correctly.

I'm wondering whether such a scale could be devised for the various aspects of ME (fatigue, pain, etc) with valid pre-testing of the rating scale for bias like the food ones were?
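As a flavour of the pre-testing involved, here is a small sketch (all numbers invented) of the standard power calculation used to decide how many respondents a rating-scale comparison needs; statsmodels provides this off the shelf.

```python
# How many respondents does a 9-point rating-scale comparison need?
# Standard two-sample power calculation; the effect size is an assumption.
from statsmodels.stats.power import TTestIndPower

# Suppose pre-testing suggests the smallest difference worth detecting is
# 0.5 points on the scale, with a standard deviation of 1.5 points.
effect_size = 0.5 / 1.5  # Cohen's d of about 0.33

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # 5% false-positive rate
    power=0.80,   # 80% chance of detecting a real difference
)
print(f"~{n_per_group:.0f} respondents per group")  # roughly 140-145
```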
 

A.B.

Senior Member
Messages
3,780
A decent fatigue scale would ask the patient how impaired they have been by fatigue in the last 7 days or so. Trying to get an absolute measure of fatigue seems hopeless.

PS: that plot of PACE data of self-rated fatigue and distance walked that circulated some time ago showed quite literally no correlation between the two.
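For anyone who wants to check that kind of claim on published data themselves, the computation is only a few lines; the numbers below are invented placeholders, not the PACE data.

```python
# Correlation between self-rated fatigue and metres walked: one line each
# for Pearson (linear) and Spearman (rank). Numbers below are invented.
import numpy as np
from scipy import stats

fatigue_score = np.array([18, 22, 25, 20, 30, 15, 28, 24, 19, 27])
metres_walked = np.array([310, 420, 350, 380, 330, 400, 360, 340, 390, 370])

r, p = stats.pearsonr(fatigue_score, metres_walked)
rho, p_s = stats.spearmanr(fatigue_score, metres_walked)
print(f"Pearson  r   = {r:.2f} (p = {p:.2f})")
print(f"Spearman rho = {rho:.2f} (p = {p_s:.2f})")
# If the two measures really tracked each other, you would expect a clear
# negative correlation (more fatigue, fewer metres walked).
```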
 

trishrhymes

Senior Member
Messages
2,158
I'd trust distance walked more than any questionnaire as a measure of illness/disability in ME/CFS. It's not perfect, but so much better than subjective measures.

A small warning for those using a Fitbit or similar for pacing - which I'm doing, and it's helped a lot. I wear mine on my right wrist because I'm left-handed. It's therefore measuring movement of my right arm, but it's a good proxy for steps most of the time.

Then I fractured my left shoulder and had to use my right hand a lot more. Suddenly I was reaching my day's limit at midday instead of bedtime, even though I was significantly less active. I should have transferred it to my left wrist.

Testing for studies should use an ankle or waist location for an actometer. Also, it's useless for measuring the amount of sleep I get, because I spend my evenings lying quietly in bed reading etc. It usually registers this as sleep!
 

user9876

Senior Member
Messages
4,556
So I cannot help wondering if this leads psychiatrists to come at things with a completely different mindset. Is it their norm to treat subjective measures as "hard evidence" in their world, making it so difficult for them to comprehend why others think differently? It feels like two worlds colliding, and I am just trying to understand whether there might be more to it than we think.

The way I think of it is that there is a quantity to be measured, say 'depression', which cannot be directly measured, so there are observable proxies that can be measured instead. But such proxies are imperfect and subject to biases, measurement errors, non-linearity etc. So we need to understand these characteristics and how they can introduce measurement errors into an experiment.

There may be an issue in imagining there is such a thing as 'depression' or 'fatigue' that is a single measurable concept rather than a composite one. The CFQ splits fatigue into mental and physical fatigue components and then combines them with unequal weighting.
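To illustrate that last point: the CFQ's total is, as far as I know, the sum of 7 physical-fatigue items and 4 mental-fatigue items, so the physical component dominates the composite simply by item count. A toy sketch with invented scores:

```python
# The Chalder Fatigue Questionnaire sums 7 physical items and 4 mental items
# (each scored 0-3 on the Likert scoring), so the composite implicitly
# weights physical fatigue 7:4. All scores below are invented.
import numpy as np

physical_items = np.array([3, 3, 3, 3, 3, 3, 3])  # severe physical fatigue
mental_items   = np.array([0, 0, 0, 0])           # no mental fatigue
print("composite:", physical_items.sum() + mental_items.sum())  # 21 of 33

# Reverse the picture: severe mental fatigue, no physical fatigue.
physical_rev = np.array([0, 0, 0, 0, 0, 0, 0])
mental_rev   = np.array([3, 3, 3, 3])
print("composite:", physical_rev.sum() + mental_rev.sum())      # 12 of 33

# Two very different patients; the single number hides which kind of fatigue
# is present, and the physical component can move the total nearly twice as far.
```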