• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Jonathan Edwards: PACE team response shows a disregard for the principles of science

BruceInOz

Senior Member
Messages
172
Location
Tasmania
The physicist Richard Feynman once said

"We are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress."

and

"The first principle is that you must not fool yourself and you are the easiest person to fool."

In other words, to do science well you need to always be looking to see if your data will support the contrary or alternative view to your favoured one because you may be wrong. Normal human frailty makes this very hard to do but it is essential for good science.

The problem with the BPS crew is that they are very wedded to their beliefs and are unable even to contemplate that there may be an alternative explanation. This has led them not to question whether their measures are reliable, or whether they could better eliminate the biases introduced by unblinded designs with subjective outcomes. They have fooled themselves.
 

lilpink

Senior Member
Messages
988
Location
UK
There is a fluctuation issue, as you say, but also: could someone walk two days in a row, or would they need a month to recover from a walking test? That is probably why the 6MWT was not done at the end by roughly a quarter of all patients.

Wouldn't a VO2 max score simply be the easiest? It would omit patients who were unable to perform it, of course, but it would be properly quantifiable, especially for the type of patients co-opted into PACE, who were ambulant enough to attend the various centres.
 

AndyPR

Senior Member
Messages
2,516
Location
Guiding the lifeboats to safer waters.
Funnily enough this popped up on my Facebook feed and for some reason I thought it was appropriate here... :)
[attached image]
 
Messages
73
I read and very much enjoyed this article by Jonathan Edwards - thank you for writing it

It is funny to see how the methodology of some psychology research like this wilts under the proper scientific scrutiny used by eg immunologists...

I came on here to ask a question to Jonathan if he is still around...(or anyone who wants to let me know)

The PACE trial takes its conclusions from the self-assessed subjective outcomes, and the fact that it is unblinded is the subject of concern in this piece. However, can't more be said of the fact that they did indeed try to obtain objective outcomes - and that these were, as far as we can see, a failure? The way I understand it, their taking objective measures and then mostly abandoning them says a lot about the overall success of the trial, and casts further suspicion on the validity of the subjective results.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I read and very much enjoyed this article by Jonathan Edwards - thank you for writing it

It is funny to see how the methodology of some psychology research like this wilts under the proper scientific scrutiny used by eg immunologists...

I came on here to ask a question to Jonathan if he is still around...(or anyone who wants to let me know)

The PACE trial takes its conclusions from the self-assessed subjective outcomes, and the fact that it is unblinded is the subject of concern in this piece. However, can't more be said of the fact that they did indeed try to obtain objective outcomes - and that these were, as far as we can see, a failure? The way I understand it, their taking objective measures and then mostly abandoning them says a lot about the overall success of the trial, and casts further suspicion on the validity of the subjective results.

Yes, I think the consensus is very much in agreement with you. Tom Kindlon has made this point strongly. There are lots of other criticisms of the trial. I focus on the issue of blinding and subjectivity because it more or less sweeps everything else into the dustpan from the start. But if one concedes the trial is still worth examining then these other points come into play.

The only counter to this is that the recruitment methodology is so poor that one can argue that the trial cannot even give us a negative answer because it may not have recruited a representative cohort. If anyone who gets worse after exercise refused to volunteer, as might be expected, then the whole thing becomes meaningless. It is not even a study of ME/CFS.
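The recruitment point can be made concrete with a toy simulation (all numbers here are assumed and purely illustrative, not drawn from PACE data): if patients who deteriorate after exertion mostly decline to volunteer, the recruited cohort's average response to exercise no longer resembles the population's.

```python
# Hypothetical sketch of the selection problem: if patients who deteriorate
# after exertion decline to volunteer, the recruited cohort's response to an
# exercise programme no longer reflects the patient population at all.

import random

random.seed(7)

# Assumed population: 60% deteriorate with graded exercise (-1.0), 40% are
# unaffected or mildly helped (+0.2). Numbers are illustrative only.
population = [-1.0] * 600 + [0.2] * 400

# Those who expect to deteriorate mostly stay away from the trial
# (here, only ~10% of them volunteer).
volunteers = [x for x in population if x > 0 or random.random() < 0.1]

pop_mean = sum(population) / len(population)
vol_mean = sum(volunteers) / len(volunteers)

print(f"population mean response: {pop_mean:.2f}")  # -0.52: harm on average
print(f"recruited mean response:  {vol_mean:.2f}")  # near zero: looks far better
```

The recruited cohort's mean is dominated by the subgroup that tolerates exertion, so a trial on volunteers can look benign even when the therapy harms most of the population it claims to study.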
 
Messages
73
Yes, I think the consensus is very much in agreement with you. Tom Kindlon has made this point strongly. There are lots of other criticisms of the trial. I focus on the issue of blinding and subjectivity because it more or less sweeps everything else into the dustpan from the start. But if one concedes the trial is still worth examining then these other points come into play.

The only counter to this is that the recruitment methodology is so poor that one can argue that the trial cannot even give us a negative answer because it may not have recruited a representative cohort. If anyone who gets worse after exercise refused to volunteer, as might be expected, then the whole thing becomes meaningless. It is not even a study of ME/CFS.

Thanks for the response - yes, that makes sense. The flaw you mention is so fundamental. From a distance it seems that some of these problems are accepted, and perhaps ignored, in psychological research... is it held to the same standards as other areas?
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I'm not sure how a dose response curve could allow one to distinguish between a placebo effect and a real effect on the illness [looking at it from a CFS angle].
What if it's double-blinded dosage? Neither the therapist nor the patient knows what dose they are getting. Under these conditions a dose-response curve would be very telling.

I think not having dosage blinded could be an issue in interpreting data, even more so if the outcomes are subjective, and the therapy is also substantially subjective, as in talk therapy.

Not having objective outcome markers is a big issue. It's already too easy to mess around with data if you don't use a rigorous protocol; if you have a waffly, talk-based therapy then it's hard to be sure you are seeing anything objective.

We actually need a bigger outcome-measure pool for ME and CFS. I hope some of the current research under way might lead to those. Once we have enough, we can push for some to be considered quality outcome measures, and against those that are not. In other words, we could push for a minimum methodological standard; research that does not measure up is, by definition, substandard.
 

Barry53

Senior Member
Messages
2,391
Location
UK
There are lots of other criticisms of the trial. I focus on the issue of blinding and subjectivity because it more or less sweeps everything else into the dustpan from the start. But if one concedes the trial is still worth examining then these other points come into play.
It feels to me that if the PACE methodology were being forensically analysed, say for educational purposes, then it might be highly beneficial to break the methodology down into all its component parts, with the validity of each component then quantified/qualified as best as possible. I can imagine that a final value for overall confidence might then drop out (at a naive guess) as the multiplication of all those various numbers arrived at for each component - for PACE, I'm sure, it would be infinitesimal. But I suspect it would be very educational to see the various contributions made by each component to the overall number. Indeed it might well benefit some currently practising researchers of the BPS kind. Maybe genuinely educate some other well-intentioned but misguided researchers also. Maybe really bring home where all the weak spots are, and that in many ways it is a chain of confidences, needing only one weak link to bring it all crashing down.
 
Messages
60
If anyone who gets worse after exercise refused to volunteer, as might be expected, then the whole thing becomes meaningless. It is not even a study of ME/CFS.

Yes, I would also suggest that anyone in the trial who did not get worse with GET did not have ME/CFS. If ME/CFS is defined as a condition that gets worse following exertion, then anyone whose condition is not worsened by repeatedly exerting themselves cannot have ME/CFS by definition.

Again, that rendered the trial meaningless as a study of ME/CFS before it began, although I'm sure there is value in exposing all the methodological flaws and biases in order to show just how dreadful a piece of research it was in every way.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
It feels to me that if the PACE methodology were being forensically analysed, say for educational purposes, then it might be highly beneficial to break the methodology down into all its component parts, with the validity of each component then quantified/qualified as best as possible. I can imagine that a final value for overall confidence might then drop out (at a naive guess) as the multiplication of all those various numbers arrived at for each component - for PACE, I'm sure, it would be infinitesimal. But I suspect it would be very educational to see the various contributions made by each component to the overall number. Indeed it might well benefit some currently practising researchers of the BPS kind. Maybe genuinely educate some other well-intentioned but misguided researchers also. Maybe really bring home where all the weak spots are, and that in many ways it is a chain of confidences, needing only one weak link to bring it all crashing down.

If you want to convince people, this detailed "forensic analysis" is a waste of time. This is why they've gotten away with their questionable research practices for so long - few people care about the details. The key is to focus on the big picture, the strongest argument and the consequences. This is why the commentaries of Edwards and Shepherd were important - Edwards emphasising the high likelihood of bias when relying on subjective outcomes without blinding, and Shepherd pointing to the impact that questionable research practices have on patients - loss of trust.

(disclaimer - these are the two main issues I have been talking about for years so naturally I'm pleased to see such issues brought up in the commentaries - confirmation bias :p).
 

Barry53

Senior Member
Messages
2,391
Location
UK
If you want to convince people, this detailed "forensic analysis" is a waste of time. This is why they've gotten away with their questionable research practices for so long - few people care about the details. The key is to focus on the big picture, the strongest argument and the impact this has. This is why the commentaries of Edwards and Shepherd were important - Edwards emphasising the high likelihood of bias when relying on subjective outcomes without blinding, and Shepherd pointing to the impact that questionable research practices have on patients - loss of trust.

(disclaimer - these are the two main issues I have been talking about for years so naturally I'm pleased to see such issues brought up in the commentaries - confirmation bias :p).
I do agree with you entirely, and you make me realise my post did not get across - at all - what I meant it to. I believe strongly that there should, and could, be a very powerful but much simplified presentation of the various facets (components, I called them) within the methodology of a clinical trial. Moreover, one demonstrating how the confidence level of each component influences and contributes to the overall confidence of the trial as a whole. At this level such a presentation would be digestible by most people, probably being a picture of various blobs (methodology facets) chained/networked together with their individual confidence levels, arriving at a final overall confidence level at the end. The output, the presentation, would be very simple to understand - that being its whole objective. But the underlying work needed to arrive at such a presentation might be deceptively demanding and detailed; deriving a confidence level for each aspect of a trial methodology, and working out how to combine them, may not be at all trivial.

Some example presentations could then be done. One for a good trial, where people could see how all the various confidence levels within each aspect of the trial, combined to give a good outcome confidence. Maybe another, which was mostly good, but having just one or two low-confidence aspects, and seeing how even that can pull down the whole confidence in a trial, even though the rest of the trial might have been pretty good. Big numbers still tend to drop dramatically when multiplied even once by a number close to zero - :).

The same picture could then be painted for PACE, where so many aspects would be low/zilch confidence, and people could see how any one of them could crucify confidence in its final supposedly positive outcomes; how even the very low figures from the reanalysis would effectively drop to zero once this other overall trial-confidence factor had been applied. How it was not at all a mostly-good trial with some minor blip, but how in fact it used a disastrous methodology from start to finish.

My point about forensic analysis is that getting to a very simple and understandable explanation/presentation of something can often involve a lot of very painstaking and diligent background work. And for clarification (in case my earlier post confused on this point): I am highly aware of, and deeply appreciate, all the existing forensic-level analysis that has already been put into PACE since the FOI data was released, by all parties involved. My comments in these posts were referring to something from a different angle, as per above.
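The chain-of-confidences idea above can be sketched numerically - a minimal illustration with made-up component scores (not derived from any real trial), treating overall trial confidence naively as the product of per-component confidences, so that one near-zero link collapses the whole product:

```python
# Illustrative sketch (hypothetical numbers only): overall trial confidence
# modelled naively as the product of per-component confidence scores in [0, 1].

from math import prod

def overall_confidence(components: dict[str, float]) -> float:
    """Naive multiplicative model: one weak link drags the product down."""
    return prod(components.values())

good_trial = {
    "recruitment": 0.9,
    "blinding": 0.9,
    "outcome_measures": 0.9,
    "analysis_plan": 0.9,
}

# Same trial, but with one low-confidence component (blinding).
one_weak_link = dict(good_trial, blinding=0.05)

print(round(overall_confidence(good_trial), 3))     # 0.656
print(round(overall_confidence(one_weak_link), 3))  # 0.036
```

Even with three strong components, the single weak one drops the overall figure by an order of magnitude - which is the "one weak link brings it all crashing down" intuition in arithmetic form.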
 

cigana

Senior Member
Messages
1,095
Location
UK
I have never looked too deeply into the PACE trial controversy, and I'm no expert in medicine, but I decided today to read Jonathan Edwards' response, which was excellent.

It took me only a few minutes of reading to grasp the huge methodological flaw in PACE. I have to say I'm absolutely shocked that such a blindingly obvious flaw could possibly be inherent in any modern academic proposal, let alone make its way into any journal, let alone the Lancet.

If the response to a therapy is objectively measured, the therapy doesn't need to be blinded, because you're making objective measurements...
If the response to a therapy is subjectively measured, the therapy needs to be blinded to avoid potential bias introduced by the subjectivity.

You don't need to say anything else, it's a complete non-starter. The people who do not understand this simple concept are in charge of my health?

I'm just...shocked...
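The blinding/subjectivity point lends itself to a quick Monte Carlo sketch (all numbers here are hypothetical, chosen only for illustration): with a true effect of zero, a modest expectation shift in self-ratings from knowing one's treatment arm produces a spurious "improvement" on the subjective outcome, while an objective measure stays flat.

```python
# Minimal simulation (hypothetical numbers) of why an unblinded trial with
# subjective outcomes can report "improvement" even when the therapy does
# nothing: patients who know they are in the active arm shift their
# self-ratings, while an objective measure is unaffected.

import random

random.seed(1)
N = 1000
EXPECTATION_BIAS = 0.5  # assumed shift in self-rating from knowing one's arm

def simulate_arm(biased: bool):
    subjective, objective = [], []
    for _ in range(N):
        true_change = random.gauss(0, 1)   # true treatment effect is zero on average
        objective.append(true_change)      # e.g. a measured walking-distance change
        shift = EXPECTATION_BIAS if biased else 0.0
        subjective.append(true_change + shift + random.gauss(0, 1))
    return sum(subjective) / N, sum(objective) / N

subj_treat, obj_treat = simulate_arm(biased=True)
subj_ctrl, obj_ctrl = simulate_arm(biased=False)

print(f"subjective difference: {subj_treat - subj_ctrl:.2f}")  # ~0.5, spurious
print(f"objective difference:  {obj_treat - obj_ctrl:.2f}")    # ~0.0
```

The simulated "treatment effect" on the subjective outcome is entirely an artefact of unblinded expectation, which is exactly why the objective/blinded combinations in cigana's two-line rule matter.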
 

anni66

mum to ME daughter
Messages
563
Location
scotland
Has anyone got access to a copy of Simon Wessely and Brian Everitt's 'Clinical Trials in Psychiatry'.
I refuse to waste £80 on it, and being housebound don't have access to a University library.

I'd be interested to know whether they point out the unscientific nature of unblinded trials with subjective outcome measures. (or any of the other flaws like conflicts of interest, changing recovery criteria etc)

If so, Wessely himself has condemned PACE before it started.

If not, he clearly doesn't understand science. This would help to explain why so many appallingly bad papers are published about ME by psychiatrists and psychologists, and also explain the well known crisis of replicability of psychological research.
I imagine there are many papers and trials for many illnesses that present with complicated symptom lists, some of which are capable of being spun with a psychological angle, and that also suffer from this problem. If perceptions are to be changed, and people are to realise that these emperors have no clothes, perhaps the net needs to be cast wider to encompass other groups that, by their vulnerability, share the same issues.
Proper science will unravel the chemical imbalances, feedback loops and triggers; a wider base may enable it to ramp up a gear or two. Psychiatry and psychology have many fingers in many pies - follow the money.
 

anni66

mum to ME daughter
Messages
563
Location
scotland
The more I read, the more I think that they are just clueless. All I hear is "we did what everyone else did", and "we went through this and that committee", and "we went through peer review" – "and everyone said it was okay – SO WHY ARE YOU SAYING ITS NOT GOOD ENOUGH???" I think they really believe they did a marvellous job on the PACE trial, and genuinely do not understand all the criticism thrown at them.
It's cognitive dissonance - an all-pervasive problem when you have to adjust constructs.