Those things should be in a museum, not the loft
I've still got an IBM AT with 5 1/4 inch floppy disk drives in a cupboard. Should that be in a museum as well?
Welcome to Phoenix Rising!
Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.
I've still got an IBM AT with 5 1/4 inch floppy disk drives in a cupboard. Should that be in a museum as well?

Sounds like your cupboard is a museum!
There is a fluctuation issue, as you say, but there is also the question of whether someone could walk two days in a row, or would need a month to recover from a walking test. That is probably why the 6MWT was not done at the end by roughly a quarter of all patients.
I read and very much enjoyed this article by Jonathan Edwards - thank you for writing it
It is funny to see how the methodology of some psychology research like this wilts under the proper scientific scrutiny used by, e.g., immunologists...
I came on here to ask Jonathan a question, if he is still around... (or anyone else who wants to let me know)
The PACE trial takes its conclusions from the self-assessed subjective outcomes, and the fact that it is unblinded is the subject of concern in this piece. However, can't more be said of the fact that they did indeed try to obtain objective outcomes, and that these were, as far as we can see, a failure? The way I understand it, the fact that they tried to take objective measures and then mostly abandoned them says a lot about the overall success of the trial, and casts further suspicion on the validity of the subjective results.
Yes, I think the consensus is very much in agreement with you. Tom Kindlon has made this point strongly. There are lots of other criticisms of the trial. I focus on the issue of blinding and subjectivity because it more or less sweeps everything else into the dustpan from the start. But if one concedes the trial is still worth examining then these other points come into play.
The only counter to this is that the recruitment methodology is so poor that one can argue that the trial cannot even give us a negative answer because it may not have recruited a representative cohort. If anyone who gets worse after exercise refused to volunteer, as might be expected, then the whole thing becomes meaningless. It is not even a study of ME/CFS.
What if it's double-blinded dosage? Neither the therapist nor the patient knows what dose they are getting. Under these conditions a dose-response curve would be very telling.

I'm not sure how a dose-response curve could allow one to distinguish between a placebo effect and a real effect on the illness [looking at it from a CFS angle].
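A toy simulation may show why a dose-response curve alone might not settle this: for a therapy whose "dose" is inherently perceivable to the patient, an expectation (placebo) response that scales with the perceived dose can trace the same monotone curve as a genuine effect. Everything below - the functions, slopes, and numbers - is invented purely for illustration.

```python
# Toy model (invented numbers): compare a genuine dose-response with a
# placebo response driven by the *perceived* dose. If expectation grows
# with perceived dose, the two curves can share the same monotone shape,
# so the shape alone cannot separate them.

doses = [0.0, 1.0, 2.0, 3.0, 4.0]

def real_effect(dose):
    # hypothetical genuine response, linear in dose
    return 2.0 * dose

def placebo_effect(perceived_dose):
    # hypothetical expectation-driven response, also rising with the
    # perceived dose (chosen with the same slope on purpose)
    return 2.0 * perceived_dose

real_curve = [real_effect(d) for d in doses]
placebo_curve = [placebo_effect(d) for d in doses]

print("real:   ", real_curve)
print("placebo:", placebo_curve)
# Both curves rise monotonically and are indistinguishable in shape.
```

Under these (deliberately pessimistic) assumptions the two mechanisms produce identical curves; only blinding the perceived dose, or an objective outcome, would pull them apart.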
It feels to me that if the PACE methodology were being forensically analysed, say for educational purposes, then it might be highly beneficial to break the methodology down into all its component parts, and then quantify/qualify the validity of each component as best as possible. I can imagine that a final value for overall confidence might then drop out (at a naive guess) as the multiplication of all the numbers arrived at for each component - for PACE I'm sure it would be infinitesimal. But I suspect it would be very educational to see the contribution made by each component to the overall number. Indeed it might well benefit some currently practising researchers of the BPS kind. Maybe genuinely educate some other well-intentioned but misguided researchers too. Maybe really bring home where all the weak spots are, and that in many ways it is a chain of confidences, needing only one weak link to bring it all crashing down.
Is there a way for us to thank this man, Jonathan Edwards? I have a cousin in London suffering terribly from this illness.
He has been posting in this thread, so I think you just have! Welcome to the forum.
If you want to convince people, this detailed "forensic analysis" is a waste of time. This is why they've gotten away with their questionable research practices for so long - few people care about the details. The key is to focus on the big picture, the strongest argument, and the impact this has. This is why the commentaries of Edwards and Shepherd were important - Edwards emphasising the high likelihood of bias when relying on subjective outcomes without blinding, and Shepherd pointing out the impact that questionable research practices have on patients - loss of trust. (Disclaimer - these are the two main issues I have been talking about for years, so naturally I'm pleased to see such issues brought up in the commentaries - confirmation bias!)

I do agree with you entirely, and you make me realise my post did not get across - at all - what I meant it to. I believe strongly that there should, and could, be a very powerful but much simplified presentation of the various facets (components, I called them) within the methodology of a clinical trial, demonstrating how the confidence level of each component influences and contributes to the overall confidence of the trial as a whole. At this level such a presentation would be digestible by most people, probably being a picture of various blobs (methodology facets) chained/networked together with their individual confidence levels, arriving at a final overall confidence level at the end. The output, the presentation, would be very simple to understand - that being its whole objective. But the underlying work needed to arrive at such a presentation might be deceptively demanding and detailed; deriving a confidence level for each aspect of a trial methodology, and working out how to combine them, may not be at all trivial.
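The "chain of confidences" multiplication proposed above can be sketched in a few lines. The component names and every number below are invented for illustration only - they are not actual assessments of PACE or of any real trial.

```python
# Hypothetical "chain of confidences" sketch: assign each methodological
# component of a trial a confidence in [0, 1] and (naively) multiply
# them into an overall figure. A single weak link drags the whole
# product down. All names and values here are invented.

components = {
    "blinding": 0.1,               # e.g. an unblinded trial
    "outcome objectivity": 0.2,    # e.g. subjective self-report outcomes
    "representative cohort": 0.5,  # e.g. doubtful recruitment
    "stable endpoints": 0.4,       # e.g. criteria changed mid-trial
}

overall = 1.0
for name, confidence in components.items():
    overall *= confidence
    print(f"{name:22s} {confidence:.2f}  running overall: {overall:.4f}")

# The product can never exceed its smallest factor, so one near-zero
# component makes the overall confidence near zero - the weak link.
```

Treating the components as independent and simply multiplying is of course the naive part; a serious version would need to justify both the per-component numbers and the combination rule.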
I imagine that there are many papers and trials for many illnesses that present with complicated symptom lists, some of which are capable of being spun with a psychological angle, and that also suffer from this problem. If perceptions are to be changed, and people are to realise that there are no clothes on these emperors, perhaps the net needs to be cast wider to encompass other groups that, by their vulnerability, share the same issues.

Has anyone got access to a copy of Simon Wessely and Brian Everitt's 'Clinical Trials in Psychiatry'?
I refuse to waste £80 on it, and being housebound don't have access to a University library.
I'd be interested to know whether they point out the unscientific nature of unblinded trials with subjective outcome measures (or any of the other flaws, like conflicts of interest, changing recovery criteria, etc.).
If so, Wessely himself has condemned PACE before it started.
If not, he clearly doesn't understand science. This would help to explain why so many appallingly bad papers are published about ME by psychiatrists and psychologists, and also explain the well known crisis of replicability of psychological research.
The more I read, the more I think that they are just clueless. All I hear is "we did what everyone else did", and "we went through this and that committee", and "we went through peer review" – "and everyone said it was okay – SO WHY ARE YOU SAYING IT'S NOT GOOD ENOUGH???" I think they really believe they did a marvellous job on the PACE trial, and genuinely do not understand all the criticism thrown at them.

It's cognitive dissonance - an all-pervasive problem when you have to adjust constructs.