There's certainly a lot in this, but I think it's overstated. Specifically, I think the PACE trial was capable of showing that CBT or GET worked - had they actually been effective - if it had been set up and, in particular, interpreted properly; there's an example below of how that could look.
OK, so the trial data that's been published so far shows that PACE CBT/GET did not work, but I don't think that is a meaningless or uninterpretable finding. In fact, I think PACE is good evidence that CBT/GET aren't much use in ME/CFS.
Certainly it's much harder to set up and interpret an unblinded study that relies on self-report measures - a problem that applies to much psychological research. But I don't think it's by any means impossible, if researchers are aware of the pitfalls and interpret results accordingly.
@Jonathan Edwards, I don't agree that there are no good methods for trialling behavioural interventions. There are, but they need good control arms (a "dummy" therapy that's promoted as heavily as the one of interest), and they need a range of measures, not just self-report. Still, appreciate your support on the broader issues here!
I hear what you are both saying, Simon and Woolie, and I suspect constructive debate could get us further on this, but I pretty much stand my ground.
For PACE to show that CBT works, I think either the controls would have had to be different (rather as Woolie suggests, and I will come back to that) or the primary endpoint would have had to be more sophisticated. As we have discussed before, I think they should have used something like the ACR grading for rheumatoid arthritis, where you need to satisfy both a subjective and an objective criterion to get a score. Another option is several 'primary' endpoints with a built-in Bonferroni adjustment, if you want to say that satisfying any of them would be worthwhile. I am not sure what the original primary endpoint was. If it was purely objective that would have done, but I don't think it was.
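For what it's worth, the Bonferroni idea here is simple arithmetic: if you allow k co-primary endpoints and declare success when any one of them is met, each has to be tested at alpha/k to keep the overall error rate honest. A minimal sketch in Python - the endpoint names and p-values are purely illustrative, not from PACE:

```python
# Bonferroni adjustment for several 'primary' endpoints, where meeting
# any one of them counts as trial success. Illustrative numbers only.

alpha = 0.05                      # overall type I error we are willing to accept
endpoints = {                     # hypothetical endpoint -> observed p-value
    "self-reported fatigue": 0.030,
    "6-minute walk distance": 0.008,
    "employment status": 0.200,
}

adjusted_alpha = alpha / len(endpoints)   # each endpoint tested at alpha/k

for name, p in endpoints.items():
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{name}: p={p:.3f} vs threshold {adjusted_alpha:.4f} -> {verdict}")

# The trial 'succeeds' if any endpoint clears its adjusted threshold.
success = any(p < adjusted_alpha for p in endpoints.values())
print("Overall:", "success" if success else "failure")
```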
The PACE authors will have known from prior experience that CBT is not magic. They knew they were looking for a fairly modest effect. You say that it would be hard to nudge the subjective endpoint enough to get a bigger difference, Simon, so a bigger difference would have been convincing. But the sad truth is that this is no good. People lie and fiddle data all the time in science. The main motivation for both therapists and patients in a trial like this is not truth - it is some personal agenda, often rather pressing: a job, or continued care. Principal investigators may be seekers after the truth if you are lucky, and single-site trials can be fairly free of gerrymandering, but with multicentre trials all hell is let loose. I know from experience. Your only hope is blinding treatments, because fiddling then just becomes noise rather than factitious results. Over the years in drug trials we have learnt this the hard way. Looking back, I realise I used to fiddle data all the time, without really knowing I was, before we started doing all samples blind and so on.
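To make the "fiddling becomes noise" point concrete: imagine assessors who nudge scores upward whenever they believe a patient got the real treatment. Unblinded, those nudges all land in one arm and manufacture an apparent effect; blinded, they land at random and merely widen the error bars. A toy simulation, with made-up numbers and a deliberately zero true effect:

```python
import random

random.seed(1)

def trial(blinded: bool, n: int = 200, nudge: float = 1.0) -> float:
    """Mean outcome difference (treatment - control) when the true effect is zero.

    Each assessor adds `nudge` to scores they *believe* belong to the
    treatment arm. Unblinded, belief matches the true arm; blinded,
    belief is a coin flip, so the nudge averages out across both arms.
    """
    def score(true_arm_is_treatment: bool) -> float:
        base = random.gauss(0, 1)                      # no real treatment effect
        believed_treatment = (true_arm_is_treatment if not blinded
                              else random.random() < 0.5)
        return base + (nudge if believed_treatment else 0.0)

    treatment = [score(True) for _ in range(n)]
    control = [score(False) for _ in range(n)]
    return sum(treatment) / n - sum(control) / n

print(f"unblinded apparent effect: {trial(blinded=False):+.2f}")  # ~ +1.0, pure bias
print(f"blinded apparent effect:   {trial(blinded=True):+.2f}")   # ~ 0, bias is now noise
```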
There is also something very odd about CBT. You used the words 'showing CBT worked', Simon. But what we want to know is whether it 'works', or more specifically whether it will work. The problem is that we have absolutely no way of measuring whether the CBT used in PACE was the same as will be used next time. For drugs we know the chemical formula. For CBT we know pretty much nothing.
CBT is not just the content of the information and rational argument delivered by a therapist. If it were, all that would be needed would be to give patients books or videos explaining it all. What the trial purported to test was the value of the additional interview process with the therapist. But, as an eminent colleague of the PACE authors pointed out to me, there are virtually no therapists in the UK trained to provide CBT of the sort recommended. And what is worse still is that we have no way of knowing whether or not the difference matters. So there would have been no scientific content in showing 'CBT worked', because it would not be generalisable to prediction.
And of course this is a double-edged sword, because if that eminent colleague is right, then for all we know most of the patients in the PACE trial had 'incompetent CBT', so the trial did not show that CBT is of no use.
Coming to Woolie's point, I agree that we need different controls, but I find it hard to believe that dummy therapies will be any good. As you say, the dummy would need to be promoted as heavily as the test therapy. But how are you going to get therapists to convince patients that they themselves are equally convinced of the value of both the test and the dummy? It is pretty easy to tell when someone is bullshitting. When I looked into this for physiotherapy, I concluded that one would have to recruit new individuals with no training in physio theory or practice and teach them both the test and the dummy procedures without telling them which was being tested in the trial - i.e. they would not even be allowed to see the title of the trial. And for psychotherapy I suspect that what effect there is depends very much on the patient thinking that the therapist genuinely has long experience of the treatment and of its results in previous research or practice, so bringing in 'virgin therapists' would defeat the object. I honestly do not see a way to do it that would convince those of us who have seen how bias leaks in everywhere unless you make it impossible.
And coming back to measures, I agree we need a range, but as I said above, if we want to avoid Bonferroni and suchlike, and arguments about what really matters, I think psychology needs to follow something like the ACR system, which requires both subjective and objective hurdles to be crossed. Maybe such instruments exist, but I have not heard of them for ME.
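To make the ACR analogy concrete, here is a minimal sketch of a composite responder criterion requiring both a subjective and an objective hurdle before a patient counts as improved. The measures and thresholds are invented for illustration; nothing like this was defined in PACE:

```python
# Composite responder criterion in the spirit of the ACR system:
# a patient counts as improved only if BOTH a subjective and an
# objective hurdle are cleared. Measures and thresholds are invented.

def is_responder(fatigue_change: float, walk_change_m: float) -> bool:
    subjective_hurdle = fatigue_change <= -4   # e.g. >=4-point drop on a fatigue scale
    objective_hurdle = walk_change_m >= 50     # e.g. >=50 m gain on a walking test
    return subjective_hurdle and objective_hurdle

# A patient who reports feeling better but walks no further does not count:
print(is_responder(fatigue_change=-6, walk_change_m=5))    # False
# Improvement has to show up on both kinds of measure:
print(is_responder(fatigue_change=-6, walk_change_m=80))   # True
```

The appeal of this design is that the self-report nudging described above can no longer carry the result on its own: it has to be corroborated by a measure the patient and therapist cannot easily talk into existence.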