A couple of quick points; sadly I have failed to keep up with the thread due to distractions, so apologies if this has already been noted.
There has been discussion on Bad Science of the possibility that the patients could have shown a kind of placebo effect - that maybe they [said they] got a bit better just because they believed in the treatment. The point was made that the patients' pre-trial expectations of CBT were lower than their expectations of the 'usual treatments', and the conclusion was drawn that this suggests the effect didn't apply.
I suspect it may work like this. The outcome measures are ultimately all in the nature of the question "are you content with this treatment?" - indirectly that question lies behind all the measures, because patients who feel positively towards the people involved are more likely to say positive things about them. So...
If one went into a treatment with very low expectations, any apparent positive experience would seem more significant relative to those expectations than it would to someone who went in with naively high hopes and got absolutely nowhere. "Pleasantly surprised" versus "hopes dashed" could perhaps explain those statistically small effects? If you don't expect much, you won't be disappointed...
More general point: I think it's time to get some Good Science onto the Bad Science forum. They have a little superficial discussion going, as always, but it is at least nominally science-based, and surely, surely to goodness there are some points our top people could make there that would make at least a few people think again?
All that moving of goalposts halfway through, in particular - anything that would stand out clearly as bad practice to a proper scientist. Stark facts like those about the actometers and the changing of outcome measures - the really killer points that suggest bad practice - should be their stock in trade, and I can't see how they'd wriggle out of the actometer point. On what planet is it good science to say clearly in public that you're going to measure things one way, and then, after your results come in, change all those definitions so as to make your study say what you want it to say?
Note that any truth spoken there has to be definitively referenced and backed up. In general they're not interested in going looking for whatever truth there is in claims they are biased against, just in picking holes in anything they don't happen to like that doesn't fit the rules of their game - but some of them do occasionally seem to take bits of the truth on board when they're handed to them on a plate, in their own language...
There are people on Bad Science who are representing... but it could always use a few more, and this seems as good a study as any to 'go large' on if we're talking Bad Science. The usual health warnings re: BS apply, though: thick-skinned advocates only. Their forum is dedicated to the "great british sport of moron-baiting", and many of them seem to like nothing better than mocking vulnerable people (my psychoanalysis: they tend to lack empathy and they like/need to feel superior to someone), so don't anybody let 'em get to you - pegs on noses and sick bags at the ready...