One could say that something in the region of £1 million was thus wasted on an overpowered study.
Dolphin's quoted statement is hard to find in this thread, but it illuminates an internal contradiction in PACE. They went to great lengths and expense to make sure the study had adequate statistical power to demonstrate the subjective results they wanted to see. At the same time, they were so careless with objective tests that they didn't worry when a third of all subjects declined to participate in the 6-minute walk. "Do you feel worse after therapy? Don't worry, we won't test you."
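For a sense of what "adequate statistical power" costs in participants, here is a rough sketch of the underlying arithmetic. The effect size, power target, and significance level below are illustrative assumptions of mine, not figures taken from the PACE protocol.

```python
# Rough power-analysis sketch: participants per arm needed to detect a given
# standardized effect on a subjective questionnaire score.
# All numbers here are illustrative assumptions, not PACE protocol values.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
n_per_arm = power_calc.solve_power(
    effect_size=0.3,            # assumed small effect (Cohen's d)
    alpha=0.05,                 # conventional significance level
    power=0.9,                  # assumed 90% chance of detecting the effect
    alternative='two-sided',
)
print(f"Participants needed per arm: {n_per_arm:.0f}")
# Recruiting well beyond this figure mainly ensures that small differences in
# self-reported scores reach statistical significance.
```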
There is a pattern to this manipulation.
Do actimeters show patients displacing activity from a fixed energy budget?
Drop actimeters entirely.
Do patients report setbacks lasting a week after exercise?
Change criteria for adverse responses to require over two weeks of setback.
Do preliminary results fail to validate your preconceptions?
Change entry criteria for the study without changing recovery criteria.
The only statistically significant results that remain are those demonstrating that researchers can bullyrag patients into saying they are better, when the consequence of disagreeing might be loss of benefits for noncompliance with treatment.
If you flatly refuse to notice that anyone might be made worse, except by the traditional medical standard of dropping dead, you can always find random variation in a positive direction. Do not expect to validate such results. Do not expect them to last.