@Dolphin
Not meant to be personal criticism. I'm quite aware that my thinking is not always what it should be.
Perhaps you can help me catch errors before I go further in making a fool of myself. Please correct any errors below. Here's the impression I get when I sit back from reading the PACE documents:
I am not sure how many patients improved, and what is considered a clinically significant benefit.
I am not sure how many patients declined, and what is considered a clinically significant harm.
I am not sure how the rate of hospitalizations compares to anything else, and I certainly don't know what caused them.
If the distributions in the various arms of the study were normal, as is commonly assumed, I could deduce a good bit from the two summary parameters published (presumably mean and standard deviation). If, as seems almost certain, they are not normal, then I have considerable doubt about the effect of outliers.
Were the modest objective improvements the result of a few positive outliers that had a disproportionate effect on the mean? Could these be patients who should have had different diagnoses?
Were there any negative outliers at all? If so, were these examples of harm, or at least potential for harm?
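To make the point about the two published parameters concrete: under normality, mean and standard deviation alone determine the fraction of patients above any threshold, but very different samples can share the same mean and SD. A minimal sketch, using entirely hypothetical numbers (not PACE data, and a made-up threshold):

```python
from statistics import NormalDist, mean

# Hypothetical figures for illustration only -- not taken from PACE.
# Suppose an arm reports mean = 58, SD = 15 on a 0-100 scale, and
# "clinically significant benefit" is defined as scoring >= 75.
# Assuming normality, the fraction above that threshold follows directly:
arm = NormalDist(mu=58, sigma=15)
frac_above = 1 - arm.cdf(75)
print(f"Estimated fraction above threshold: {frac_above:.1%}")

# But the same mean can arise from very different samples.
# One extreme responder can match the mean of uniform modest gains,
# while most patients in that sample barely move at all:
typical  = [55, 56, 57, 58, 59, 60, 61]   # uniform, modest scores
outliers = [50, 50, 51, 51, 52, 52, 100]  # one extreme positive outlier
print(mean(typical), mean(outliers))      # identical means
```

This is why the distributional shape matters: the published summary statistics cannot distinguish the first sample from the second.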
What would it take for the authors to declare that a patient was harmed?
If you can't answer such questions after reading a scientific document claiming objective results, either the research is badly flawed or the write-up is absolutely atrocious.