
PACE Trial statistical analysis plan

Discussion in 'Latest ME/CFS Research' started by biophile, Nov 16, 2013.

  1. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    Probably not important:

    They didn't do this:
    (i.e. "Therapy sessions attended†" and "Specialist medical care sessions attended").

    It is slightly odd they didn't report this. If one looks at:
    part i (just above it):
    They did report median and lower and upper quartile (i.e. IQR), so they could easily have done so in the same table which makes me wonder whether they wanted to hide it e.g. perhaps the figures for CBT or GET were different in some way.

    They didn't report minimum and maximum for either (i) or (ii) either.
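For anyone unfamiliar with the summary statistics being discussed, here is a minimal sketch (with made-up attendance counts, not trial data) of how the median, lower/upper quartiles (IQR), and minimum/maximum are computed:

```python
import numpy as np

# Hypothetical session-attendance counts for one trial arm
# (illustrative numbers only, not PACE data).
sessions = np.array([10, 12, 13, 13, 14, 14, 14, 15, 15, 15])

median = np.median(sessions)
q1, q3 = np.percentile(sessions, [25, 75])  # lower/upper quartiles (IQR bounds)
lo, hi = sessions.min(), sessions.max()     # the unreported minimum and maximum

print(f"median={median}, IQR=({q1}, {q3}), range=({lo}, {hi})")
```

The point being made is that once you have the data in hand, the min/max come out of the same one-line computations as the quartiles, so omitting them from the same table is a choice, not a burden.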
     
    Valentijn likes this.
  2. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
     
    Last edited: Dec 23, 2013
  3. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    Not sure if it's that important but here's the CONSORT flow diagram from the statistical plan and then what they actually published.

I haven't checked the parts before randomisation, but they're quite different below that point. The box for "Allocation Care Providers" is not in the Lancet one.

[Image: CONSORT flow diagram from the statistical analysis plan]

[Image: CONSORT flow diagram as published in The Lancet]
     
  4. Snow Leopard

    Snow Leopard Senior Member

    Messages:
    2,411
    Likes:
    2,060
    Australia

    Would be hard for them to argue their way out of that lol

I bet there are practitioner effects as well; I strongly suspect CBT/GET was delivered in a more positive way than APT.
     
    Dolphin and Valentijn like this.
  5. user9876

    user9876 Senior Member

    Messages:
    792
    Likes:
    1,942
I assumed that by "consistency of effects" they would be talking about the profile of how patients were affected: the consistency of the effect across the patient population, i.e. looking at the distributions that they have not published. When I first read the Lancet paper (before reading anything else about the trial), I thought the increased SDs suggested that some people had better scores while others showed no change or deterioration. I once met a hypnotist who said he works on the basis that some people are very suggestible and will comply with his requests; on others he has no effect.
     
    Dolphin likes this.
  6. Simon

    Simon

    Messages:
    1,525
    Likes:
    4,885
    Monmouth, UK
    Thanks for all your work and analysis on this
Wow, I think that's very important, not least when it comes to looking for consistency across outcomes. This means that neither the CGI rating nor the 6MWT improved significantly for CBT (and remember that SF36 Physical Function didn't improve by a clinically useful amount relative to control either). For GET, the CGI didn't improve significantly, while the 6MWT only improved by a small amount.

I agree it would be good to do an FOI request for these. However, I'm not sure I would have expected them to publish the histograms; I think "presented" could simply mean produced as a standard step in any stats analysis, though it is still worth asking for them.
     
  7. Snow Leopard

    Snow Leopard Senior Member

    Messages:
    2,411
    Likes:
    2,060
    Australia
I personally think that the magic "less than 0.05 = significant" threshold is nonsense. It is clear there is a difference, and a Bonferroni adjustment is not needed in that sense. Of course, it is still not a large difference.

But I do feel frustrated that certain people put so much stock in the minor changes on self-report questionnaires in this trial, yet if it were a non-blinded pharmacological trial, they would repeatedly tell us that the results are meaningless. It is a double standard.

    Addition:

    If someone ever asks me about the PACE trial and CBT/GET in general, I give them a simple answer. I say that if it was a drug, it would not be approved as the evidence base is poor. The reason why the evidence base is poor is because there have been no blinded trials and there is no evidence of objective improvements, which are necessary to demonstrate efficacy in non-blinded trials. Bogging down in the details often just confuses people, so that is the message I give.

    Remember the Ampligen results? The CBT/GET results are no better when taken on the same level (not blinded etc).
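For readers who haven't seen what the Bonferroni adjustment being debated actually does, here is a minimal sketch (illustrative p-values, not figures from the trial):

```python
# Sketch of a Bonferroni adjustment for m comparisons.
# The p-values below are made up for illustration, not taken from PACE.
alpha = 0.05
p_values = [0.004, 0.031, 0.049]
m = len(p_values)

adjusted_alpha = alpha / m  # each test must now clear 0.05/3, roughly 0.0167
significant = [p for p in p_values if p < adjusted_alpha]
print(adjusted_alpha, significant)  # only 0.004 survives the adjustment
```

This is why the adjustment matters for marginal results: p-values just under 0.05 stop counting as significant once the threshold is divided by the number of comparisons.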
     
    Last edited: Dec 23, 2013
  8. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
I have mixed feelings about Bonferroni adjustments: they seem too severe. Similarly, making p<0.05 a strict cut-off is far from ideal.

However, it is they who promised to make the Bonferroni adjustment for the CGI and then didn't do it in the one case that didn't suit them (though they did in the other two cases, SF36PF and CFQ). Also, they make claims based on p<0.05 in other areas and don't report most effect sizes (or odds ratios and the like).

    ETA: I've just noticed they did publish odds ratios for the CGIs and one can see how marginal the comparisons are with APT by the fact that both CIs include 1.0.

    So I accept your basic point about objective measures, but still think it is interesting to highlight what they did and didn't do.
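For anyone unfamiliar with reading odds ratios, a minimal sketch (with hypothetical counts, not PACE data) of how an OR and its approximate 95% CI are computed from a 2x2 table, and why an interval containing 1.0 marks a marginal comparison:

```python
import math

# Hypothetical 2x2 table: improved vs not improved, by trial arm
# (made-up counts, not data from the trial).
a, b = 44, 96    # treatment arm: improved, not improved
c, d = 31, 109   # comparator arm: improved, not improved

odds_ratio = (a * d) / (b * c)
# Standard error of log(OR) via the usual Woolf approximation
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

# If the interval contains 1.0, the comparison is marginal at the 5% level.
print(f"OR={odds_ratio:.2f}, 95% CI=({ci_low:.2f}, {ci_high:.2f})")
```

With these illustrative counts the point estimate favours the treatment, yet the interval straddles 1.0, which is exactly the pattern described above for the CGI comparisons with APT.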
     
    Last edited: Dec 23, 2013
    Valentijn likes this.
  9. Simon

    Simon

    Messages:
    1,525
    Likes:
    4,885
    Monmouth, UK
I agree that it is too simplistic to say that p=0.051 means "nothing doing" and p=0.049 means "woo-hoo, result!", as if these represent two entirely different worlds. I also agree that Bonferroni is too severe, and there are other, less strict ways of correcting for multiple comparisons, e.g. the false discovery rate.

    But don't forget this is a very large study and any decent effect should be significant (confidence intervals shrink as sample size increases); the fact that CGI and Physical Function for CBT failed to reach significance is further evidence that whatever is going on here isn't very impressive.
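To illustrate how the false discovery rate approach is less strict than Bonferroni, here is a minimal Benjamini-Hochberg sketch (illustrative p-values, not trial figures):

```python
# Minimal Benjamini-Hochberg FDR procedure (illustrative p-values only).
def benjamini_hochberg(p_values, q=0.05):
    """Return the p-values declared significant at FDR level q."""
    m = len(p_values)
    ranked = sorted(p_values)
    # Find the largest k with p_(k) <= (k/m) * q;
    # every p-value up to that one is declared significant.
    cutoff = 0.0
    for k, p in enumerate(ranked, start=1):
        if p <= (k / m) * q:
            cutoff = p
    return [p for p in p_values if p <= cutoff]

print(benjamini_hochberg([0.004, 0.019, 0.031, 0.049]))
```

With these four p-values, Bonferroni (0.05/4 = 0.0125) would keep only the first, while Benjamini-Hochberg keeps all four, which is the sense in which it is a less severe correction.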
     
    Valentijn and Dolphin like this.
  10. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    Minor, I imagine:

    They didn't report part (ii) in full

    The line before in the statistical plan they said:
I have no idea whether it's of any importance that they did not adjust based on grade/type in the cost-effectiveness analysis:
    The breakdown was:
    ------
    ETA:
    Actually the cost effectiveness paper has:
    and
    reported:
     
    Last edited: Dec 23, 2013
  11. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
  12. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    Comments in italics:
     
    Valentijn likes this.
  13. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    (Minor)
    This is after they listed the primary analyses.
    I'm not sure they did it?
    I think the primary ones wouldn't have been significant anyway.
     
    Last edited: Dec 23, 2013
  14. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    (Minor)
    As has been highlighted in other threads/information, we only got serious adverse reactions by intervention.

    The only time period reported on was 0 to 52 weeks.
     
    Valentijn likes this.
  15. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    (Possibly minor)

    They did report the numbers with "Withdrawn due to worsening" which they define further as: "withdrawal from treatment due to explicit worsening, or a serious adverse reaction".
    I presume that covers all withdrawals due to adverse events but can we be sure? They use "serious adverse reaction" which means an assessor has to have decided the adverse event was due to the treatment. But a patient might have felt an adverse event was due to a treatment.

    They don't report info on the second part:
    ----
General point: I haven't looked at CONSORT flow charts closely, but I find the one they produced unsatisfactory and not as detailed as some I have seen, which stated what the withdrawals were due to.

    If people withdraw from a trial because a therapy has made them worse, that can bias results. On the other hand, the numbers of withdrawals were small in the trial.

I can't remember how they might have dealt with such situations (some trials use a form of intention-to-treat where baseline values are carried forward; others, if I recall correctly, use sensitivity analyses where they assume bad outcomes for missing data).
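The baseline-carried-forward idea mentioned above can be sketched in a few lines (hypothetical scores, not trial data; None marks a participant whose 52-week measurement is missing after withdrawal):

```python
# Sketch of baseline-observation-carried-forward imputation for withdrawals.
# Scores are made up for illustration; None = missing 52-week measurement.
baseline = [30, 45, 25, 50]
week_52  = [55, None, 40, None]

# Withdrawals contribute their baseline score, i.e. "no change" is assumed.
imputed = [w if w is not None else b for b, w in zip(baseline, week_52)]
print(imputed)  # [55, 45, 40, 50]
```

The design choice matters: assuming "no change" for someone who withdrew because the therapy made them worse understates any harm, which is why some trials instead run sensitivity analyses assuming bad outcomes for missing data.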
    ----
    ETA:
    As pointed out above, they haven't reported the reasons for discontinuation.

    This sounds like it might be a paper?
     
    Last edited: Dec 23, 2013
    Valentijn likes this.
  16. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    "Web Appendix Table D: Description of Serious Adverse Events" was not tabulated in relation to the intervention.
     
  17. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    (the first paragraph here can probably be skipped)

    These (i.e. in the last paragraph) weren't discussed in the cost effectiveness paper

    This wasn't reported on.
     
  18. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    The zero cost analysis doesn't get a mention in the cost-effectiveness paper:

    SMcGrath highlighted the problems with such claims if unpaid care was calculated at the minimum wage in a post here:
    The lead author essentially agreed with SMcGrath, giving some data.
     
  19. Dolphin

    Dolphin Senior Member

    Messages:
    6,868
    Likes:
    6,148
    [TC=Trudie Chalder]

    This contrasts with the Lancet paper
     
    Valentijn likes this.
  20. Snow Leopard

    Snow Leopard Senior Member

    Messages:
    2,411
    Likes:
    2,060
    Australia
Though I don't agree with the practice (since space is no longer limited in many journals), it was typical not to report everything that is in an analysis plan or protocol.

The practice does indeed invite cherry-picking, which is clearly evident in the reporting of the PACE trial. I am surprised at the amount of "will be reported" data that still hasn't been reported almost 3 years later. Given the time that has passed, I wonder what their excuse is!
     
    Valentijn likes this.
