Discussion in 'Latest ME/CFS Research' started by biophile, Nov 16, 2013.
So the PIs and a group of people they could order around (I imagine). Maybe this is normal enough.
They don't mention changes, e.g. 1a & 1b were altered (see protocol below)
(I could miss bits)
Some of the secondary efficacy outcomes were dropped or changed (no reason given here, nor is it highlighted):
(I could miss bits)
is a useful table to keep track of what they measured and when.
A little description:
Dutch research shows CBT fails to achieve this (on average).
Also, it shows why actometers as an outcome should have been kept.
Confirmation of what we knew: the principal investigators (PDW, MS, TC) had plenty of ways to get a feel for how things were going.
For letter writers:
I notice they use the acronym SF-36PF for the SF-36 physical functioning subscale. This is one character shorter than SF-36 PF.
I was just reading some of the publication history/reviewer comments (I've been putting off something more useful):
Looks like an admission that the weak MCIDs were just pulled out of their arses:
I didn't really follow this bit (stats is not a strong point):
Here's a copy of that section from their original submission (I haven't checked whether anything changed for the final paper). Pardon the funny formatting - I fixed a lot!
Has that info been released? I don't remember histograms for the SF-36 PF and Chalder Fatigue outcomes.
I wonder if this data would have been of interest:
Thanks for that, Esther12.
I don't recall seeing any histograms of them.
(Possibly me being overly awkward.) I find it slightly interesting to reflect on the fact that they justified such a big trial, which cost a lot of money (£5m), including close to £1m extra (I think it was) for an extension to recruit more participants, based on power calculations that were then no longer relevant once they changed how they analysed the trial (using continuous rather than categorical measures).
Put another way, they might have been able to do what they did in a way that cost £1 or £2 million less.
None of this was reported, despite the paper on the health economics (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0040808) being published in PLOS One.
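To illustrate the point about power calculations, here is a rough stdlib-only sketch (the effect sizes and improvement rates are invented for illustration, not PACE's actual assumptions): a trial powered on a continuous outcome typically needs far fewer participants per arm than one powered on a dichotomised "improved yes/no" version of the same outcome.

```python
# Illustrative only - these parameters are NOT PACE's actual assumptions.
# Sample size per arm at 5% two-sided alpha and 80% power, comparing a
# continuous outcome with a dichotomised ("improved yes/no") outcome.
from math import asin, ceil, sqrt
from statistics import NormalDist

z_alpha = NormalDist().inv_cdf(1 - 0.05 / 2)  # ~1.96
z_beta = NormalDist().inv_cdf(0.80)           # ~0.84

# Continuous outcome: detect a 0.5 SD difference between two groups.
d = 0.5
n_continuous = ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Dichotomised outcome: detect 60% vs 70% "improved" (Cohen's h effect size,
# arcsine transformation of the two proportions).
h = 2 * asin(sqrt(0.70)) - 2 * asin(sqrt(0.60))
n_binary = ceil(2 * ((z_alpha + z_beta) / h) ** 2)

print(f"per arm, continuous outcome: {n_continuous}")
print(f"per arm, dichotomised outcome: {n_binary}")
```

Under these illustrative numbers the dichotomised version needs several times more participants per arm, which is why switching from categorical to continuous analysis after recruitment makes the original sample-size justification moot.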
Minor point probably
If one looks at:
one can see it mentions summary data e.g.
(people rated these individually on scales that weren't yes/no)
they aren't in a collapsed/summary form:
However, in the paper, they did collapse them:
I'm not an expert on equipoise, but I'm not sure it held in this case.
I'm not convinced this was the position of the investigators.
For example, why is APT treated differently from CBT and GET in the analysis? APT is compared to CBT, and APT is compared to GET, but CBT generally isn't compared to GET.
The trial protocol had:
If I recall correctly, there is material in the manuals that promotes CBT and GET.
It's not much good recording expectations if you don't publish them.
They also said:
It might be worthwhile for somebody to request this data.
I can't see any place where this is given (I've also checked the appendix of the Lancet paper).
If I recall correctly, the CONSORT trial profile (Figure 1) often/usually has such data (indeed, I think CONSORT encourages it?), but all we get is "withdrew" and the numbers.
I think this one might be reasonably important:
If one looks at the Lancet paper, it appears they have only done it for fatigue and disability and not participant-rated CGI.
They repeat in point 4 that CGI is supposed to be adjusted for:
One can see them doing it for fatigue and disability in Table 3. All apart from one of the p values are simply multiplied by 5 (the odd one out is a p value of .38, which becomes .99 rather than 1.9; p values can only be between 0 and 1, so this makes sense).
However, Table 5 has: Participant-rated clinical global impression of change in overall health
Odds ratio (positive change vs negative or minimum changes)
Compared with specialist medical care:
  APT: 1·3 (0·8–2·1); p=0·31
  CBT: 2·2 (1·2–3·9); p=0·011
  GET: 2·0 (1·2–3·5); p=0·013
Compared with adaptive pacing therapy:
  CBT: 1·7 (1·0–2·7); p=0·034
  GET: 1·5 (1·0–2·3); p=0·028
I think both of the CBT and both of the GET results would no longer be statistically significant with Bonferroni adjustment (basically multiplying the p values by 5 at that level).
I wonder what they meant/had in mind with regard to consumer opinions. They didn't show much interest in, or empathy with, opinions expressed on the papers.
The consistency of effects point is interesting. If one looks at objective measures, the results are fairly consistent for CBT, with no benefit over APT or SMC. GET did show improvement on the six-minute walking test, but not dramatically.
Quite a lot of analyses have not been reported, as highlighted. There were quite a lot of opportunities to do so, e.g. the Lancet paper had an appendix, there was no word limit for the PLOS One paper, the trial had its own website, etc.
Probably not that important:
I don't think this was done.
I was just thinking a bit more about the lack of the histograms:
By not publishing histograms, they were able to make claims about "return to normal" that others couldn't easily see were dubious.
Similarly with regard to recovery, histograms would likely show that recovery (at 85, 90, 95 or 100 on the SF-36 PF, for example) was not that common.
These might be good things to do a Freedom of Information Act request on. The histograms should already have been created, so there should be no work involved in anyone preparing them.
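To make the histogram point concrete, here is a small sketch with invented scores (illustrative only, not trial data) showing what a distribution plus threshold counts would reveal that a group mean hides: how many participants actually sit at or above a candidate "recovery" cut-off.

```python
# Illustrative only - these SF-36 PF scores are invented, NOT trial data.
# Shows what a histogram and threshold counts reveal that group means hide.
from collections import Counter

scores = [30, 35, 40, 45, 50, 50, 55, 60, 60, 65, 70, 75, 80, 85, 90, 95]

# Crude text histogram in 10-point bins.
bins = Counter((s // 10) * 10 for s in scores)
for lo in range(30, 100, 10):
    print(f"{lo:3d}-{lo + 9}: {'#' * bins.get(lo, 0)}")

# Proportion at or above each candidate "recovery" threshold.
for threshold in (85, 90, 95, 100):
    frac = sum(s >= threshold for s in scores) / len(scores)
    print(f">= {threshold}: {frac:.0%}")
```

With a published histogram, anyone could do this threshold count by eye; without it, claims about how many participants reached "normal" functioning can't be independently checked.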
Given the level of missing data for the six-minute walking test, it is disappointing that no information has been given on this, including analyses to see whether the data were missing at random or whether the group without data differed in some way from the group that completed the test.
I think the level of missing data is much higher for the six-minute walking test than for the other measurements, although I don't imagine pro-rating was done.
(i) Losses to follow-up weren't reported by centre, as far as I know.
(ii) No narrative summaries of the reasons were given.
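A hedged sketch of the simplest version of the missingness check being asked for: compare a baseline characteristic (here, baseline walking distance, with invented numbers) between participants who did and did not complete the follow-up test. A large standardised difference would suggest the data are not missing completely at random.

```python
# Hedged sketch with INVENTED numbers - not trial data. Compares baseline
# 6-minute walking distance (metres) between those who completed the
# follow-up test and those with missing follow-up data.
from statistics import mean, stdev

completed = [350, 380, 400, 410, 420, 430, 450, 460]
missing_followup = [280, 300, 310, 330, 340, 360]

def standardised_difference(a, b):
    """Difference in means divided by the pooled standard deviation."""
    pooled_sd = (((len(a) - 1) * stdev(a) ** 2 + (len(b) - 1) * stdev(b) ** 2)
                 / (len(a) + len(b) - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

d = standardised_difference(completed, missing_followup)
print(f"standardised difference at baseline: {d:.2f}")
```

In this invented example the two groups differ substantially at baseline, which is exactly the kind of pattern a published missing-data analysis would either confirm or rule out.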