Discussion in 'Latest ME/CFS Research' started by Dolphin, May 12, 2010.
The absence of a previous submission still doesn't answer the question of the several-month gap between expected publication and actual publication. It leaves open the possibility that extra effort was needed for spin-doctoring once the disappointing results came in, since CBT and GET were only about half as effective as the authors expected. I'd still like to know when the authors were unblinded to the data.
Tom Kindlon just put out a new, long, detailed report on CBT/GET: http://www.iacfsme.org/LinkClick.aspx?fileticket=Rd2tIJ0oHqk=&tabid=501
I haven't done more than skim it yet.
Yep, no more than a half-measure. A shame, because if they agreed to make the data open as well, it would be so helpful....
There's a section specifically on PACE in there, along with a lot of other stuff that would be relevant. I'm going to print it out, and give it a good read - it looks too thorough for internet browsing.
Spring 2011, I think. There was a statement on it at the time.
I'm guessing you mean Spring 2010. Sounds familiar.
So about 6 months between White et al seeing the data and expected publication, which somehow became 9 months?
Also, I've only skim-read the PACE section, but yes, well done to Tom Kindlon for the paper on the risks of harm in CBT/GET.
Really, it doesn't matter how long it took them to cook their data - we can see that they did, and drawing attention to that is what matters (and I think I'm feeling a bit frustrated by the difficulty of doing that - ah well.)
People might be interested in this extract:
I've just found some old notes from the SF-36 PF data paper that was used to justify the claim that those scoring just 60 were "back to normal". I'd forgotten most of them (curse my feeble mind), and can't remember if I mentioned them before, but thought I'd post them up.
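For anyone wondering how a score as low as 60 could pass as "normal", here's a very rough sketch of how a mean-minus-1-SD cut-off (the usual way such "normal ranges" are derived) behaves on ceiling-heavy data like the SF-36 PF. The numbers below are invented for illustration only; they are not Bowling et al.'s actual figures.

```python
# Rough illustration (invented numbers, not Bowling et al.'s data) of why
# "mean - 1 SD" on a ceiling-heavy distribution like the SF-36 PF can produce
# a surprisingly low "normal range" cut-off.
import statistics

# Hypothetical SF-36 PF scores for a healthy-population sample: most people
# score near the 100-point ceiling, a minority score much lower, so the
# distribution is strongly skewed.
scores = [100] * 60 + [95] * 15 + [90] * 10 + [70] * 8 + [40] * 4 + [20] * 3

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)
cutoff = mean - sd

print(f"mean = {mean:.1f}, SD = {sd:.1f}, mean - 1 SD = {cutoff:.1f}")
# With skewed data like this, the cut-off lands well below what most healthy
# respondents actually score, so a fairly low score can still fall "within
# the normal range".
```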
I wonder whether the Chalder Fatigue Questionnaire was given before or after the SF-36 PF in PACE?
Sorry - too tired/lazy. The PDF won't let me copy and paste, and typing stuff out is a bit much right now (it was bizarrely tiring).
There's another possibly interesting section on the bottom of page 262 on social desirability bias, but none of this is directly relevant, or likely to have been very significant.
Thanks for the extract Dolphin.
The Bowling et al. PDF seems to be locked to block copying and pasting.
I think "order effects" is the term used to describe how the order of the questions (it is usually used within a questionnaire I think) can have an effect/biasing effect.
IACFS/ME paper talks about social desirability bias a little including with some references, for anyone who missed it: http://www.iacfsme.org/LinkClick.aspx?fileticket=Rd2tIJ0oHqk=&tabid=501
question 1: have you come across any so-called scientific studies where the methodology and conclusions are suspect?
question 2: have you come across any so-called scientific studies where criteria are changed significantly, and after early data has been collected?
question 3: what do you think of the reliability of the study on CFS known as the PACE trial which felt that it was scientific?
No - it seems a perfectly reasonable strategy. I'm great at devising unbiased questionnaires if any of you want to employ me to find out the truth. I can do multi-choice as well.
From Tom's paper:
I have long thought that one factor in the so-called 'successes' of the CBT/GET approach that needs much more careful examination is the 'priming' effect. The potential for (and actuality of) undue pressure of various sorts on patients to report a favourable outcome (or not to report unfavourable outcomes), and the reasons for that, have not been properly factored into the (subjectively assessed) 'positive' results from psychosocial trials.
Given the lack of support from objective measures for the positive subjective results from the psycho-social model, this factor needs close and urgent examination. I don't believe the findings from that will be comforting for the psycho-social advocates.
Tom Kindlon asked me to post this for him.
Tom Kindlon's critique of PACE harm reporting
I'm late to this party but a couple of points stood out for me:
1. Changes to adverse outcomes
PACE made it harder for deterioration to count as an adverse effect:
2. Need for consistency in the reporting of improvements and deteriorations
Basically, if a certain gain is a significant improvement, then a fall by the same amount should be reported as a significant deterioration.
PACE used 0.5 SD as its threshold for a clinically useful difference.
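To make the symmetry point concrete, here's a minimal sketch of how a symmetric 0.5 SD criterion would work. This is not PACE's actual code or data; the SD value and the example scores are hypothetical.

```python
# Sketch of the symmetry argument: if a change of +0.5 SD counts as a
# clinically useful improvement, the same-sized negative change should be
# reported as a clinically useful deterioration.
# The SD and the scores below are hypothetical, for illustration only.

BASELINE_SD = 20.0                 # hypothetical baseline SD of the outcome measure
THRESHOLD = 0.5 * BASELINE_SD      # "clinically useful difference" = 0.5 SD

def classify_change(baseline: float, followup: float) -> str:
    """Classify a score change symmetrically against the 0.5 SD threshold."""
    change = followup - baseline
    if change >= THRESHOLD:
        return "clinically useful improvement"
    if change <= -THRESHOLD:
        return "clinically useful deterioration"
    return "no clinically useful change"

# Example with made-up scores: a 10-point gain and a 10-point loss
print(classify_change(50, 60))  # clinically useful improvement
print(classify_change(50, 40))  # clinically useful deterioration
```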
There's more good stuff in there too, but I'm done for now.
Something has come up on another thread. I recall a re-evaluation of statistics from an early study, possibly by White. Biophile raised the question as to whether I was recalling:
CLOSE ANALYSIS OF A LARGE PUBLISHED COHORT TRIAL INTO FATIGUE SYNDROMES AND MOOD DISORDERS THAT OCCUR AFTER DOCUMENTED VIRAL INFECTION, D.P. Sampson, BSc (Hons), MSc, MBPsychS
I don't think so, but I am not sure. I recall a study out of DePaul University, possibly by Jason (though I can't see it on his list of papers), around 2008. It re-evaluated an earlier study in which different data sets were combined, similar to what is discussed in Sampson. I think they did a statistical re-analysis and found that there was no benefit from CBT/GET. I thought the original study was a White study, circa 2003.
Am I mis-remembering? Does anyone else recall this study? It is nearly 5 am on Christmas morning and I have been looking for it for three hours, so it would be nice to know if my memory is fubar enough to get this wrong. I could be mis-remembering the Sampson study, but I thought it worth checking whether anyone else has a clue about what my memory is insisting is correct.
Is this what you are thinking of?
DePaul studies are generally listed here:
A lot of them aren't listed on PubMed, so that's a better place to look for their studies, I think.
Thanks Dolphin. I had already looked there, but I don't recall the name of the paper, which makes it hard to identify. It is also hard to get the text of that paper online, so I can't be sure. Bye, Alex
You can also try a Google Site Search