Thanks. Game over. Somebody has just got in touch and showed me the e-mail Peter White sent to them, in which he said it hadn't been submitted to another journal.
I don't think this would make much difference to our situation:
Results of publicly funded research will be open access - science minister
http://www.guardian.co.uk/science/2011/dec/08/publicly-funded-research-open-access
The government say they want all publicly-funded research published in open access journals.
But I thought this was already supposed to be the case.
We need the data to be open-source, not just the results!
Tom Kindlon just put out a new, long, detailed report on CBT/GET: http://www.iacfsme.org/LinkClick.aspx?fileticket=Rd2tIJ0oHqk=&tabid=501
I haven't more than skimmed it yet.
(Replying to: "I'd still like to know when the authors were unblinded to the data.")
Spring 2011, I think. There was a statement on it at the time.
(Replying to: "Spring 2011, I think. There was a statement on it at the time.")
I'm guessing you mean Spring 2010. Sounds familiar.
(Replying to: "I'm guessing you mean Spring 2010. Sounds familiar.")
Oops! Yes.
(Replying to: "Tom Kindlon just put out a new, long, detailed report on CBT/GET: http://www.iacfsme.org/LinkClick.aspx?fileticket=Rd2tIJ0oHqk=&tabid=501 I haven't more than skimmed it yet.")
There's a section specifically on PACE in there, along with a lot of other stuff that would be relevant. I'm going to print it out and give it a good read - it looks too thorough for internet browsing.
People might be interested in this extract:
"The CBT group only increased from an average 6MWD of 333m to 354m, the same change as the SMC group; the GET cohort went from 312m to 379m, or an (adjusted) increase of 35.3 metres compared to SMC. Both sets of figures make one wonder what percentage of participants had a high rate of compliance, especially when the final 6MWDs were still much lower than 644m, the predicted value for an age- [39 years] and gender-matched [77% female] cohort of average height [176.5cm (male), 163.1cm (female)] (226,227)."
It is possible that if disease-specific questions are asked before generic health status items, then respondents' generic health status ratings would be more favourable. This is because the disease items had already been considered by respondents and therefore excluded in replies to the generic items.
Thanks. I've just found some old notes from the SF-36 PF data paper that was used to justify the claim that those scoring just 60 were "back to normal". I'd forgotten most of them (curse my feeble mind), and can't remember if I mentioned them before, but thought I'd post them up.
I wonder whether the Chalder Fatigue questionnaires were given before or after the SF-36 PF in PACE?
...
Sorry - too tired/lazy. The PDF won't let me copy and paste, and typing stuff out is a bit much right now (it was bizarrely tiring).
There's another possibly interesting section on the bottom of page 262 on social desirability bias, but none of this is directly relevant, or likely to have been very significant.
http://jpubhealth.oxfordjournals.org/content/21/3/255.full.pdf+html
Thanks for the extract Dolphin.
"Simultaneously, therapists were told that adverse effects of GET were due to inappropriately planned or progressed exercise programmes without mention of what these effects were and were instructed to encourage patients to focus on their symptoms less (33). This sort of priming could easily lead to instances of symptoms not being reported. Previous research has demonstrated that questionnaires, and especially interviews, addressing sensitive topics, including physical activity, are susceptible to social desirability response bias (154-157,217). Participants consciously or unconsciously (self-deception) present inaccurate information about themselves to conform to what they believe the researcher expects. Participants may also worry that their performance or demeanor during the trial might negatively influence their physicians treatment of them (the consent form included release of information to patients regular healthcare providers) and receipt of disability benefits. Thus, given the way information was conveyed to both patients and therapists, it is possible that there may be underreporting of adverse effects."
"Also, there has not been space to cover some of the possible effects, such as coercion of patients to participate in GET or CBT, that poor reporting might produce."
(May be re-posted/please re-post)
Given all the positive feedback that I have received since Friday, I
would like to acknowledge some more people who gave input at various
stages of my recently published paper* [along with Lily Chu, the
reviewers and Amberlin Wu RIP, all of whom I thanked already in the
paper].
I first started writing it in August 2010 and it went through many drafts. People gave varying amounts of input, but even reading the paper once and giving one or two comments (most people did more than this) took some effort given the length.
So thanks to:
(in alphabetical order by first name)
Alison Deegan Kindlon, Andrew Kewley, Clara Valverde, Deborah Waroff, Ellen Goudsmit, George Faulkner, Greg & Linda Crowhurst, Jane Colby, Janelle Wiley, Jennie Spotila, Joan Crawford, Karen M. Campbell, Karl Krysko, Kelly Latta, Pat Fero, Peter Kemp, Sean**, Simon McGrath & Susanna Agardy.
Also thanks to some other people who gave help but who felt their
input wasn't enough to be mentioned. Thanks too to many other people I
have learned from over the years.
Regards,
Tom Kindlon
* Kindlon T. Reporting of Harms Associated with Graded Exercise
Therapy and Cognitive Behavioural Therapy in Myalgic
Encephalomyelitis/Chronic Fatigue Syndrome. Bulletin of the IACFS/ME,
Fall 2011;19(2):59-111.
Available at: http://bit.ly/rZCSMW i.e.
<http://www.iacfsme.org/BULLETINFALL2011/Fall2011KindlonHarmsPaperABSTRACT/tabid/501/Default.aspx>
** That's all he wanted.
(p81) The researchers did not explain why they changed or did not report on pre-specified adverse
outcomes from the 2006 PACE Final Protocol. Originally, adverse outcomes were defined as a
score of 5-7 on the self-rated Clinical Global Impression (PCGI) or a drop of 20 points on the
SF-36 physical function score (187) from the prior measurement (201). By the time the Lancet
paper was published, serious deterioration in health is defined as (90):
a short form-36 physical function score decrease of 20 or more between baseline and any
two consecutive assessment interviews; scores of much or very much worse on the
participant-rated clinical global impression change in overall health scale at two
consecutive assessment interviews...
(p82)... Instead, a serious deterioration now necessitates a change from the baseline score at two consecutive assessment interviews. Given that there are 12 weeks between the first and second assessment and 26 weeks
between the second and third assessment and that the baseline scores for the four arms of the
trial all averaged below 40, a participant's score would on average need to sustain a drop of
more than 50% of their function over a period of at least 12 weeks to qualify as a serious
deterioration in health.
Another effect of this change is that any declines after 24 weeks would not be counted as there is only one more assessment, at 52 weeks, after 24 weeks.
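The arithmetic in the extract above can be sketched in a few lines. This is a minimal illustration, not the trial's own analysis code: the 20-point drop, the two-consecutive-assessments rule, and the sub-40 average baselines come from the quoted text, while the individual scores below are hypothetical.

```python
# Sketch of the Lancet paper's "serious deterioration" criterion as
# described in the extract. Assessment scores here are hypothetical;
# assessments are assumed to fall at 12, 24 and 52 weeks per the text.

def serious_deterioration(baseline, assessments):
    """True if the SF-36 PF score dropped 20 or more points below
    baseline at two consecutive assessment interviews."""
    below = [score <= baseline - 20 for score in assessments]
    return any(a and b for a, b in zip(below, below[1:]))

baseline = 38  # the four trial arms averaged below 40 at baseline

# A 20-point drop from such a baseline is already over half of the
# participant's measured function:
print(20 / baseline)  # ~0.53

# A drop at 12 weeks that recovers by 24 weeks is not counted:
print(serious_deterioration(baseline, [15, 30, 30]))  # False

# A drop sustained across the 12- and 24-week assessments is counted:
print(serious_deterioration(baseline, [15, 14, 30]))  # True

# A decline appearing only at the final (52-week) assessment can never
# be counted, since no later assessment exists to confirm it:
print(serious_deterioration(baseline, [35, 35, 5]))  # False
```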
(p82) The justification for using the 0.5 SD threshold comes from a 2002 paper by Guyatt
et al. but Guyatt also points out that the same threshold could be used for deteriorations (223);
unfortunately data on such deteriorations (e.g. participants who declined 8 points on the SF-36)
are not given. Likewise, if it was felt one could not be confident a deterioration had occurred
based on a measurement at one point in time, it suggests one should also probably not be
confident a participant has improved (the phrase in the paper) using one time point.
... it would seem reasonable if there was
consistency in the reporting of improvements and deteriorations, with symmetrical clinically
useful difference scores and time periods unless there are clear rationales given to do otherwise.
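To make the symmetry point concrete, here is a minimal sketch. The 8-point figure is the 0.5 SD clinically useful difference mentioned in the extract; the scores themselves are illustrative.

```python
# Sketch of the asymmetry described above: an 8-point change (0.5 SD on
# the SF-36 PF) counted as a clinically useful improvement from a single
# time point, but the symmetric 8-point decline was not reported.

THRESHOLD = 8  # 0.5 SD, per Guyatt et al. (2002) as cited in the extract

def clinically_useful_improvement(baseline, final):
    # The direction that was reported in the trial.
    return final - baseline >= THRESHOLD

def clinically_useful_deterioration(baseline, final):
    # The symmetric rule; data for this were not given.
    return baseline - final >= THRESHOLD

print(clinically_useful_improvement(38, 46))    # True
print(clinically_useful_deterioration(38, 30))  # True, but unreported
```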
(Replying to: "Is this what you are thinking of?")
Something has come up on another thread. I recall a re-evaluation of statistics from an early study, possibly by White. Biophile raised the question as to whether I was recalling:
CLOSE ANALYSIS OF A LARGE PUBLISHED COHORT TRIAL INTO FATIGUE SYNDROMES AND MOOD DISORDERS THAT OCCUR AFTER DOCUMENTED VIRAL INFECTION, D.P. Sampson, BSc (Hons), MSc, MBPsychS
I don't think so, but I am not sure. I recall a study out of DePaul University, possibly Jason (but I can't see it on his list of papers), around 2008. It re-evaluated an earlier study where different data sets were combined, similar to that discussed in Sampson. I think they did a statistical re-analysis and found that there was no benefit from CBT/GET. I thought the original study was a White study, circa 2003.
Am I mis-remembering? Does anyone else recall this study? It is nearly 5am on Christmas morning and I have been looking for it for three hours; it would be nice to know if my memory is fubar enough to get this wrong. I could be mis-remembering the Sampson study, but I thought it worth checking to see if anyone else has a clue about what my memory is insisting is correct.
Bye, Alex
Song S, Jason LA. A population-based study of chronic fatigue syndrome (CFS) experienced in differing patient groups: An effort to replicate Vercoulen et al.'s model of CFS. J Ment Health 2005, 4:277-289.