• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


PACE Trial and PACE Trial Protocol

biophile

Places I'd rather be.
Messages
8,977
A lack of a previous submission still doesn't answer the question of the several-month gap between expected publication and actual publication. It leaves open the possibility that extra effort was required for spin-doctoring once the disappointing results came in, since CBT and GET were only about half as effective as the authors expected. I'd still like to know when the authors were unblinded to the data.
 

Battery Muncher

Senior Member
Messages
620
I don't think this would make much difference to our situation:

Results of publicly funded research will be open access – science minister
http://www.guardian.co.uk/science/2011/dec/08/publicly-funded-research-open-access

The government say they want all publicly-funded research published in open access journals.
But I thought this was already supposed to be the case.
We need the data to be open-source, not just the results!

Yep, no more than a half-measure. A shame, because if they agreed to make the data open as well, it would be so helpful...
 

biophile

Places I'd rather be.
Messages
8,977
So about six months between White et al. seeing the data and the expected publication, which somehow became nine months?

Also, I've only skim-read the PACE section so far, but yes, well done to Tom Kindlon for the paper on risks of harm in CBT/GET.
 
Messages
13,774
Really, it doesn't matter how long it took them to cook their data - we can see that they did, and drawing attention to that is what matters (and I think I'm feeling a bit frustrated by the difficulty of doing that - ah well.)
 

Dolphin

Senior Member
Messages
17,567
Tom Kindlon just put out a new, long, detailed report on CBT/GET: http://www.iacfsme.org/LinkClick.aspx?fileticket=Rd2tIJ0oHqk=&tabid=501

I haven't done more than skim it yet.
There's a section specifically on PACE in there, along with a lot of other stuff that would be relevant. I'm going to print it out, and give it a good read - it looks too thorough for internet browsing.
People might be interested in this extract:
The CBT group only increased from an average 6MWD of 333m to 354m, the same change as the SMC group; the GET cohort went from 312m to 379m, or an (adjusted) increase of 35.3 metres compared to SMC. Both sets of figures make one wonder what percentage of participants had a high rate of compliance, especially when the final 6MWDs were still much lower than 644m, the predicted value for an age- [39 years] and gender-matched [77% female] cohort of average height [176.5cm (male), 163.1cm (female)] (226,227).
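To put those final walking distances in context, here is a quick sketch of the percent-of-predicted arithmetic, using only the figures quoted in the extract above (354m and 379m against the 644m matched-cohort prediction Kindlon cites):

```python
# Final six-minute walking distances (6MWD) from the extract above,
# expressed as a percentage of the 644 m predicted for an age- and
# gender-matched healthy cohort of average height.
PREDICTED_6MWD_M = 644  # predicted value cited in the extract (refs 226, 227)

final_6mwd = {"CBT": 354, "GET": 379}  # metres, from the extract

for arm, metres in final_6mwd.items():
    pct = 100 * metres / PREDICTED_6MWD_M
    print(f"{arm}: {metres} m = {pct:.0f}% of predicted")
# CBT: 354 m = 55% of predicted
# GET: 379 m = 59% of predicted
```

So even the best-performing arm finished at under 60% of the predicted distance for a comparable healthy group, which is the point the extract is making.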
 
Messages
13,774
I've just found some old notes from the SF36 PF data paper that was used to justify the claim that those scoring just 60 were "back to normal". I'd forgotten most of them (curse my feeble mind), and can't remember if I mentioned them before, but thought I'd post them up.

It is possible that if disease-specific questions are asked before generic health status items, then respondents' generic health status ratings would be more favourable. This is because the disease items had already been considered by respondents and therefore excluded in replies to the generic items.

I wonder whether the Chalder Fatigue questionnaire was given before or after the SF-36 PF in PACE?

...

Sorry - too tired/lazy. The PDF won't let me copy and paste, and typing stuff out was a bit much right now (it was bizarrely tiring).

There's another possibly interesting section on the bottom of page 262 on social desirability bias, but none of this is directly relevant, or likely to have been very significant.

http://jpubhealth.oxfordjournals.org/content/21/3/255.full.pdf+html

Thanks for the extract Dolphin.
 

Dolphin

Senior Member
Messages
17,567
Thanks.
The Bowling et al. PDF seems to be locked to block copying and pasting.

I think "order effects" is the term used to describe how the order of questions (usually within a questionnaire, I think) can have a biasing effect.

The IACFS/ME paper talks a little about social desirability bias, including some references, for anyone who missed it: http://www.iacfsme.org/LinkClick.aspx?fileticket=Rd2tIJ0oHqk=&tabid=501
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
Order effects?
Question 1: have you come across any so-called scientific studies where the methodology and conclusions are suspect?
Question 2: have you come across any so-called scientific studies where criteria were changed significantly after early data had been collected?
Question 3: what do you think of the reliability of the study on CFS known as the PACE trial, which felt that it was scientific?

No - it seems a perfectly reasonable strategy. I'm great at devising unbiased questionnaires if any of you want to employ me to find out the truth. I can do multi-choice as well.
 

Sean

Senior Member
Messages
7,378
From Tom's paper:

"Simultaneously, therapists were told that adverse effects of GET were due to inappropriately planned or progressed exercise programmes without mention of what these effects were and were instructed to encourage patients to focus on their symptoms less (33). This sort of priming could easily lead to instances of symptoms not being reported. Previous research has demonstrated that questionnaires, and especially interviews, addressing sensitive topics, including physical activity, are susceptible to social desirability response bias (154-157,217). Participants consciously or unconsciously (self-deception) present inaccurate information about themselves to conform to what they believe the researcher expects. Participants may also worry that their performance or demeanor during the trial might negatively influence their physicians' treatment of them (the consent form included release of information to patients' regular healthcare providers) and receipt of disability benefits. Thus, given the way information was conveyed to both patients and therapists, it is possible that there may be underreporting of adverse effects."

"Also, there has not been space to cover some of the possible effects, such as coercion of patients to participate in GET or CBT, that poor reporting might produce."

I have long thought that one factor in the so-called 'successes' of the CBT/GET approach that needs much more careful examination is the 'priming' effect. The potential for (and actuality of) undue pressure of various sorts on patients to report a favourable outcome (or not to report unfavourable outcomes), and the reasons for that, have not been properly factored into the (subjectively assessed) 'positive' results from psycho-social based trials.

Given the lack of support from objective measures for the positive subjective results from the psycho-social model, this factor needs close and urgent examination. I don't believe the findings from that will be comforting for the psycho-social advocates.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Tom Kindlon asked me to post this for him.

(May be re-posted/please re-post)

Given all the positive feedback that I have received since Friday, I
would like to acknowledge some more people who gave input at various
stages of my recently published paper* [along with Lily Chu, the
reviewers and Amberlin Wu RIP, all of whom I thanked already in the
paper].

I first started writing it in August 2010 and it went through many drafts.
People gave various amounts of input but even reading the paper once
and giving one or two comments (most people did more than this) took
some effort given the length.

So thanks to (in alphabetical order by first name):
Alison Deegan Kindlon, Andrew Kewley, Clara Valverde, Deborah Waroff, Ellen Goudsmit, George Faulkner, Greg & Linda Crowhurst, Jane Colby, Janelle Wiley, Jennie Spotila, Joan Crawford, Karen M. Campbell, Karl Krysko, Kelly Latta, Pat Fero, Peter Kemp, Sean**, Simon McGrath & Susanna Agardy.
Also thanks to some other people who gave help but who felt their
input wasn't enough to be mentioned. Thanks too to many other people I
have learned from over the years.

Regards,

Tom Kindlon


* Kindlon T. Reporting of Harms Associated with Graded Exercise
Therapy and Cognitive Behavioural Therapy in Myalgic
Encephalomyelitis/Chronic Fatigue Syndrome. Bulletin of the IACFS/ME,
Fall 2011;19(2):59-111.
Available at: http://bit.ly/rZCSMW i.e.
<http://www.iacfsme.org/BULLETINFALL2011/Fall2011KindlonHarmsPaperABSTRACT/tabid/501/Default.aspx>

** That's all he wanted.
 

oceanblue

Guest
Messages
1,383
Location
UK
Tom Kindlon's critique of PACE harm reporting

I'm late to this party but a couple of points stood out for me:

1. Changes to adverse outcomes

PACE made it harder for deterioration to count as an adverse effect:
(p81) The researchers did not explain why they changed or did not report on pre-specified adverse outcomes from the 2006 PACE Final Protocol. Originally, adverse outcomes were defined as a score of 5-7 on the self-rated Clinical Global Impression (PCGI) or a drop of 20 points on the SF-36 physical function score (187) from the prior measurement (201). By the time the Lancet paper was published, serious deterioration in health is defined as (90): a short form-36 physical function score decrease of 20 or more between baseline and any two consecutive assessment interviews; scores of much or very much worse on the participant-rated clinical global impression change in overall health scale at two consecutive assessment interviews...

(p82) ... Instead, a serious deterioration now necessitates a change from the baseline score at two consecutive assessment interviews. Given that there are 12 weeks between the first and second assessment and 26 weeks between the second and third assessment, and that the baseline scores for the four arms of the trial all averaged below 40, a participant's score would on average need to sustain a drop of more than 50% of their function over a period of at least 12 weeks to qualify as a serious deterioration in health.

Another effect of this change is that any decline after the 24-week assessment could not be counted, since the only remaining assessment is at 52 weeks.
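The practical difference between the two definitions can be sketched in code. This is purely illustrative, with invented scores (not PACE data): the protocol definition flags a 20-point drop from the prior measurement, while the Lancet definition requires being 20 or more points below baseline at two consecutive assessments.

```python
# Illustrative only: comparing the 2006 protocol definition of an adverse
# outcome (>= 20-point SF-36 PF drop from the prior measurement) with the
# Lancet paper's definition (>= 20 points below baseline at two consecutive
# assessments). All scores below are invented.

def protocol_flag(baseline, scores, drop=20):
    """Protocol definition: a drop of >= `drop` from the prior measurement."""
    prior = baseline
    for s in scores:
        if prior - s >= drop:
            return True
        prior = s
    return False

def lancet_flag(baseline, scores, drop=20):
    """Lancet definition: >= `drop` below baseline at two consecutive assessments."""
    below = [baseline - s >= drop for s in scores]
    return any(a and b for a, b in zip(below, below[1:]))

# A hypothetical participant who improves at 12 weeks, crashes by 24 weeks,
# and has partially recovered by the final 52-week assessment:
baseline = 40
scores = [45, 15, 25]  # weeks 12, 24, 52

print(protocol_flag(baseline, scores))  # True: the 45 -> 15 drop counts
print(lancet_flag(baseline, scores))    # False: never 20 below baseline twice in a row
```

The same trajectory is an adverse outcome under the protocol definition but not under the published one, which is exactly the asymmetry the extract describes.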

2. Need for consistency in the reporting of improvements and deteriorations

Basically, if a certain gain is a significant improvement, then a fall by the same amount should be reported as a significant deterioration.

PACE used 0.5 SD as its threshold for a clinically useful difference:
(p82) The justification for using the 0.5 SD threshold comes from a 2002 paper by Guyatt et al., but Guyatt also points out that the same threshold could be used for deteriorations (223); unfortunately data on such deteriorations (e.g. participants who declined 8 points on the SF-36) are not given. Likewise, if it was felt one could not be confident a deterioration had occurred based on a measurement at one point in time, it suggests one should also probably not be confident a participant has "improved" (the phrase in the paper) using one time point.

... it would seem reasonable if there was consistency in the reporting of improvements and deteriorations, with symmetrical clinically useful difference scores and time periods unless there are clear rationales given to do otherwise.

There's more good stuff in there too, but I'm done for now.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Something has come up on another thread. I recall a re-evaluation of statistics from an early study, possibly by White. Biophile raised the question as to whether I was recalling:

CLOSE ANALYSIS OF A LARGE PUBLISHED COHORT TRIAL INTO FATIGUE SYNDROMES AND MOOD DISORDERS THAT OCCUR AFTER DOCUMENTED VIRAL INFECTION, D.P. Sampson, BSc (Hons), MSc, MBPsychS

I don't think so, but I am not sure. I recall a study out of DePaul University, possibly by Jason (but I can't see it on his list of papers), from around 2008. It re-evaluated an earlier study where different data sets were combined, similar to that discussed in Sampson. I think they did a statistical re-analysis and found that there was no benefit from CBT/GET. I thought the original study was a White study, circa 2003.

Am I mis-remembering? Does anyone else recall this study? It is nearly 5am Christmas morning, I have been looking for it for three hours, it would be nice to know if my memory is fubar enough to get this wrong. I could be mis-remembering the Sampson study, but I thought it worth checking to see if anyone else has a clue about what my memory is insisting is correct.

Bye, Alex
 

Dolphin

Senior Member
Messages
17,567
Is this what you are thinking of?
Song S, Jason LA. A population-based study of chronic fatigue syndrome (CFS) experienced in differing patient groups: An effort to replicate Vercoulen et al's model of CFS. J Ment Health 2005, 4:277-289

DePaul studies are generally listed here:
http://condor.depaul.edu/ljason/cfs/

A lot of them aren't PubMed-listed, so that's a better place to look for their studies, I think.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Thanks Dolphin, I had already looked there, but I don't recall the name of the paper, so it's a problem - it makes it hard to identify. It is also hard to get the text of that paper online, so I can't be sure. Bye, Alex