PACE Trial and PACE Trial Protocol

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
a bit OT but I thought some of you would appreciate this
[attached image: voytek-running-arms.JPG]

https://plus.google.com/113234126401294172677

of course, in PACE there was no "obliged to disregard... as outliers"

I like the version in the article (at the end of the quote) a bit better:

There is also the fallacy that poorly-defined (scientifically speaking) concepts such as "creativity" can be accurately studied neuroscientifically. How do you operationalize creativity, and more importantly how do you know that what you're seeing in the brain in response to your measure of creativity is the thing you think you're measuring? There's often no way to validate this.

When you ask something like "where is creativity in the brain" you assume that researchers can somehow isolate creativity from other emotions and behaviors in a lab and dissect it apart. This is very, very difficult, if not impossible. Neuroimaging (almost always) relies on the notion of cognitive subtraction, which is a way of comparing your behavior or emotion of interest (creativity) against some baseline state that is not creativity.

Imagine asking "where is video located in my computer?" That doesn't make any sense. Your monitor is required to see the video. Your graphics card is required to render the video. The software is required to generate the code for the video. But the "video" isn't located anywhere in the computer.

But if activity in that region increases as you're "more creative", clearly that's strong evidence for the relationship between that brain region and creativity, right?

Just like how when your arms swing faster when you run that means that your arms are "where running happens".
http://blog.ketyov.com/2012/06/defending-jonah-lehrer.html

A scientist who can think and criticise his own field. Imagine.
 

Don Quichotte

Don Quichotte
Messages
97
A scientist who can think and criticise his own field. Imagine.

This is one of the major differences between a good scientist and a bad one.

This kind of criticism of psychological research is not new.

In fact, I heard this many years ago:

A behavioral psychologist trained a flea to jump when he said "jump".
He then took off one of its legs and repeated the experiment.
The flea jumped less well, but still did it.
He then took off two legs, and then three.
The flea still managed to jump reasonably well with one leg.
So, he took that leg off too.
He said "jump" and the flea did nothing, so he screamed at it louder, and still nothing.
He then wrote: a flea with no legs is deaf.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Nothing new here, but on reading slides from an old White lecture, the PACE section stood out to me:

Primary Outcomes

Summary stats on fatigue and disability

Clinically significant?
Fatigue (50% reduction in fatigue or a score of 3 or less)
SF-36 (a score of 75 or 50% increase from baseline)

http://www.meactionuk.org.uk/Bergen-Treatment-2009.pdf

I think that's taken directly from the protocol, Esther, so that was their original planned measure of success, which they abandoned for obvious reasons.
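The protocol's improvement criteria quoted in the slide can be sketched as a pair of checks. This is only a sketch: the scale ranges (bimodal Chalder scoring, 0-11; SF-36 physical function, 0-100) are background assumptions, and the function names are illustrative, not from the slide.

```python
# Sketch of the original protocol's "clinically significant improvement"
# thresholds quoted in the slide above. Scale ranges and function names
# are illustrative assumptions, not taken from the slide itself.

def fatigue_improved(baseline, final):
    """Fatigue criterion: a 50% reduction from baseline,
    or a final score of 3 or less."""
    return final <= baseline / 2 or final <= 3

def sf36_improved(baseline, final):
    """SF-36 physical function criterion: a final score of at
    least 75, or a 50% increase from baseline."""
    return final >= 75 or final >= baseline * 1.5

print(fatigue_improved(8, 4))   # True: score halved
print(sf36_improved(40, 60))    # True: 50% increase from baseline
print(sf36_improved(50, 70))    # False: neither 75+ nor a 50% increase
```

Note how much stricter these are than the post-hoc "normal range" thresholds used in the published paper.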
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
I gather they were advised to up the SF-36 target to 75 when they upped the intake criterion to 65 or less. Originally, I believe, when the intake criterion was 60 or less, the target was 70. This is from memory, which, as you know, is utterly infallible. ;)

The alternative fatigue target of 3 or less was irrelevant. The entry requirement was a score of 6 or more, and the target was to halve that score. Anyone achieving 3 or less would automatically satisfy the requirement to halve their score. Another example of their lack of clarity of thought.
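Graham's point can be checked exhaustively: since entry required a fatigue score of 6 or more (assuming the 0-11 bimodal Chalder scoring), half the entry score is always at least 3, so any final score of 3 or less automatically satisfies the halving criterion and the alternative target adds nothing.

```python
# Check: for every possible entry score of 6 or more on a 0-11 scale,
# a final score of 3 or less always satisfies the halving criterion
# too, so the "3 or less" alternative target is redundant.
redundant = all(
    final <= entry / 2
    for entry in range(6, 12)   # entry requirement: 6 or more
    for final in range(0, 4)    # alternative target: 3 or less
)
print(redundant)  # True
```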
 

Esther12

Senior Member
Messages
13,774
Bob: Yeah, it was just funny seeing these measures being promoted so heavily in a presentation when they were abandoned and replaced with such watered-down measures for the final paper. I think I only posted it for fun rather than because it provided new information.

Graham - I have a similar memory too.
 

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
Have we seen this before? PDW uses GET in Lupus patients, rates them improved as measured by Chalder Fatigue Scale, but not by any of the other measures used (though claims a trend for everything in the abstract):

http://rheumatology.oxfordjournals.org/content/42/9/1050.long

they, however, were able to exercise longer after treatment. but their actual disease didn't improve, naturally.

maybe someone who is able to analyze the stats could double-check the claims here. not that I don't trust PDW or anything ;)

interestingly, they see fit to detail reasons for dropouts here. there are a significant number of them, though.
 

oceanblue

Guest
Messages
1,383
Location
UK
Have we seen this before? PDW uses GET in Lupus patients, rates them improved as measured by Chalder Fatigue Scale, but not by any of the other measures used (though claims a trend for everything in the abstract):

http://rheumatology.oxfordjournals.org/content/42/9/1050.long

they, however, were able to exercise longer after treatment. but their actual disease didn't improve, naturally.

maybe someone who is able to analyze the stats could double-check the claims here. not that I don't trust PDW or anything ;)

interestingly, they see fit to detail reasons for dropouts here. there are a significant number of them, though.
Just a few quick comments after skimming that paper:
  1. The Graded Exercise group showed no improvement in physical fitness level (treadmill test) relative to either the relaxation or the no-intervention control.
  2. The Graded Exercise group showed no improvement in fatigue (Chalder or VAS) relative to the relaxation control - gains were relative to the 'no intervention' control only, and so liable to self-report/placebo effects.
 

Dolphin

Senior Member
Messages
17,567
(Not very exciting)

This may have been said before but just came across a hand-written note I made once (before paper was published) so thought I'd throw it out: all the results on the 6 minute walking test improved: the SMC+CBT, SMC+APT and SMC by almost exactly the same amount (the SMC alone did best).

There are a few reasons why this could have happened, which I think have been mentioned before: people "pacing"* the test better; natural average improvement over time among diagnosed people; drop-outs, meaning some of the people who might struggle to do it well might not do it; a desire to show oneself/others that one has been a "good" patient and improved; etc.

Another mechanism might be that, having done the test once, people might be more focused in the run-up to it, e.g. take it a bit easier the day before and on the day of the test, just as one might before a competitive race. Or people might take it easier the day before, or earlier that day, not out of competitive instinct but because they remember the payback from the earlier test and don't want to go through that again.

I think there was a Dutch study using pedometers which showed that activity decreased the day before an exercise test.

*"pacing" in the sense of a race i.e. not going too fast at the start and collapsing, or going slow at the start and having loads of energy at the end
 

biophile

Places I'd rather be.
Messages
8,977
Found a perfect new employee for PACE to manage their PR spin: http://www.news.com.au/business/wor...-media-nightmare/story-e6frfm9r-1226405644929 .
Alldis' blunt admission about how modern PR agencies try to work their spin into news stories triggered a wave of criticism and resulted in her issuing an apology.

Maybe they don't need her, though: the PR on the PACE Trial was already so effective that it was regarded as evidence of the highest caliber, utterly flawless, and conducted by caring researchers beyond reproach, whereas people who questioned it were generally regarded as hate-fueled irrational/emotional/ideological extremists, even prone to criminal activities. Seriously, it doesn't get much better than that in terms of PR.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
One of the ideas I am examining is the notion that a potential differentiator of science from pseudo-science is how it is communicated. Science uses peer-reviewed journals and carefully written, accurate press releases. Pseudo-science uses persuasive rhetoric and spin - it's about convincing people by any means available.
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
Yes, I'd tend to agree Alex, but where does that leave us and our PACE review? I'd try spinning, but I'd only fall over.
 

oceanblue

Guest
Messages
1,383
Location
UK
There's an excellent Nature article on open data (separate thread started here) with this quote, that might strike a chord:
Too often, we scientists seek patterns in data that reflect our preconceived ideas. And when we do publish the data, we too frequently publish only those that support these ideas. This cherry-picking is bad practice and should stop.
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
There's an excellent Nature article on open data (separate thread started here) with this quote, that might strike a chord:
Too often, we scientists seek patterns in data that reflect our preconceived ideas. And when we do publish the data, we too frequently publish only those that support these ideas. This cherry-picking is bad practice and should stop.​

Talking of which, with respect to the 40% or so claimed to have reached a 'normal' physical function score, you might expect, a priori, that this would be reflected by a normal performance on the only objective measure - the 6MWT - i.e. managing around 600 metres distance.

The mean scores might conceal a much greater improvement for this group, and personally I would have presented a separate analysis to highlight the point if this were the case.

I don't recall seeing such an analysis?
 

biophile

Places I'd rather be.
Messages
8,977
There's an excellent Nature article on open data (separate thread started here) with this quote, that might strike a chord: "Too often, we scientists seek patterns in data that reflect our preconceived ideas. And when we do publish the data, we too frequently publish only those that support these ideas. This cherry-picking is bad practice and should stop."

Didn't PACE claim to have chosen all the outcomes in the first publication, and omitted the rest, before examining the data?

Changes to the original published protocol were made to improve either recruitment or interpretability, such as changing the proposed composite primary outcomes to single continuous scores. The analysis was guided by a Statistical Analysis Strategy (which we intend to publish), which was completed before analysis of outcome data, and which was much more detailed than the plan in the protocol; this is now conventional in the conduct of clinical trials. The eight secondary outcomes presented in our paper were selected for clinical relevance. All these decisions and plans were approved by the Trial Steering Committee, were fully reported in our paper, and were made before examining outcome data to avoid outcome reporting bias.

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60651-X/fulltext

This makes it sound like the eight secondary outcomes were chosen for initial publication "before examining outcome data to avoid outcome reporting bias", but I'm not 100% clear that is the case, and somehow it even seems unlikely. How did they decide that employment outcomes were less important than everything else mentioned in the initial paper? So the (subjective) Work and Social Adjustment Scale was decided to be more relevant than the actual employment data? If all the remaining outcomes published in a hypothetical future paper end up showing less benefit than the ones already published / allegedly chosen a priori, that smell of fish wafting around might actually be fish.

IIRC, 16 months later the "Statistical Analysis Strategy" hasn't been published as promised and FOI requests to determine what was discussed in the "Trial Steering Committee" have all failed?
 

user9876

Senior Member
Messages
4,556
IIRC, 16 months later the "Statistical Analysis Strategy" hasn't been published as promised and FOI requests to determine what was discussed in the "Trial Steering Committee" have all failed?

Has there been an FoI request for the statistical analysis strategy?

To me, their complete lack of openness around the results and how they processed them, coupled with their definition of 'normal', suggests that the results they had didn't say what they wanted, hence they were scrambling around for something positive to say.
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
NICE will be reviewing their advice on ME/CFS in August 2013. Call me suspicious if you like, but what are the odds that the second PACE analysis won't come out much before then, with insufficient time to respond?

You don't have to call me "suspicious if you like": I will accept "suspicious" instead.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
NICE will be reviewing their advice on ME/CFS in August 2013. Call me suspicious if you like, but what are the odds that the second PACE analysis won't come out much before then, with insufficient time to respond?

You don't have to call me "suspicious if you like": I will accept "suspicious" instead.

Do you know if that means they start their review in August 2013, or finish it in August 2013?

What kind of opportunity do we have to submit information to NICE?

I think and hope that the process should be open to all evidence submissions.
 