A critical commentary and preliminary re-analysis of the PACE trial

Also, if I understood this better, then maybe I would also see where the "13% of participants qualified as recovered on this revised criterion before the trial even began" fits into this graph.
That would be the number of people who entered the trial with an SF-36 score between 60, the recovery threshold, and 65, the entry threshold.

And I believe Hutan has it right, the y-axis is percentage of working age general population.
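To make that overlap concrete, here is a minimal sketch (my own illustration, not anything from the paper) of how a participant could meet the revised entry criterion and the revised recovery threshold on the SF-36 physical function scale at the same time. The baseline scores below are made up purely for illustration; the 13% figure quoted above comes from the actual trial data.

```python
# Sketch only: how an entry criterion of SF-36 <= 65 and a revised
# "recovery" threshold of SF-36 >= 60 overlap, so anyone entering the
# trial with a score of 60-65 already counted as "recovered" at baseline.

ENTRY_MAX = 65      # revised trial entry criterion: SF-36 physical function <= 65
RECOVERY_MIN = 60   # revised "recovery" threshold: SF-36 physical function >= 60

# Hypothetical baseline scores, not real trial data
baseline_scores = [20, 35, 45, 50, 55, 60, 60, 65, 30, 40]

eligible = [s for s in baseline_scores if s <= ENTRY_MAX]
already_recovered = [s for s in eligible if s >= RECOVERY_MIN]

print(f"{len(already_recovered)}/{len(eligible)} participants "
      f"({100 * len(already_recovered) / len(eligible):.0f}%) met the "
      f"'recovery' threshold on this scale before treatment began")
```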
 

Tom Kindlon

Senior Member
Messages
1,734

RogerBlack

Senior Member
Messages
902
Yes, and our paper discusses the problem of relying on self-reports in an unblinded trial. We point out that in these circumstances the authors should have paid more attention to objective measures of function, such as walking distance, physical fitness and sickness benefit (where there were no gains, apart from a small one in GET walking).

Thanks go to everyone, centrally and peripherally involved in bringing this to fruition.

'where there were no gains'.
I recall comments about hints in the data that participants who had withdrawn may have had their condition worsen.
Is it safe to assume that nothing meaningful can be said about this due to limitations of the released data?
Was there any indication that they kept data on withdrawal reasons?
 
Messages
2,391
Location
UK
And I believe Hutan has it right, the y-axis is percentage of working age general population.
Sounds right, based on the graph's explanatory notes (which I should have digested in the first place):-
[attached image: the graph's explanatory notes]
 
Messages
2,391
Location
UK
This paper represents a collaborative effort between psychology researchers and patients
I think it is a massive plus that the lead author is a psychology researcher. It clarifies big-time that good-science psychology researchers do exist, and that they also agree PACE is a dog pile. Moreover it gives the bad-science researchers reason to consider they are not immune from critical peer review from within their own discipline. And it shows the ME/CFS community actively embraces all good-science researchers, of any discipline.
 
Messages
2,391
Location
UK
This publication really brings home to me what I have been reading lots about since joining PR, and has been slowly filtering through my grey matter. I fully appreciate that a trial such as PACE is impossible to do blind; it has to be unblinded. It would take an awful lot of brain fog to not have some clue what sort of treatment you were getting, be you patient or researcher. Fine so far. But the question is what do you do about that? The PACE trial authors had a simple solution - ignore it! (In fact ignoring questions they do not like seems to be the standard strategy). Indeed they seemed to exploit it, with what in effect were some fairly heavy sales pitches.

I am no expert, but from what I read, and based on common sense, you need to reference these "floating" subjective readings against some unambiguous objective measurements; without absolutes to reference against, the subjective values can mean more or less what you want them to mean.

It is a bit like a car whose electrics have a bad "earth" (quotes because it is not really earth, but it is always known as that). All the voltages are referenced off the car's earth, either directly or indirectly, and all works well if the earth (the reference datum) is good. But if the earth is bad, then the voltages can float all over the place, and the various circuits start to see voltages quite different from what they were designed to expect. Basically, if you do not have good reference points then everything else is suspect.
 

Anne

Senior Member
Messages
295
Yes, "Fatigue..." is now connected to PubMed though not every article gets on there.

Who (or what if it's an algorithm) decides which articles will get on PubMed?

When will we know if this one does, do you know?

(sorry for not knowing these things...)
 

Denise

Senior Member
Messages
1,095
Who (or what if it's an algorithm) decides which articles will get on PubMed?

When will we know if this one does, do you know?

(sorry for not knowing these things...)

@Anne - I am sorry, I don't know how or why certain articles get onto PubMed.
It has seemed to me that there is sometimes a lag between publication and when an article appears on PubMed but this is only what I believe I have observed.
I hope someone else here on PR might know the answers.
 

JaimeS

Senior Member
Messages
3,408
Location
Silicon Valley, CA
Messages
3,263
It's important to keep in mind that this re-analysis only corrects one layer of bias. Underneath this there is another layer of bias which is related to the lack of blinding, an inadequate control group and reliance on subjective measures. One cannot correct this flaw but only point out that even the meager results we're seeing are exaggerated.

This is an important aspect because these things by themselves are enough to produce highly misleading results.
Yes, I totally agree, @A.B. These recovery definition errors are only the tip of the iceberg. Even if you corrected all of the problems described in this paper, there would still be huge, even deeper flaws in the data. I see there's a brief mention of the problem of using subjective endpoints in a non-blinded study, but they don't go into too much detail on this.
 
Messages
3,263
This is one of the comments made by Ellen Goudsmit that @AndyPR was angry about.

She judges the quality of a paper by only one criterion: "was it done by me?" If yes, then it is good work. If not, it can only be the result of those unknowledgeable patients who understand nothing.
With friends like her, there is no need for enemies...
Indeed. Reading those comments was almost like listening to the PACE authors themselves. The underlying narrative was that "patients" should pull their heads in and only "researchers" or "psychologists" (like the commenter) have the appropriate qualifications to speak. Even though this paper was headed by a psychology researcher, this is played down - the word "patients" is used at every opportunity when describing the authors. Perhaps even patients being involved in a collaborative project makes it problematic in Goudsmit's eyes? It should be "us" versus "them"?

We don't need this kind of snobbery coming from our so-called allies any more than we need it from the BPS crowd.
 

taniaaust1

Senior Member
Messages
13,054
Location
Sth Australia
Full text at http://www.tandfonline.com.sci-hub.cc/doi/abs/10.1080/21641846.2017.1259724

I haven't read it closely yet, but it seems to be laying out the major flaws with the PACE "Recovery" paper (2013). So there's a lot of material which is familiar to us. I don't think it talks about the initial 2011 paper much, which is the one that covers improvement instead of recovery.

But it's very good to see a discussion of the PACE flaws published, since doctors, therapists, and politicians who can't understand or evaluate research papers themselves will need to hear it from a journal. Hopefully this will be useful to show to doctors, and be considered by evidence review panels, such as NICE. And it makes a nice rebuttal to BPS quacks raving about how great PACE is :rolleyes:

There are a couple of graphs that illustrate the questionnaire threshold problems very well:

View attachment 18701

View attachment 18702

This was clearly fraud. Can nothing legally be done about it? Can people keep doing false studies which lead to our patient group being harmed, and keep getting away with it? They should at the very least be forced to publicly apologise for purposely putting out misleading research, and to think that public research money got used for this.

They should be banned from ever doing any more research OF ANY KIND
 

RogerBlack

Senior Member
Messages
902
This was clearly fraud. Can nothing legally be done about it? Can people keep doing false studies which lead to our patient group being harmed, and keep getting away with it? They should at the very least be forced to publicly apologise for purposely putting out misleading research, and to think that public research money got used for this.
It doesn't really work that way alas.
Absent a smoking gun, the absolute worst that is likely to happen is a retraction of the PACE papers.
(which would of course be great).
Note, for example, that Wakefield was never prosecuted; the only sanction he faced was losing his medical licence, even though his research was indirectly responsible for perhaps 10,000 deaths. http://antivaccinebodycount.com/

In Wakefield's case, which led to him losing his medical licence, he did many, many things wrong:
Misused various funding, and lied to various funding bodies.
Did not reveal that he was involved in court action against MMR when instigating the trials.
Did not get ethics committee approval for all the trial patients.
Actively lied in several respects to many organisations about how the patients were selected.
...

His conduct on the Lancet paper alone was considerably worse than that of White et al.
Unless someone reveals a smoking gun (say, an email reading 'I hate people who claim to have CFS and am intentionally lying in this paper'), anything more than a retraction isn't happening.

Unfortunately.

There is a gap between what would be unambiguous fraud (if, for example, there were evidence that they had altered the data) and what can be brushed off as 'our analysis was poorly chosen'.

Fraud requires intent to mislead.
I unfortunately do not doubt that White et al believe the initial paper to be true.
 
Messages
2,158
Fraud requires intent to mislead.
I unfortunately do not doubt that White et al believe the initial paper to be true.

You're probably right, unfortunately, that the PACE people will claim they believed their results to be 'true'. I'm not at all convinced of this. I think they were influenced by their greed (financial and career) and their prejudice against pwme.

Wessely has been known to use the metaphor of setting a ship off from one port and having to adjust its route along the way in order to arrive at the desired destination, unwittingly (witlessly) pointing out exactly the problem with PACE.

See, for example these blog pieces by a US academic lawyer, Steve Lubet:

http://www.thefacultylounge.org/2016/11/the-pace-study-open-access-and-conflicts-of-interest.html

http://www.thefacultylounge.org/201...simon-wessely-defender-of-the-pace-study.html

To quote the latter in a comment addressed to Wessely:

'Finally, you point to your own blog post, which ironically undermines your very point. You compare the PACE Trial to an ocean liner plotting a course from Southampton to New York, and express satisfaction that it made the trip “successfully across the Atlantic,” despite course corrections along the way.

But surely you realize that a randomized controlled study is not supposed to have a fixed destination, but rather should follow wherever the evidence – or the current, to maintain the metaphor -- leads. You thus virtually admit that the PACE Trial was always intended to reach a particular result, and that adjustments along the way were necessary to get it there. Just so.'
 
Messages
91
I have just read the comments by Ellen Goudsmit about this article on Facebook. The strange thing is that she doesn't say anything of substance; there is no real criticism of the paper, even though there are things you can criticise, just as with any other paper. Ellen just moans and whinges. To me it all comes down to one word: jealousy. The question one should ask is this: the original PACE paper was published at the beginning of 2011, with more delightful "honest" work published since. Why hasn't Ellen Goudsmit published a review of the PACE trial in all that time? (Yes, I know she has published a few comments, but that's something totally different.)
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
I have just read the comments by Ellen Goudsmit about this article on Facebook. The strange thing is that she doesn't say anything of substance; there is no real criticism of the paper, even though there are things you can criticise, just as with any other paper. Ellen just moans and whinges. To me it all comes down to one word: jealousy. The question one should ask is this: the original PACE paper was published at the beginning of 2011, with more delightful "honest" work published since. Why hasn't Ellen Goudsmit published a review of the PACE trial in all that time? (Yes, I know she has published a few comments, but that's something totally different.)

The same reason no one else published a review of the PACE trial, beyond various commentaries, until now: no one had access to the data for re-analysis (despite many people asking for it).