• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

A new phrase for 2012: "pulling a PACE"!

biophile

Places I'd rather be.
Messages
8,977
In a nutshell: the phrase "pull/ed/ing a PACE" could be used whenever spin or subtle deception is implicated (but not necessarily outright fraud or criminal behaviour).

Attention has been drawn recently to research misconduct in the UK (http://www.bmj.com/content/344/bmj.e14). It is my understanding that for many people within the ME and CFS communities, the subject of questionable research practices brings to mind the flawed psychological-behavioural CFS studies; a recent example would be the PACE Trial, which was arguably "too big to fail". Flawed research is not necessarily the same as "misconduct" per se, and PACE did not appear to commit fraud in the sense of fabricating data. However, perhaps there is a grey area between spin and questionable practices, and a range of methodological concerns, set in a suspicious context, have also been raised about the PACE Trial.

This post isn't intended to be a proper summary of such problems with the PACE Trial, but I will put some related information in an additional post.

Technically speaking, "pull/ed/ing a PACE" would be to move the goalposts or change definitions and spin-doctor a disappointing situation or mediocre result in such a way that people end up praising it at face value and dismissing critics out of hand. More generally, it is for use whenever spin and/or deception is implicated but not necessarily outright fraud or criminal behaviour.

I guess it could be similar to "pulling a fast one", as PACE was even fast-tracked in the Lancet. Guilt by association is a common tactic of the biopsychosocialists and like-minded journalists when describing the characteristics of CFS patients, so even though I do not personally believe the PACE authors are guilty of outright fraud in the traditional sense, as a critic I'm certainly not losing sleep over seeing PACE being associated with fraud (what goes around comes around?). Apparently "pulling a PACE" isn't that uncommon in UK research then, eh, Fiona Godlee, author of the BMJ editorial on the subject? :)

And in coverage of that editorial (www.ft.com/cms/s/2/bc6f7204-3d1f-11e1-8129-00144feabdc0.html) there are some amusing statements:

"Journal editors were often the first to come across cases of misconduct, when they spotted inconsistencies in scientific or medical papers ..."

Yes, that's why you need a senior editor like Richard Horton on your side!

"Unlike some other countries, the UK has no official national body to deal with research misconduct."

Hmmm, that may explain a lot about CFS research in the UK!

How do other people feel about this? What degree of questionable behaviour are the PACE authors guilty of? Do you have more good examples of anyone "pulling a PACE" (it doesn't have to be ME- or CFS-related)?

The failed FINE Trial provides more inspiration. The paper itself and the accompanying editorial almost give the impression that FINE did OK, that it would have done better if the patients had just received more attention from better-qualified therapists, and that more research is needed. The phrase "pull/ed/ing a FINE" would mean pretending that you didn't just fail big time. Changing trial methodology (bimodal to Likert) to squeeze out a result also comes to mind. It was a failure, and we don't need more research, spin or excuses to cover up this failure; it is time for these people to move on and make room for different research directions, which have been suffocated under the cognitive behavioural paradigm of CFS.
 

biophile

Places I'd rather be.
Messages
8,977
PACE "pulling a PACE" was a colossal event worthy for the history books

Numerous objections were raised years ago by different people in response to the PACE protocol. Hooper wrote a large document containing some pre-publication objections and was the first to raise formal post-publication objections; he afterwards wrote smaller additions and had correspondence with the PACE authors. The IACFS/ME and almost all UK patient organizations took issue with the results, conclusions, methodology or implications. Eight letters to the editor were published and many more went unpublished. Others, like Angela Kennedy, attempted to engage with the editors and ombudsman. A number of Phoenix Rising members have discussed the PACE Trial on a thread that has grown rather large.

The document "Methodological Inconsistencies in the PACE trial for ME/CFS", by Tate Mitchell, provides a concise summary of the main methodological problems (http://www.mediafire.com/file/58xlwu12afj903x/PACE trial critique Dec. 02, 2011.docx). Tom Kindlon also covered issues of safety in his paper title "Reporting of Harms Associated with [GET] and [CBT] in [ME/CFS]" (http://www.iacfsme.org/LinkClick.aspx?fileticket=Rd2tIJ0oHqk=&tabid=501).

From my own investigations, at best the PACE Trial was about half as effective as the authors expected. The PACE authors were involved in, for example:

* Major goalpost shifting halfway through the trial, which coincided with authors of the sister FINE Trial experimenting with their own data (bimodal to Likert scoring, etc.) to squeeze out a positive result after the publication of FINE but before the publication of PACE. All post-hoc changes to the goalposts were much wider/weaker than in the original trial protocol and led to the ridiculous situation where participants who were classified as having abnormal fatigue with significant disability at the beginning of the trial could remain unimproved, or even get slightly worse, and still be classified at the end of the trial as "normal" and therefore a success story (a short sketch of this overlap follows the list).

* Using inappropriate populations to derive the thresholds of "normal", then proudly presenting at the press conference the proportion of patients "getting back to normal" without explaining that this is not the same as recovery (coverage of the PACE Trial in medical journals and newspapers unsurprisingly went on to confuse the two).

* Dropping the use of actigraphy, which coincided with actigraphy data from previous trials discrediting the increased-activity pillar of CBT; such data would have embarrassed the PACE authors' pet approach had actigraphy been used as originally planned when applying for funding.

* Defining adverse effects in such a way that one could experience substantially limiting PENE for several weeks due to GET and yet have this deemed safe; then combining this with not requiring GET participants to actually increase activity if they didn't want to, while giving no data on activity changes and still implying that increasing activity is "safe".

* Potentially strawmanning the rival therapy of pacing.

* Using ME criteria that no one else uses, while claiming that the Canadian criteria would have been "impossible" to use (other researchers don't seem to have this problem).
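
To make the goalpost overlap in the first bullet concrete, here is a minimal sketch (Python, purely illustrative; the helper names are made up). The thresholds used below - trial entry requiring an SF-36 physical function score of 65 or less, and the post-hoc "normal range" starting at 60 - are the figures widely quoted in published critiques rather than numbers stated in this post, so treat them as assumptions:

```python
# Illustrative sketch of the "entry vs. normal range" overlap described above.
# The thresholds are assumptions taken from published critiques of PACE,
# not from this post.

ENTRY_MAX_SF36 = 65          # assumed entry threshold (disabled enough to enrol)
NORMAL_RANGE_MIN_SF36 = 60   # assumed post-hoc "normal range" threshold

def eligible_at_entry(sf36_pf: int) -> bool:
    """Counts as significantly disabled at baseline."""
    return sf36_pf <= ENTRY_MAX_SF36

def within_normal_range(sf36_pf: int) -> bool:
    """Counts as 'within the normal range' at follow-up."""
    return sf36_pf >= NORMAL_RANGE_MIN_SF36

# A participant who enters at 65 and *drops* to 60 still ends up "normal".
baseline, follow_up = 65, 60
print(eligible_at_entry(baseline))     # True  -> disabled enough to enter
print(within_normal_range(follow_up))  # True  -> "normal" despite getting worse
```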

Despite Richard Horton of the Lancet claiming that the authors were "utterly impartial", the authors were using several million pounds of taxpayers' money to test their own pet therapies against a rival therapy. Their reputations were at stake after investing their careers in the cognitive behavioural model of CFS for decades. The authors also declared potential conflicts of interest in the form of paid and voluntary work for insurance companies. Don't forget that principal authors/investigators of PACE resigned from the 2002 UK Chief Medical Officer's CFS/ME Report because of disagreements over the inclusion of pacing and over the report not placing enough emphasis on the psychiatric aspects. Yet, ironically, it was patients who were dismissed as the ones with a conflict of interest, when all we mainly want is our health back!

Despite Horton claiming that the paper went through "endless rounds of peer review", errors and flaws managed to get through. The Lancet are guilty of allowing several factual errors in the paper and accompanying commentary to remain uncorrected nearly one year later. It makes me wonder how many other errors routinely go uncorrected in the Lancet.

The main smoking gun of actual "deception" to me is their dubious post-hoc analyses for "normal", which are not acceptable and should be retracted with an apology. In itself that single issue doesn't radically change the conclusions of the paper, but it raises the question: when does extensive/complex spin doctoring become research misconduct? In what reality is 60/100 points in physical function "normal" for healthy 40-year-olds when 84% of the latter score over 80/100 points?
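
A tiny numerical sketch of why a "mean minus 1 SD" cut-off behaves so oddly for ceiling-heavy scores like SF-36 physical function. The sample below is invented purely to illustrate the shape of the problem; it is not PACE or population data:

```python
# Why "mean minus 1 SD" gives a misleading 'normal' cut-off when scores pile
# up near the ceiling. Invented data for illustration only.
import statistics

# Illustrative SF-36 physical function scores for a mostly-healthy group:
# most people score 90-100, a minority score much lower.
scores = [100] * 50 + [95] * 20 + [90] * 10 + [80] * 8 + [60] * 6 + [40] * 4 + [20] * 2

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)
threshold = mean - sd

share_above_80 = sum(s > 80 for s in scores) / len(scores)

print(f"mean = {mean:.1f}, sd = {sd:.1f}, mean - 1 SD = {threshold:.1f}")
print(f"share scoring above 80: {share_above_80:.0%}")
# With a skewed, ceiling-heavy distribution, the mean - 1 SD 'threshold of
# normal' falls well below what the large majority of the group actually scores.
```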

Despite being an unblinded uncontrolled trial without adequate objective measures to counteract response bias (and the closest thing to it, the 6MWD, does not support the hype), the Science Media Centre (UK) press release praised the trial for being "robust" and the highest quality of clinical evidence. The PACE authors and the Lancet dismissed all criticisms. News coverage of the PACE Trial portrayed critics as irrational extremists and associated them with criminal activities.

Meanwhile it is still OK for CBT/GET proponents to misrepresent such research in their own papers. For recent examples, which just scratch the surface, see Collins et al (inc. Crawley), Burgess et al (inc. Chalder of PACE), and Cella & Sharpe & Chalder (again, what a surprise!) giving misleading statements about the success of CBT/GET, discussed from post #11 onwards in this thread: http://forums.phoenixrising.me/show...sus-Telephone-Treatment-A-RCT-(Burgess-et-al)) And by misleading I mean the citations simply don't support the claims being made. I can no longer take these people at face value or give them the benefit of the doubt.
 

Dolphin

Senior Member
Messages
17,567
Well done for this summary of the many criticisms that have been raised about the PACE Trial. :thumbsup:
 

drjohn

Senior Member
Messages
169
One example of why "pacing" may not be recommended for M.E. sufferers, eh? Well done to all those who are drawing attention to the serious flaws in this work.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi. First, the PACE trial did not test CBT/GET against pacing. I am sure that everyone here is aware of this, but I want to make it explicit in every conversation in case someone catches it from a Google search. They tested against adaptive pacing, which is so far removed from pacing that it is almost a complete opposite.

Second, I concur that no laws were broken that I can see, and that no overt fraud occurred. However, deceptive practices can sometimes count as fraud, if I am not mistaken. In addition, this is not some random research: this is medical research. Medical standards and ethics still apply, and prior approval from an ethics committee does not absolve oneself of ethical violations after the fact. Now, I currently have no clear evidence of either of these violations, but I am compiling and analyzing data. If I find evidence of either, I will be making a formal complaint to the medical authorities and perhaps the UN.

Bye, Alex

PS Just to be clear, I am not currently analyzing PACE this way. I expect it will take me six months or longer just to assemble the criteria for such an analysis, and longer to do the analysis. The construction of those criteria, using already established medical standards, has begun, however. They need to be collated, adapted and converted into a useful form for this purpose.
 

Esther12

Senior Member
Messages
13,774
The spin around PACE has changed the way I see these researchers too. I previously saw them as often incompetent, prejudiced and dangerous, but still largely well-meaning... the recent spin and misrepresentation of results cannot just be accident and error. It's almost certainly an intentional attempt to manipulate the way these therapies are seen, and anyone with an interest in honest science and debate should be speaking out against it. I think that they would be, were it not 'CFS', where the researchers are able to hide behind claims of stigmatising mental health issues, the complicated way in which mind and body interact, and so on and so on.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I think people are speaking out about it, or things like it. It's an eye-opener to realize that much of the counter-psychobabble debate is not ME-related, so we miss it. We need to cast a wider net to pick up the criticism.
 

user9876

Senior Member
Messages
4,556
Thoughts on the use of the Chalder fatigue scale

I'm surprised that no one is talking about the use of the Chalder fatigue scale as a flaw in the PACE trial. A number of issues with the scale as a comparison tool occur to me, although I could be very wrong, as there doesn't seem to be a definitive version, so I may have just seen a number of bad versions.


Several of the questions are relative to something - that is, do you need more or less rest than normal, or do you feel more or less tired than usual. Hence, aren't the results quite subjective, depending on what each person considers usual? A healthy person may report more problems with tiredness than usual, but this may just be because they are doing more. Equally, someone with severe fatigue may report fewer problems than usual but still be very severely affected (since their "usual" is already highly fatigued). Doesn't the way the scale is worded make it useless for comparisons between different people? Perhaps it is a useful tool for assessing an individual and their changes. Maybe the people answering the questions are given a better briefing on what counts as "usual", but is this the same across the trial participants and the people used to define 'normality' and to say someone is recovered?

The Chalder scale is not a Likert scale (which should be symmetrical around positive and negative), and it is not even the forced-choice version you would expect when using a 4-point scale: it has more gradation on the "more" side than the "less" side ('less than usual', 'no more than usual', 'more than usual' and 'much more than usual'). I'm not a psychologist, but presumably there is a good reason to use the symmetric scale (and most psychologists do seem to follow it). Using this 4-point scale creates a non-linearity in the scoring for each question, making comparisons difficult.

As the (non-)Likert items are summed (i.e. the results of each question are added up), the result is not a linear scale, since the questions are not independent (as shown by the principal component analysis in the original paper). This means that a slight difference in, say, fatigue may lead to a different answer on a number of questions and hence quite a difference in the total score. The lack of balance between fatigue and cognitive questions also means that a change in your concentration and mental abilities will not move the scale as much as a change in general fatigue.
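
For anyone unfamiliar with the two scoring schemes being discussed (the "bimodal to Likert" change mentioned earlier in the thread), here is a minimal sketch of how the same eleven answers map to different totals. The response wording and the 0/0/1/1 versus 0/1/2/3 mappings follow the commonly cited form of the 11-item Chalder questionnaire; the example answers themselves are invented:

```python
# Sketch of the two scoring schemes for the 11-item Chalder Fatigue
# Questionnaire. The response wording and mappings follow the commonly
# cited form; the example answers are invented.

RESPONSES = ["less than usual", "no more than usual",
             "more than usual", "much more than usual"]

def likert_score(answers):
    """Likert scoring: each item contributes 0-3, total range 0-33."""
    return sum(RESPONSES.index(a) for a in answers)

def bimodal_score(answers):
    """Bimodal scoring: each item contributes 0 or 1, total range 0-11."""
    return sum(RESPONSES.index(a) >= 2 for a in answers)

# Eleven invented answers: six items "much more than usual", five "no more than usual".
answers = ["much more than usual"] * 6 + ["no more than usual"] * 5
print(bimodal_score(answers))  # 6  -> would meet a case-level threshold of >= 6 bimodally
print(likert_score(answers))   # 23

# The same pattern with "more than usual" instead of "much more than usual"
# scores the same bimodally (6) but drops to 17 on the Likert version, so the
# choice of scheme (and when it is chosen) can change how a result looks.
answers_milder = ["more than usual"] * 6 + ["no more than usual"] * 5
print(bimodal_score(answers_milder))  # 6
print(likert_score(answers_milder))   # 17
```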

The PACE write-up quotes means, but how meaningful is this when the scale is quite non-linear and the scores are so subjective to the individual? The use of means is also very dodgy when you have a heterogeneous set of patients and hence possibly multi-modal distributions (which could explain the high variance in the results).
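
A toy example of the "means over a heterogeneous group" worry, with invented numbers: when the sample splits into two very different subgroups, the group mean describes almost nobody in it:

```python
# Small illustration of the point about means and heterogeneous patients:
# with a two-subgroup (bimodal) distribution, the mean describes almost
# nobody. Numbers are invented.
import statistics

improvers = [10] * 30      # invented Likert fatigue scores for one subgroup
non_improvers = [28] * 30  # and for another subgroup

combined = improvers + non_improvers
print(statistics.mean(combined))   # 19.0 -- a score that neither subgroup actually has
print(statistics.stdev(combined))  # large spread, hinting at the mixed population
```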

The write-up should have discussed the use of a non-standard scoring methodology, particularly as it moved to Likert scoring rather than a simple better/worse scale. There seem to be a number of potential impacts that should have been explored. Even if the scale were symmetric, its use as a metric when the questions are highly correlated needs a good deal of discussion, which is lacking in the paper.
 

user9876

Senior Member
Messages
4,556
I previously saw them as often incompetent, prejudiced and dangerous, but still largely well-meaning...

What you are missing is that they are interested in advancing their careers; to do this they need to be seen to be successful (hence hyping results, whatever they are). To advance they also need to be part of a community, hence giving kind reviews to others within the group but often being much harsher to outsiders (particularly those with different styles). This is a general observation about research communities, not one specifically about ME/CFS researchers.
 

Esther12

Senior Member
Messages
13,774
I agree, and I recently annotated Wessely's 'personal story', which included this:

As his list of his own achievements was rather lacking, I thought I'd do an alternative top three of real personal achievements for him:

1) He built up his own career. I think that this was his primary aim, and by his own standards, I think he succeeded. He made a name for himself by producing papers that appealed to the prejudices of British medical society about a group of unpopular patients, and provided a quasi-scientific justification for those who wanted to just tell CFS patients they were being too sensitive and should go on anti-depressants. No wonder he was popular amongst his colleagues. I'm not saying that he intentionally and cynically set out to do this, but it is what occurred. He also now gets to spout guff about CFS being a metaphor for our times and whatnot... I get the impression that he loves being that sort of academic: You may not appreciate such things, but the PACE trial was a particularly elegant piece of work - a cheeky coquette of an RCT

Simon Wessely blog post taken from this Simon Wessely thread.

It has been sad to see how much academia and science are influenced by a desire for personal admiration, respect and now 'impact'. When I was younger I'd assumed that there was much more of a commitment to an honest pursuit of truth, and to intellectual development for its own sake - why else go into academia? Now I see them as little more than matters which happen to coincide with other interests in certain circumstances.
 

Dolphin

Senior Member
Messages
17,567
I agree the Likert scale issue is important.

It has been discussed on this site before, e.g. in this very long thread: http://forums.phoenixrising.me/showthread.php?4926-PACE-Trial-and-PACE-Trial-Protocol . I can't remember who said what, but I think there's a reasonable chance that biophile, as a regular contributor to the thread, has discussed the Chalder fatigue scale issue there. Maybe you could re-post your message there and it will be discussed.

If one downloads the file at http://www.mediafire.com/?w9whdn7hh112b7y , one can see the exact wording of the Chalder Fatigue Scale used: it's in Appendix 6.8, page 162 of 226.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
It has been sad to see how much academia and science are influenced by a desire for personal admiration, respect and now 'impact'. When I was younger I'd assumed that there was much more of a commitment to an honest pursuit of truth, and to intellectual development for its own sake - why else go into academia? Now I see them as little more than matters which happen to coincide with other interests in certain circumstances.

Or as I put it elsewhere, it's often all about getting doggy treats. Bye, Alex
 

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
Nice to meet you, user9876. :Retro smile: Thanks for joining the conversation here!
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
Were the authors not going to publish something - an addendum? - on 'recovery'? I seem to have lost the plot with PACE and might have missed it, but I am pretty sure they promised to follow this up - you know, to explain what was meant and how 'recovery' was defined, or something...?
 

Dolphin

Senior Member
Messages
17,567
Were the authors not going to publish something - an addendum? - on 'recovery'? I seem to have lost the plot with PACE and might have missed it, but I am pretty sure they promised to follow this up - you know, to explain what was meant and how 'recovery' was defined, or something...?
Yes, I'm pretty sure they said they would look at various definitions of recovery (or something similar) in a paper. I think they have said a few more papers will come out.