
PACE Trial and PACE Trial Protocol

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Well, obviously not clinically useful on average, but as you mentioned elsewhere it technically had a small effect on average.

Yes, CBT was found to have a small effect size, but was 'clinically ineffective', based on the threshold of the 'clinically useful difference'.
I'm equating not meeting the threshold for a 'clinically useful difference' with it being 'clinically ineffective', which I'm pretty sure is justifiable.
I'm pretty sure that a 'clinically useful difference' equates to a 'clinically significant difference', and if a therapy does not have a 'significant' effect size, then it is justifiable to say it is ineffective.
The study was set up to see if CBT and GET were clinically effective, and CBT failed to prove to be clinically effective for SF-36 PF.

I'm not quite sure how to reconcile the study's finding that CBT was ineffective with the study's finding that 13% responded to CBT, except to say that on 'average' CBT was clinically ineffective. But it isn't necessary to include the word 'average' when stating what the effect size is.
 

Dolphin

Senior Member
Messages
17,567
This was posted to Co-Cure in the last 24 hours https://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind1207b&L=co-cure&F=&S=&P=18411. Thanks to Ian for doing it. He previously used the FOI to get some interesting information about grant applications the (UK) Medical Research Council had refused over the years.

I've divided it into four separate bits.

------------


On 05/04/12, the requestor asked to be provided with copies of minutes from all meetings of the PACE Trial Steering Committee, Trial Management Group, Data Monitoring and Ethics committee pursuant to the UK FOIA 2000. This initial request was denied pursuant to the opinion of the relevant qualified person that exemption under section 36(2) of the Act applied due to the likely prejudice to the effective conduct of public affairs. The requestor has now requested a further internal review. As under section 36(2), only the qualified person can render a reasonable opinion, the qualified person has here taken the opportunity to re‐examine that decision in light of the comments of the requestor and the analysis of qualified staff who have further examined the minutes in question and other documents and issues in light of the request for this internal review. The following comprises the opinion on internal review of the qualified person.

Faculty members including scientific researchers often share their thoughts and views with one another. This is especially true where the scientific examination of an issue is a collaboration among scientific researchers such as with the examination of treatment outcomes in the PACE clinical trials. It is further true that in the instant case the requested minutes reflect the opinions/exchanges of the principal investigators and other members of the research team on a range of issues regarding the structure, proper conduct and ongoing evaluation of the trials. The confidentiality of such discussion and debate can be vital to the development of scholarship, knowledge, and scientific truth which is the public mission of this College. Faculty members and other researchers and individuals with whom they collaborate in these endeavours must be afforded privacy in their exchanges in order to pursue knowledge and develop lines of argument and scientific findings without fear of reprisal for findings or ideas that are controversial and without the premature disclosure of those ideas.

There is limited case law, and there are few decisions, to guide an opinion in this context. However, that academic freedom and the need for the space to develop ideas and pursue knowledge protected from reprisal is a recognised and important public interest is reflected in a variety of important sources. For example, although not controlling here since this is a matter of purely UK law, it must be noted that academic freedom is a public interest and an important democratic European Union value, reflected equally (Article 13) with access to information (Article 11) in the EU Charter of Rights and Fundamental Freedoms. It is as well an important public interest embedded in UK law. The UK has distinctly indicated that academics’ freedom to develop ideas and knowledge free from jeopardy of reprisal is to be legally protected in the UK Education Reform Act 1988 (Section 43). While there appears to be limited authority as to the exact scope and nature of the academic freedom and scholarship protected pursuant to it, not to mention the balance of this public interest with others, a notable work is that of Eric Barendt, Academic Freedom and the Law (Hart Publishing 2010). Professor Barendt suggests that courts and other bodies should consider the Act’s protections, where relevant, in their decision making. The Information Commissioner appears to have independently reached a similar conclusion in the context of the UK FOIA 2000. He has as well noted, in his recent guidance to Universities and Colleges, that academic freedom and the space to develop scholarship and ideas can be an appropriate public interest worthy of protection. Thus, in reviewing our original decision, it is necessary that we balance the respective public interests here: protecting academic freedom and the space to research and develop knowledge and ideas without fear of public reprisal, as part of the College’s effective conduct of its public affairs as a public institution of higher learning and research, against the public interest in disclosing the requested documents.

On review, the qualified person continues to consider that prejudice to the conduct of public affairs would occur as a consequence of making public the minutes sought here. This prejudice would be the loss of the most talented and experienced researchers from areas of study that are, as here, in any way controversial, or their departure to institutions around the world that can guarantee them the privacy and confidentiality that is necessary in academia. This risk is real and in no way speculative in the instant case. Various independent sources, including a recent article in the British Medical Journal, reveal that the particular animus that surrounds the debate regarding the causes and treatment of ME/Chronic Fatigue Syndrome has created difficulty in obtaining scholars to present scientific papers on the subject at conferences and has caused experienced medical researchers who have been victims of the animus to leave the field to pursue other research areas. A further deterrent effect on scholarship is also noted in the BMJ article. If expert senior researchers have left the area of study and expressed concerns for their safety, reputations and future careers arising from the mere publication of research findings and papers, it is not remote or unreasonable to conclude that junior researchers working under them (and whose work and identities are also reflected in the minutes) would similarly be deterred from pursuing scientific research in this controversial field where continued study is important.

It is also reasonable to conclude that disclosure here would inhibit the quality and freedom of future exchanges among the academic researchers who do continue in the field, and would inhibit the ability to recruit important participants from outside academe to get involved in the studies. A review of the minutes in question reveals sensitivity among the researchers in light of the highly politicised and polemic nature of elements of the public debate noted above. While the noted instances concerned decisions such as ensuring that treatment protocol manuals were of equal quality and comprehensiveness for all treatment methods under study, and a decision not to utilise access to a particular service network which could have added resources to the study, in light of the possible perception by patient representatives that this would indicate a mental health bias, these are indicia of a high degree of self-editing in light of the highly politicised environment. This self-editing and refusal to use professional resources that might otherwise enhance the patient experience occurred in an environment where researchers fully expected the meetings to be closed to the public and the minutes to be confidential, as the responses received evidence. These responses express strong views as to the negative impact on future exchanges and on the willingness of some important participants to be involved, for example patient representatives, whose role is to help ensure public oversight and a balance of views and who would not participate if their identities or views/statements as reflected in the minutes were disclosed to the public.

As there are other studies planned and beginning, disclosure now of the identity/opinions of the participants in the completed study could likely impact participation and exchange of views and analysis in other studies. Since ME/CFS is an area where there is a significant need for ongoing research, the public interest in continuing to perform such studies in an atmosphere conducive to academic freedom is great with the potential prejudice to its quality and successful completion real and significant.

Turning, on balance, to the public interest in disclosure of the series of minutes of the two groups requested here: it is recognised that there is a public interest in the disclosure of research that is publicly funded, as here, to permit, among other things, the public to monitor the expenditure of public funds. It is also recognised that, in the conduct of public affairs, the public interest in providing a space to think or engage in debate freely to reach a decision that affects the public usually lessens when the decision has been made or the policy reached. There is an important public interest in the transparency/accountability of public authorities and the ability of the public to monitor the activities of public bodies and understand how decisions were taken that affect them.

Here, however, the research and its findings have been fully and timely published in a respected peer-reviewed journal, The Lancet, with access to the findings fully available to the public. Moreover, these findings have been subject to extraordinary public scrutiny. The Lancet, in response to extensive public commentary and in an unusual procedure, subjected the study to a further peer review process. While the requestor here suggests that the minutes would be helpful in providing the public with information as to the findings in light of the investigators’ conflicts of interest, these interests were disclosed with the published study. It is not considered that the minutes in question would further the public interest by providing more information in this regard to the public.

Also in the instant case, however, there is an ongoing scientific process, both with new studies, one of which is advised to be just underway, and with a planned longitudinal evaluation of data from the study in question. There is, therefore, a continuing need here to protect the free and frank exchange of views in such ongoing studies and to promote the public interest in protecting academic freedom and the College’s future effective conduct of its public affairs mission to engage in the effective conduct and evaluation of scientific research without fear of public reprisal.

For these reasons, on further internal review, the qualified person concludes that his original opinion was reasonable and continues to be of the opinion that, on this balance, the public interest in not disclosing the minutes embodying the communications among principal investigators, other researchers and study participants outweighs the public interests favouring disclosure of these minutes. As it is reasonable to conclude here that the disclosure of the minutes sought would, or would be likely to, inhibit the free and frank exchange of views for the purposes of deliberation, as well as prejudice the effective conduct of public affairs, the qualified exemptions of section 36(2)(b) and (c) of the UK FOIA 2000 apply. The qualified person continues to be of the opinion that the request should be denied.




On 31/05/2012 2:38 PM, Ian wrote:
Dear Mr Smallcombe,

Thank you for your email dated 14/05/2012. I wish to take up your offer to review the decision not to disclose. I feel the public interest in favour of disclosure far outweighs that in withholding. I believe consideration needs to focus on how the meaning of a free and frank exchange of views is being interpreted in this instance, and whether an inhibition, in the way the Act defines it, could even have arisen during such meetings, or would be likely to in the future if similar smaller studies were to be given further public funding. Section 36, in my view, if used inappropriately can have the opposite outcome to its intended use, and access needs to happen in certain circumstances to ensure a free and frank exchange of views is actually taking place.

The reason I am of this opinion is that, despite the fact that ME is listed at G93.3 under Diseases of the Nervous System by the World Health Organisation, it is well known that a huge amount of controversy surrounds the illness, and two sides within medicine (psychiatric v biomedical) have a long history of opposing one another as to the medical approach believed necessary to manage, treat and cure the condition: psychiatry favours using far less stringent criteria to identify and research the illness within the population, whilst those of a biomedical opinion on the whole favour far more stringent criteria, to the point that the two are likely to be looking at different conditions. However, psychiatry has consistently failed to prove its worth in this area. There is also enough circumstantial evidence to show that there is collusion between government and the insurance industry [1] [2] in order to limit the financial burden ME has placed on both, and that psychiatry is being favoured over biomedical research to enable them to achieve that. The current standoff and related research is clearly of public concern, as it is thought to be causing an entrenchment of views within psychiatry, along with an unwillingness to give ground and make way for other avenues of research. Whilst this situation continues it is unlikely that a fully informed public debate will ever be able to happen that would enable the situation to change and bring about an improvement to the lives of ME sufferers.

I feel it is reasonable, therefore, to assume that all involved with PACE who attended the meetings concerned were of the same mindset. They were obviously aware that funding had been made available and that its availability had raised a fair amount of criticism from within the ME community, from those favouring a biomedical approach [2] [3], yet that criticism was largely ignored and the trial still went ahead, with alterations granted that on the surface appear intended more to enable it to do so than to show any real regard for maintaining an acceptable scientific standard [4]. Added to that, when a comparison is made, the concerns put forward by those critical of PACE (not only at its outset, but during and after publication) [2] [3] [5] [6] [7] [8] do seem to have shown excellent foresight, as the published results are extremely poor.

I firmly believe the evidence relating to PACE, when viewed collectively and in context, does highlight and support the need for transparency and openness, and for it to be known that disclosure is likely, so as to allow the public to fully inform themselves and, if necessary, be in a good position to safeguard against various influences and internal pressures that might allow collective and individual interests to take precedence at such meetings, contrary to informed public debate and the public's best interest.

Yours Sincerely

Ian McLachlan

[1] Inquiry into the Status of CFS / M.E. and Research into Causes and Treatment: 6.3 How the Department for Work and Pensions Formulates CFS/ME Policy

http://www.erythos.com/gibsonenquiry/Docs/ME_Inquiry_Report.pdf

[2] Magical Medicine: How To Make a Disease Disappear

http://www.investinme.org/Documents/Library/magical-medicine.pdf

[3] A Summary of the Inherent Theoretical, Methodological and Ethical Flaws in the PACE Trial

http://www.theoneclickgroup.co.uk/documents/ME-CFS_res/

[4] PACE Trial Protocol: Final Version

http://www.meactionuk.org.uk/FULL-Protocol-SEARCHABLE-version.pdf

[5] Responses to PACE questions tabled by the Countess of Mar in the House of Lords

http://www.meactionuk.org.uk/Responses-to-PACE-questions-CoM.htm

[6] The PACE trial in chronic fatigue syndrome

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60684-3/fulltext

[7] Recent Correspondence With The Lancet Regarding PACE

http://pacedocuments.blogspot.co.uk/2011/04/recent-correspondence-with-lancet-re.html

[8] REPORT: Complaint to the Relevant Executive Editor of The Lancet about the PACE Trial Articles Published by the Lancet

http://www.meactionuk.org.uk/COMPLAINT-to-Lancet-re-PACE.htm







From: foi-enquiries@qmul.ac.uk
Sent: Monday, May 14, 2012 4:54 PM
To: Ian
Subject: FOI Request 2012/60
Dear Mr. McLachlan

Thank you for your email of 15th April requesting information about the PACE Trial.

The Data Monitoring and Ethics Committee minutes are not held by Queen Mary, therefore we cannot supply these.

We do hold the other minutes but I'm afraid we cannot supply these to you. Having sought the opinion of the College's qualified person we are refusing your request under s.36 of the Freedom of Information Act 2000 - prejudice to the effective conduct of public affairs. This is because, in the qualified person's reasonable opinion, release of this information would be likely to inhibit the free and frank provision of advice or the free and frank exchange of views for the purposes of deliberation (s.36(2)(b)(i) and (ii)).

As per s.17, please accept this as a refusal notice.

If you are dissatisfied with this response, you may ask the College to conduct a review of this decision. To do this, please contact the College in writing (including by fax, letter or email), describe the original request, explain your grounds for dissatisfaction, and include an address for correspondence. You have 40 working days from receipt of this communication to submit a review request. When the review process has been completed, if you are still dissatisfied, you may ask the Information Commissioner to intervene. Please see www.ico.gov.uk for details.

Yours sincerely

Paul Smallcombe
Records & Information Compliance Manager



On 15/04/2012 1:39 PM, Ian wrote:
Records & Information Compliance Manager
Queens Building
Queen Mary
University of London,
Mile End Road
London
E1 4NS

Dear Compliance manager,

Under the terms stated within the Freedom of Information Act I would like you to supply me with copies of minutes from all meetings of the PACE Trial Steering Committee, Trial Management Group, Data Monitoring and Ethics committee.

Yours sincerely
Ian McLachlan
 

biophile

Places I'd rather be.
Messages
8,977
I would like to know what really drove the protocol changes too. PACE still have not published what they promised to on this issue. And the above FOI request was refused due to "prejudice to the effective conduct of public affairs" and for being against the "public interest". So in other words, "politely f*ck off sir, the authors have impunity".

The explanation even suggests that the content of discussions would further enrage the ME/CFS community into detrimental extremism, and that PACE were acutely aware of/sensitive to the criticism they face, to the point of self-editing the private discussions just in case they were released? Is this just an excuse to avoid further scrutiny? What did they discuss which could be so volatile as to (allegedly) fear reprisals? Keep in mind that we are talking about minutes, i.e. brief edited summaries, not verbatim transcripts of everything said during the discussion!

The explanation also annoyingly and falsely claims that the PACE data has been "fully" published. Omission of employment data is probably the best example, especially considering that the trial was partly and unusually funded by the Department for Work and Pensions. If this does not get published, it will look very suspicious. It wouldn't surprise me if this occurred with the reasoning that publishing such data would encourage "prejudice to the effective conduct of public affairs" and be against the "public interest".
 

Esther12

Senior Member
Messages
13,774
Thanks Dolphin. That was interesting - it is all a bit 1950s old boys' club, but I'm coming to accept that's how Britain is in 2010.

PS: It could be a bit easier to read if you removed the extra quote around it all? Maybe not.

It would be good to be able to read the discussion which led to them redefining 'normal' levels of fatigue and disability, so that they overlapped with severe and disabling levels of fatigue.
 

Graham

Senior Moment
Messages
5,188
Location
Sussex, UK
They could just have written:

Dear Mr McLachlan

I am afraid that we cannot release the minutes as the people involved may end up looking bigoted and ignorant. As we all went to the same school, I must safeguard their interests. We will put the minutes in with the other records of correspondence in the files at Kew to be opened in 2079 or thereabouts.

Yours sincerely
.....
 

user9876

Senior Member
Messages
4,556
I suspect that they forgot about the FoI Act and were less than guarded at times in their minutes. Their message pushes a lot of suspicion onto patient representatives, suggesting they would be ashamed of their comments, or at least that the charities that they work for would be.

Worth asking for emails as well. I know people have approached the MRC, who have denied having data, but the statistician was working at the MRC, so they should have copies of any e-mail discussions around the statistical analysis.

It would be interesting to know who their qualified person was and what their relationship was to the trial.

I think the Information Commissioner has been quite good in the past around forcing people to release information. They tried to get the government to release the risk register for the NHS reforms despite similar excuses that civil servants need to be able to discuss their concerns without being in the public gaze. I remember hearing of a case around BAT trying to get hold of research data from a university (Glasgow I think) around a smoking attitudes survey. Don't know what happened though.

I personally don't think academic freedom has anything to do with releasing data; rather it's the consequences of publishing controversial messages. Personally, I would be happy to share how my work has been developed.

Maybe what we need is a whistleblower to post the data and minutes on WikiLeaks.
 

Dolphin

Senior Member
Messages
17,567
Talking of which, with respect to the 40% or so claimed to have reached a 'normal' physical function score, you might expect, a priori, that this would be reflected by a normal performance on the only objective measure - the 6MWT - i.e. managing around 600 metres distance.

The mean scores might conceal a much greater improvement for this group, and personally I would have presented a separate analysis to highlight the point if this were the case.

I don't recall seeing such an analysis.
The only figures given were means (SDs) at baseline and 52 weeks for each of the four trial arms.

The SDs wouldn't suggest anything like 40% reached 600m+.
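For anyone who wants to check that kind of claim themselves, here is a minimal sketch, assuming roughly normal walking distances within an arm; the mean and SD used below are placeholders, not the published PACE figures:

# Rough check: what share of a trial arm would walk more than 600 m on the
# six-minute walking test, if distances were roughly normal with a given
# mean and SD? The numbers here are PLACEHOLDERS, not PACE's figures.
from scipy.stats import norm

def share_above(threshold_m, mean_m, sd_m):
    # Fraction of a Normal(mean, sd) distribution lying above the threshold.
    return norm.sf(threshold_m, loc=mean_m, scale=sd_m)

print(share_above(600, mean_m=380, sd_m=90))   # about 0.007, i.e. nowhere near 40%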
 

Dolphin

Senior Member
Messages
17,567
Yes, CBT was found to have a small effect size, but was 'clinically ineffective', based on the threshold of the 'clinically useful difference'.
I'm equating not meeting the threshold for a 'clinically useful difference' with it being 'clinically ineffective', which I'm pretty sure is justifiable.
I'm pretty sure that a 'clinically useful difference' equates to a 'clinically significant difference', and if a therapy does not have a 'significant' effect size, then it is justifiable to say it is ineffective.
The study was set up to see if CBT and GET were clinically effective, and CBT failed to prove to be clinically effective for SF-36 PF.

I'm not quite sure how to reconcile the study's finding that CBT was ineffective with the study's finding that 13% responded to CBT, except to say that on 'average' CBT was clinically ineffective. But it isn't necessary to include the word 'average' when stating what the effect size is.
(my bolding) The 'clinically useful difference' comes from the 0.5 SD. I'm not sure if "clinically significant difference" is a synonym - I'm rusty on that but I suspect not (as I recall, CID, clinically important difference, is a synonym). The word "significant" tends to be used, as you probably know, in significance testing (p values), and this isn't anything to do with p values. I'm not convinced you can use language the way you are using it, but perhaps I'm wrong.
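For concreteness, a minimal sketch (in Python) of where those thresholds come from, using the 0.5 SD rule and the 2-point / 8-point figures quoted in the Lancet letter further down this thread (so the baseline SDs of roughly 4 and 16 are implied, not taken directly from the paper):

# PACE's "clinically useful difference" (CUD): half the baseline standard deviation.
def cud(baseline_sd):
    return 0.5 * baseline_sd

# Baseline SDs implied by the published thresholds of 2 points (Chalder, Likert
# scoring) and 8 points (SF-36 physical function), i.e. roughly 4 and 16.
print(cud(4))    # 2.0 points -> Chalder fatigue questionnaire
print(cud(16))   # 8.0 points -> SF-36 physical function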
 

Dolphin

Senior Member
Messages
17,567
You're correct Bob, they seem to be basing the statement of "moderate" effect on the clinically useful difference, and it can be deduced that "almost always" refers to CBT not reaching this threshold for physical function. And as you said, they do hide this fact in their conclusion by claiming that CBT and GET have a moderate effect size, without clarifying that CBT was not moderate on one of the two primary measures, i.e. physical function.

Their "clinically useful difference" threshold has problems as it was a rather low due to a smaller than usual standard deviation in baseline scores. It led to the situation where an improvement of a mere 2 out of 33 points in fatigue was regarded as "moderate". As CBT and GET both showed an advantage of over 3 points on average, I'm surprised they didn't claim this to be a "large" effect. To demonstrate the problem further, imagine if the SMC group improved by 40 points and the CBT group by an additional 8 points. According to PACE logic, the effect of CBT would still be regarded as "moderate". Another example, imagine if the standard deviation of baseline scores was only 2 points because of a more homogeneous group in terms of fatigue severity and physical function score, then suppose the SMC group improved by 40 points and the CBT group by only an additional 8 points, the effect size for CBT would have been considered to be extremely large despite being relatively small.
As Biophile most likely knows, but some other people might not or may have forgotten, this point was published, so people don't have to try to explain it from scratch in the future if they don't want to, including in formal situations:

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60689-2/fulltext

The Lancet, Volume 377, Issue 9780, Page 1831, 28 May 2011
doi:10.1016/S0140-6736(11)60689-2

Published Online: 17 May 2011
The PACE trial in chronic fatigue syndrome

Jane Giakoumakis
In their randomised trial of treatments for patients with chronic fatigue syndrome, Peter White and colleagues (March 5, p 823) [1] define a clinically useful difference between the means of the primary outcomes as “0·5 of the SD of these measures at baseline, equating to 2 points for Chalder fatigue questionnaire and 8 points for short form-36”. They cite achieving a mean clinically useful difference in the graded exercise therapy or cognitive behaviour therapy groups, compared with specialist medical care alone, as evidence that these interventions are “moderately effective treatments”.
The source for this definition of clinically useful difference states that such a method has a “fundamental limitation”: “estimates of variability will differ from study to study…if one chooses the between-patient standard deviation, one has to confront its dependence on the heterogeneity of the population under study”. [2] In White and colleagues' study, we do not have heterogeneous samples on the Chalder fatigue questionnaire and short-form 36 physical function subscale, since both are used as entry criteria. [1]
Patients had to have scores of 65 or less on short-form 36 to be eligible for the study. [1] However, most, in practice, would probably need to have scores of 30 or more to be able to participate in this clinic-based study. Indeed, only four of 43 participants in a previous trial of graded exercise therapy scored less than 30. [3, 4] Guyatt and colleagues [2] suggest that “an alternative is to choose the standard deviation for a sample of the general population”, which White and colleagues have given as 24. [1] An SD of 24 gives a clinically useful difference of 12; both graded exercise therapy and cognitive behaviour therapy fail to reach this threshold. Whether they “moderately improve outcomes”, as claimed, [1] is therefore questionable.



References

1 White PD, Goldsmith KA, Johnson AL, et al, on behalf of the PACE trial management group. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 2011; 377: 823-836.
2 Guyatt GH, Osoba D, Wu AW, et al. Methods to explain the clinical significance of health status measures. Mayo Clinic Proc 2002; 77: 371-383.
3 Fulcher KY. Physiological and psychological responses of patients with chronic fatigue syndrome to regular physical activity. Loughborough: Loughborough University of Technology, 1997. http://hdl.handle.net/2134/6777 (accessed March 4, 2011).
4 Fulcher KY, White PD. Randomised controlled trial of graded exercise in patients with the chronic fatigue syndrome. BMJ 1997; 314: 1647-1652.
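A short sketch of the letter's central point, using only the figures quoted above (the 0.5 × SD rule, the trial's 8-point SF-36 threshold, and the general-population SD of 24 that Guyatt and colleagues suggest as an alternative):

# The same 0.5*SD rule gives different "clinically useful difference" thresholds
# for SF-36 physical function depending on which SD is plugged in
# (figures taken from the letter above).
trial_baseline_cud = 8.0        # 0.5 * restricted trial-baseline SD (about 16)
population_cud = 0.5 * 24       # 0.5 * general-population SD = 12.0

for label, threshold in [("trial baseline SD", trial_baseline_cud),
                         ("general population SD", population_cud)]:
    print(f"SF-36 PF CUD using {label}: {threshold:.0f} points")

# A between-group difference that clears 8 points but not 12 would count as
# "clinically useful" under one choice of SD and not under the other.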
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
(my bolding) The 'clinically useful difference' comes from the 0.5 SD. I'm not sure if "clinically significant difference" is a synonym - I'm rusty on that but I suspect not (as I recall, CID, clinically important difference, is a synonym). The word "significant" tends to be used, as you probably know, in significance testing (p values), and this isn't anything to do with p values. I'm not convinced you can use language the way you are using it, but perhaps I'm wrong.

Thanks Dolphin. I agree with you that a CUD and a CID are synonyms. Along with a MID, which is also the same.

CUD = clinically useful difference
CID = clinically important difference
MID = minimum important difference

In the strictest scientific sense, I'm not absolutely certain about the use of the term 'clinically ineffective' either but, to a lay person (the media, politicians, patients etc.), I think it's safe and accurate to say 'clinically ineffective'. I wouldn't use the term 'ineffective', because the effect size for CBT SF-36 Physical Function was 'small' (that's not in the paper, but my own observation, based on a small effect size commonly being 0.2 to 0.5 SD). But the term 'clinically ineffective' is slightly different to 'ineffective', and it suggests that there was no useful clinical effect, so I think it's an accurate term for non-scientists, the media, politicians, and patients etc.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
As Biophile most likely knows, but some other people might not or may have forgotten, this point was published, so people don't have to try to explain it from scratch in the future if they don't want to, including in formal situations:

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60689-2/fulltext


The Lancet, Volume 377, Issue 9780, Page 1831, 28 May 2011

doi:10.1016/S0140-6736(11)60689-2
Published Online: 17 May 2011
The PACE trial in chronic fatigue syndrome

Jane Giakoumakis
In their randomised trial of treatments for patients with chronic fatigue syndrome, Peter White and colleagues (March 5, p 823) [1] define a clinically useful difference between the means of the primary outcomes as “0·5 of the SD of these measures at baseline, equating to 2 points for Chalder fatigue questionnaire and 8 points for short form-36”. They cite achieving a mean clinically useful difference in the graded exercise therapy or cognitive behaviour therapy groups, compared with specialist medical care alone, as evidence that these interventions are “moderately effective treatments”.
The source for this definition of clinically useful difference states that such a method has a “fundamental limitation”: “estimates of variability will differ from study to study…if one chooses the between-patient standard deviation, one has to confront its dependence on the heterogeneity of the population under study”. [2] In White and colleagues' study, we do not have heterogeneous samples on the Chalder fatigue questionnaire and short-form 36 physical function subscale, since both are used as entry criteria. [1]
Patients had to have scores of 65 or less on short-form 36 to be eligible for the study. [1] However, most, in practice, would probably need to have scores of 30 or more to be able to participate in this clinic-based study. Indeed, only four of 43 participants in a previous trial of graded exercise therapy scored less than 30. [3, 4] Guyatt and colleagues [2] suggest that “an alternative is to choose the standard deviation for a sample of the general population”, which White and colleagues have given as 24. [1] An SD of 24 gives a clinically useful difference of 12; both graded exercise therapy and cognitive behaviour therapy fail to reach this threshold. Whether they “moderately improve outcomes”, as claimed, [1] is therefore questionable.


References

1 White PD, Goldsmith KA, Johnson AL, et al, on behalf of the PACE trial management group. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 2011; 377: 823-836.
2 Guyatt GH, Osoba D, Wu AW, et al. Methods to explain the clinical significance of health status measures. Mayo Clinic Proc 2002; 77: 371-383.
3 Fulcher KY. Physiological and psychological responses of patients with chronic fatigue syndrome to regular physical activity. Loughborough: Loughborough University of Technology, 1997. http://hdl.handle.net/2134/6777 (accessed March 4, 2011).
4 Fulcher KY, White PD. Randomised controlled trial of graded exercise in patients with the chronic fatigue syndrome. BMJ 1997; 314: 1647-1652.

I've been thinking about this, and researching it.

Using 0.5 SD is a perfectly acceptable and common practice. In fact, 0.5 SD might even be generous for determining a CUD, as it doesn't seem to be set in stone. I think the main problem with determining the CUD comes from using standard deviations with data that isn't normally distributed. And the normative SF-36 PF data is not normally distributed, so using the normative data to calculate the effect size and the CUD wouldn't give any more meaningful a result than using the trial data. (But, there again, meaningfulness does not seem to be a primary concern of the PACE Trial authors.)

I've not found any research that indicates that the improvements needed for a 'moderate' effect size, specifically for SF-36 PF scores, should be any higher than what the PACE Trial uses. Apart from Guyatt, everything I've read seems to confirm the methodology of the PACE Trial paper with regard to effect sizes and a CUD. The threshold seems too low to me, and the use of standard deviations is totally inappropriate, but the authors seem to have used a common practice, however inappropriate and meaningless it is.

So apart from the argument against using standard deviations, I haven't come across any research which can be used against their methodology, except for Guyatt.

Unless anyone else has found any? (I'm always missing stuff on this thread, so this has probably been discussed thoroughly before!)
 

Dolphin

Senior Member
Messages
17,567
So apart from the argument against using standard deviations, there doesn't seem to be any research which can be used against their methodology.
I've a slight concern you're missing the point of that letter (maybe you're not): standard deviations are a measure of how spread out the values are from the mean. So some sets of data can have scores that are very close to the mean and some other sets of data will have scores that are quite spread out.

If a study decides scores have to be within certain ranges at baseline, that decreases the standard deviation, and hence decreases what is required for a treatment to reach the CUD. I've a feeling I mentioned this example before, but here goes: imagine two weight loss trials: one looked at intervention 1 and saw whether it produced a clinically important difference; for intervention 2, participants were restricted to be within a certain weight range (e.g. 11 stone-13 stone = 154lbs-182lbs = 69.85kg-82.55kg). It will be much easier for intervention 2 to reach a CUD, as the SD of the baseline scores will be quite small. The PACE Trial is like what happened in the trial of intervention 2.

What the reference the PACE Trial authors used suggests should be done is to use the standard deviation for the whole population.
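A minimal simulation of that weight-loss analogy (entirely made-up numbers, just to show the mechanism):

# Restricting who can enter a trial shrinks the baseline SD, and with it a
# 0.5*SD "clinically useful difference". Synthetic illustration only.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(loc=75, scale=15, size=100_000)       # hypothetical weights, kg

restricted = weights[(weights >= 70) & (weights <= 83)]     # roughly 11-13 stone

for label, sample in [("unrestricted", weights), ("restricted", restricted)]:
    sd = sample.std()
    print(f"{label:12s} SD = {sd:5.2f} kg -> CUD (0.5*SD) = {0.5 * sd:4.2f} kg")

# The restricted group's SD is several times smaller, so a much smaller
# between-group difference would count as "clinically useful".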
 

Sean

Senior Member
Messages
7,378
I can't imagine what could possibly be generating that "animus".

And if I was feeling a little cynical, and I usually am, I might argue that it suits certain established players in this field to have new researchers discouraged from entering the field. Heaven knows what some bright young thing with a genuinely independent mind and sound ethics might discover and reveal to the world. Could be very inconvenient indeed to the powers that currently be.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I've a slight concern you're missing the point of that letter (maybe you're not): standard deviations are a measure of how spread out the values are from the mean. So some sets of data can have scores that are very close to the mean and some other sets of data will have scores that are quite spread out.

If a study decides scores have to be within certain ranges at baseline, that decreases the standard deviation, and hence decreases what is required for a treatment to reach the CUD. I've a feeling I mentioned this example before, but here goes: imagine two weight loss trials: one looked at intervention 1 and saw whether it produced a clinically important difference; for intervention 2, participants were restricted to be within a certain weight range (e.g. 11 stone-13 stone = 154lbs-182lbs = 69.85kg-82.55kg). It will be much easier for intervention 2 to reach a CUD, as the SD of the baseline scores will be quite small. The PACE Trial is like what happened in the trial of intervention 2.

What the reference the PACE Trial authors used suggests should be done is to use the standard deviation for the whole population.

I do understand it Dolphin, and it's a helpful and interesting letter.
But using a standard deviation is only appropriate for normally distributed data. (Not that the PACE Trial statisticians seem to know this.)
If a standard deviation is used for data that is not normally distributed, such as the skewed and clipped distribution of the normative SF-36 PF scores, then the results will be meaningless.
I have been trying to look for a more meaningful definition of a CUD for SF-36 PF scores, but have not come across anything. And there doesn't seem to be a common methodology for working out effect sizes for data that is not normally distributed, as far as I can see.

In terms of the PACE Trial authors' understanding of statistics (and seemingly that of most other scientists), they would relate to the letter, so it is useful.

My basic point is that it is inappropriate to use standard deviations for SF-36 PF scores and Chalder scores. (But scientists don't seem to know this, or don't want to know it, so there's not much point in me going on about it.)
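To illustrate that point with made-up numbers (a synthetic ceiling-clipped sample standing in for SF-36 PF, not real normative data):

# With a ceiling at 100 and a long left tail, the mean and SD are dominated by
# the pile-up at the ceiling, so a 0.5*SD threshold tracks the shape of the
# clipping rather than anything clinically meaningful. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
scores = np.clip(rng.normal(loc=90, scale=25, size=100_000), 0, 100)

print("mean:", round(scores.mean(), 1))
print("SD:", round(scores.std(), 1))
print("0.5 * SD threshold:", round(0.5 * scores.std(), 1))
print("25th / 50th percentiles:", np.percentile(scores, [25, 50]))

# Percentile-based summaries describe a distribution like this far more
# faithfully than mean +/- SD does, which is the nub of the objection to
# using 0.5*SD with SF-36 PF scores.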
 

Dolphin

Senior Member
Messages
17,567
I wrote to NICE about our PACE analysis, and got back a reasonable but neutral response. Included in it were these lines:

In accordance with our processes this guidance will next be considered for review in August 2013. I can confirm that I have logged your email in our system so that it can be brought to the attention of the clinical guidelines team when they come to review the guidance. When the review takes place, our Information Services team will also conduct a thorough search of new published evidence, so we would also encourage you to make sure that as much information as possible is published in peer-reviewed journals before the review begins.

So, hmmm! Published in peer-reviewed journals. Now where have I heard that before? I'm sure someone somewhere much wiser than me said I ought to be doing that.

Not much chance of getting anything into The Lancet or BMJ of course. So, let's go wild and suppose that I rewrite the survey results, and include the graphics linking bimodal and Likert scoring for the fatigue scale. Any advice or suggestions?

On another tack, I did think of producing a short series of notes on statistics aimed at medical analysis. Bob's seen a draft of one of them. Any thoughts about that? The aim is simply to go right back to basics and put things like averages, standard deviation, conditional probability etc. into proper context. Perhaps I could build up a bit of statistical kudos that way.

It still has to remain a second string though. I haven't given up pushing the PACE analysis.
Some thoughts on peer-reviewed journals off the top of my head.

- You would want one where one didn't have to pay a submission fee; such fees tend to be reasonably substantial I think, e.g. $1000-1500
- You ideally would want one that is open access, or open access after a period.

My impression is that requiring both would rule most journals out: most journals pay for themselves either through having a subscription fee (so generally won't be open access) or else they're open access but pay for themselves by having a submission fee.

Two other criteria come to mind:
- ideally PubMed-listed, although there can be ways around this i.e. getting something that wasn't published in a PubMed-listed journal up on PubMed Central (I'm going to try this myself).
- Finding journals that would publish such a piece.

To me the Bulletin of the IACFS/ME comes to mind, except it's closing. So the new "Fatigue: Biomedicine, Health and Behavior" journal (http://www.iacfsme.org/Portals/0/pdf/IACFSMEJournal_Letter_to_Membership_final.pdf), which is already accepting submissions, is the one I would suggest. However, other people may have other suggestions.
 

user9876

Senior Member
Messages
4,556
Some thoughts on peer-reviewed journals off the top of my head.

- You would want one where one didn't have to pay a submission fee; such fees tend to be reasonably substantial I think, e.g. $1000-1500
- You ideally would want one that is open access, or open access after a period

I don't think you need to subscribe to a journal to submit a paper to one. I'm not sure about the medical world; I'm a computer scientist and I've published in journals that I've not subscribed to.
However, if the journal is not open it makes it harder to make the work publicly available.

I