New attempt to avoid releasing data on 'recovery' from PACE

Esther12

Senior Member
Messages
13,774
There's also the problem with conflating what mathematicians would consider a fair 'normal range' for disability with what the lay media would understand when being told that treatments got patients 'back to normal'.
 

Dolphin

Senior Member
Messages
17,567
The raw data isn't provided, but the graph shows a sharp cut-off at 100 (the ceiling effect is reported), along with the mean/SD for different age ranges:

http://health.adelaide.edu.au/pros/...llbeing_south_australian_population_norms.pdf

For the 35-44 age group, the 25th percentile PF score is 90 - and this is what I'd consider the cutoff for 'normal'.

At 45-54, it drops to 80 (reflecting the increased prevalence of illness: arthritis, type 2 diabetes, etc.).
I think they shouldn't collapse this into a single value, but should use different thresholds for people of different ages. Once those thresholds are defined, the analysis works the same way.
(Alternatively, one could derive a single value from an equivalent general/healthy population, i.e. weighting it by age so it matches the study population.)
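As a rough sketch of what I mean (the percentile values and age mix below are invented placeholders, not the published South Australian norms or the PACE sample), one could either apply age-specific thresholds or derive a single age-weighted one:

```python
# Sketch only: hypothetical 25th-percentile SF-36 PF thresholds by age band,
# and a hypothetical age mix for a trial sample. Not real norms or PACE data.
pf_25th_by_age = {"18-34": 95, "35-44": 90, "45-54": 80, "55-64": 70}
study_age_mix = {"18-34": 0.25, "35-44": 0.35, "45-54": 0.30, "55-64": 0.10}

# Option 1: judge each participant against the threshold for their own age band.
def meets_age_specific_threshold(pf_score, age_band):
    return pf_score >= pf_25th_by_age[age_band]

# Option 2: a single threshold, weighted to reflect the sample's age mix.
weighted_threshold = sum(pf_25th_by_age[band] * w for band, w in study_age_mix.items())

print(f"Age-weighted single threshold: {weighted_threshold:.2f}")   # 86.25 with these numbers
print(meets_age_specific_threshold(85, "35-44"))                    # False: below the 90 cut-off
```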
 

Dolphin

Senior Member
Messages
17,567
As I've argued before, I have problems with people satisfying 'recovery' simply by being within a certain amount of the mean (or median): if they mostly have worse scores than the mean/median, that doesn't suggest they are a population like the general or healthy population.

So what I think needs to be done is to remove people from this 'recovered' group until one has a group where there isn't a statistical difference between it and the comparison population.

However, even then, one might still end up with a group that has a lower mean, so continuing to remove individuals until the means match up with the general population might be a better way to go.
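A very crude sketch of that last idea, with made-up scores rather than any PACE data, just to show the mechanics of trimming until the means match:

```python
# Sketch only: repeatedly drop the lowest scorer from a 'recovered' group until
# its mean is at least the reference (general-population) mean. Made-up scores.
def trim_until_mean_matches(recovered_scores, reference_mean):
    group = sorted(recovered_scores, reverse=True)
    while group and sum(group) / len(group) < reference_mean:
        group.pop()  # removes the lowest remaining score
    return group

scores = [60, 65, 70, 75, 80, 85, 90, 95, 100]
kept = trim_until_mean_matches(scores, reference_mean=85)
print(kept, sum(kept) / len(kept))   # [100, 95, 90, 85, 80, 75, 70] with mean 85.0
```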
 

Dolphin

Senior Member
Messages
17,567
One way of summarising one of the points user9876 is making might be a sort of cliff effect: where, say, there is a huge difference (in real terms) between a score of 70 and a score of 80 on a scale, but a much smaller difference between a score of 60 and a score of 70 (or 80 and 90).

My point on this: it might not be so important when comparing the means of different intervention groups (though it could be in some situations), but might be very important when trying to claim certain values represent recovery.

* These numbers were chosen at random
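A toy illustration of that kind of cliff effect (the mapping below is invented purely to show the idea, as are the scores above):

```python
# Toy example: equal 10-point steps on a questionnaire need not correspond to
# equal real-world differences. The mapping is invented (say, hours of activity per week).
real_world_capacity = {60: 20, 70: 25, 80: 60, 90: 70}

def real_difference(score_a, score_b):
    return real_world_capacity[score_b] - real_world_capacity[score_a]

print(real_difference(60, 70))   # 5:  small real change for a 10-point step
print(real_difference(70, 80))   # 35: large real change for the same-sized step
```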
 

Dolphin

Senior Member
Messages
17,567
But if using a well-defined population, my understanding is that the common methodology for a 'normal range' is to use +/-2SD, which (for a roughly normal distribution) cuts off the top and bottom 2.5% of values, so it includes 95% of the population.

Edit: This seems like a sensible definition of a 'normal range' to me, but I'm not sure the general healthy population can be considered a good example of a 'well-defined' population.
Not sure if I've said this before (in another thread), but this reminds me of what I understand happened in a debate over TSH values, which are used to measure thyroid functioning. Normal values of up to around 5 were set. However, these have had to be reduced, because abnormal thyroid functioning is so common in the population that not everyone in the middle 95% has normal thyroid functioning.
Similarly, one can't use 95% of the population in terms of functioning, as so many people have problems with health and functioning that couldn't be considered normal.
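A minimal simulation of this point, with entirely made-up numbers rather than any real norms or PACE data: when the reference population itself contains many unwell people, thresholds derived from the whole population (mean minus one SD, or the 2.5th percentile) end up far below what most would call normal functioning.

```python
# Made-up simulation: a reference population that mixes broadly healthy people
# (clustered near the 100-point ceiling) with a sizeable unwell minority.
import numpy as np

rng = np.random.default_rng(0)
healthy = np.clip(rng.normal(95, 8, 7000), 0, 100)   # 70% broadly healthy
unwell = np.clip(rng.normal(55, 20, 3000), 0, 100)   # 30% with long-term conditions
population = np.concatenate([healthy, unwell])

mean, sd = population.mean(), population.std()
print(f"mean = {mean:.1f}, SD = {sd:.1f}")
print(f"mean - 1 SD:      {mean - sd:.1f}")                       # 'normal range' style cut-off
print(f"2.5th percentile: {np.percentile(population, 2.5):.1f}")  # +/-2SD-style lower bound
print(f"25th percentile:  {np.percentile(population, 25):.1f}")
print(f"median of the healthy subgroup: {np.median(healthy):.1f}")
```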
 

Dolphin

Senior Member
Messages
17,567
Also, I remember somebody pointing out that in the minutes of one of the PACE Trial meetings, they wanted a reasonable gap between the positive outcome and the entry criteria - if anyone has that, it would be useful to have the reason given.
One person wrote to me off-list about this:


The PACE Trial
First Meeting of Trial Steering Committee
22 April 2004

Review of the Protocol
The major points were as follows:
"7. The outcome measures were discussed. It was noted that they may need to be an adjustment of the threshold needed for entry to ensure improvements were more than trivial. For instance a participant with a Chalder score of 4 would enter the trial and be judged improved with an outcome score of 3. The TSC suggested one solution would be that the entry criteria for the Chalder scale score should be 6 or above, so that a 50% reduction would be consistant with an outcome score of 3. A similar adjustment should be made for the SF-36 physical function sub-scale. It was also suggested that as well as measuring the proportions of participants who improved in fatigue and functioning seperately, we ought to also look at the proportions who improve on both."



----------------------------------------------------------------------------------------


Request for Substantial Amendment 5.1 to the PACE Trial made to the West Midlands MREC, 20th February, 2006


“The mean SF-36 sub-scale score in previous secondary care trials was about 45, with a standard deviation of about 20. The mean SF-36 physical function subscale score for the UK working age general population has been shown to be 85 with a standard deviation of 15. Increasing the threshold will improve generalisation since we are currently excluding disabled patients from the trial who are offered similar treatments involved in the CFS clinics involved in the trial. The TMG and TSC believe this will also make a significant impact on recruitment.

This would mean the entry criterion on this measure was only 5 points less than the categorical positive outcome of 70 on this scale. We therefore propose an increase of the categorical positive outcome from 70 to 75, reasserting a 10 point score gap between entry criterion and positive outcome. The other advantage of changing to 75 is that it would bring the PACE trial into line with the FINE trial, an MRC funded trial for CFS/ME and the sister study to PACE. This small change is unlikely to influence power calculations or analysis.”
The person who sent me this added:
The one that says 'Substantial Amendment' is from a request for Substantial Amendment to the WMREC, not a PACE TSC or TMG meeting. This means that the WMREC was potentially given false information by the PACE investigators in approving that request since they have recently stated that they have no intention of doing that analysis.
 

Esther12

Senior Member
Messages
13,774
One way of summarising one of the points user9876 is making might be a sort of cliff effect: where, say, there is a huge difference (in real terms) between a score of 70 and a score of 80 on a scale, but a much smaller difference between a score of 60 and a score of 70 (or 80 and 90).

My point on this: it might not be so important when comparing the means of different intervention groups (though it could be in some situations), but might be very important when trying to claim certain values represent recovery.

* These numbers were chosen at random

Yeah - I agree. This would matter less if the criteria for recovery were similar to what you mentioned in post #43, and it becomes increasingly likely to be a problem as looser criteria are used.

Ta for the other info too.
 

user9876

Senior Member
Messages
4,556
I think they used EUROqol for cost-benefit analysis, not sf36.

They do, but they also quote the other measures. There are some interesting questions around the accuracy of the EQ-5D scale, and the two instruments are very different. Basically, there are questions around the way they convert the different dimensions into a single utility value. A cost-effective treatment in one country may not be cost-effective in another, because each country uses different weightings. Also, I would argue that there is error in their estimate of the utility function, and where there are only small benefits (such as those reported with PACE) these errors may become significant. It's an argument that casts doubt on the reliability of the result.
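To illustrate what I mean about country-specific weightings (the dimension decrements below are invented placeholders, not any real EQ-5D value set, and real tariffs are more complicated), the same answers can produce different utility gains under different tariffs:

```python
# Sketch only: EQ-5D-style answers (five dimensions, 1 = no problems, 3 = severe)
# converted to a single utility with two invented country tariffs.
state_before = (2, 2, 2, 2, 2)
state_after = (2, 1, 2, 2, 1)   # a small improvement on two dimensions

tariff_a = {2: 0.10, 3: 0.30}   # hypothetical per-dimension decrements, country A
tariff_b = {2: 0.06, 3: 0.35}   # hypothetical per-dimension decrements, country B

def utility(state, tariff):
    return 1.0 - sum(tariff.get(level, 0.0) for level in state)

for name, tariff in [("country A", tariff_a), ("country B", tariff_b)]:
    gain = utility(state_after, tariff) - utility(state_before, tariff)
    print(f"{name}: utility gain = {gain:.2f}")   # 0.20 vs 0.12 with these made-up tariffs
```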

1. Agreed that mean and SD not appropriate when looking at the general population.

But should be OK for comparing the means of two patient groups eg SMC vs CBT?

There is an argument that it is inappropriate to calculate the mean and SD from the general population distribution, which they in turn use to calculate the normal range.

2. Not an interval scale
Yes, pretty tough to make an interval scale. I'm pretty sure there is no evidence SF36 is an interval/ratio scale and that's true of most questionnaires generally. Every now and then statisticians complain this invalidates some statistical interpretations, but this view never seems to get much traction :)

I suspect it is an ordered scale rather than simply a set of classes, so it does have some utility. Also, even if some patients disagree about some of the ordering, the main thing measured is the within-subject score (pre/post), and patients presumably score themselves consistently there.

Also, with the SF-36, for a modest change most item scores don't change at all, i.e. they remain 'not limited at all' or 'limited a lot' (which includes 'impossible'). Probably only 2 or 3 of the 10 questions change for most people, and in each case they are moving from:
- limited a lot > limited a little, or
- limited a little > not limited
Not sure if this helps make things more consistent or not.

As you say, some statisticians complain but fail to get traction; still, I think we should keep trying. To my mind it becomes particularly important when the gains are small.

It's very hard to design a good scale; to me this suggests the need to quote individual item values rather than lumping lots of things into an ad hoc scale. The fatigue scale is much worse than the SF-36.

I think the point about changes is this: say you fill in the questionnaire and come out at 60. Now you have some treatment and make a very small improvement. That very small improvement may allow you to change your answers to 3 questions, giving a score of 75. This would be because a number of the particular questions chosen for the SF-36 scale happen to fall around this region of physical activity.

Now a different person who came in at 50 may make the same improvement and only be able to change the answer to one question, moving to 55. Again, this is due to how the questions test different aspects of physical function.

So this means that the change score is sensitive (or, to be accurate, may be sensitive) to the starting point.

Given a fair distribution of the groups across the scale, it may not be important, since these errors may cancel out. But I'm not sure about this. I seem to remember that the different groups in PACE also had differing starting means, so the errors may become important.

The argument, though, is one of accuracy rather than one suggesting their results are too high (I think the latter would follow from an argument that some groups were given a psychological framing that they are better and hence give better answers). The point about accuracy is that it should feed into significance tests, although I have no clue as to how this would work.
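A toy way of seeing that starting-point sensitivity (the item 'difficulties' and the scoring rule are completely made up, not the actual SF-36 items):

```python
# Toy model: ten items scored 0 ('limited a lot'), 5 ('limited a little'),
# 10 ('not limited'), summed to a 0-100 score. Each item has an invented
# 'difficulty'; the same small underlying improvement shifts a different
# number of items depending on the starting point.
ITEM_DIFFICULTY = [10, 20, 30, 40, 50, 55, 58, 62, 80, 90]

def score(ability):
    total = 0
    for d in ITEM_DIFFICULTY:
        if ability >= d + 5:
            total += 10      # 'not limited'
        elif ability >= d - 5:
            total += 5       # 'limited a little'
    return total

for ability in (45, 55):
    before, after = score(ability), score(ability + 6)   # same underlying improvement
    print(f"ability {ability}: {before} -> {after} (change {after - before})")
```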
 

user9876

Senior Member
Messages
4,556
Not sure if I've said this before (in another thread), but this reminds me of what I understand happened in a debate over TSH values, which are used to measure thyroid functioning. Normal values of up to around 5 were set. However, these have had to be reduced, because abnormal thyroid functioning is so common in the population that not everyone in the middle 95% has normal thyroid functioning.
Similarly, one can't use 95% of the population in terms of functioning, as so many people have problems with health and functioning that couldn't be considered normal.

I think there is a problem with defining normal simply by taking a statistical model. 'Normal' suggests that some mechanism is working correctly. If, as Dolphin points out, you just take an overall population, you don't know whether all or even most of them have a working mechanism.

The medical world seems not to take a great deal of care over how statistics are used.
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
Sorry Esther. I don't think you'll like this answer (or non-answer). From the House of Lords:

The Countess of Mar (Crossbench)
To ask Her Majesty's Government whether the refusal by researchers to publish trial data on recovery rates and positive outcome rates specified in their application for grant funding provided by the Department of Health, the Medical Research Council, the Scottish Office and the Department for Work and Pensions for "Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial" contravenes the agreement the researchers entered into when the award was made.

Lord Marland (Conservative) Dept. Business Innovation and Skills
The PACE trial: A Randomised Controlled Trial of CBT, graded exercise, adaptive pacing and usual medical care for the chronic fatigue syndrome, was funded by a Medical Research Council (MRC) grant to Queen Mary, University of London.
While the MRC strongly encourages the publication and dissemination of the findings of all MRC-funded research it does not require the publication of underlying research data. As an MRC grant, the study was subject to the RCUK (Research Council UK) and MRC terms and conditions; no additional requirements around publication were specified by the other funders of the study.
The findings of the PACE study have been reported in The Lancet, in March 2011 (published online in February 2011) and in PLOS ONE in August 2012. These papers included the results of analyses of positive outcome rates.
The MRC is aware that Queen Mary University London received a request under the Freedom of Information Act relating to data on recovery rates and positive outcome rates which relates to an analysis initially planned by the investigators in the original protocol for the study and which was published in 2007. It is understood that the request was declined by the university as this originally planned analysis was superseded and therefore not undertaken during the study.

Hansard 26 November 2012: http://www.theyworkforyou.com/wrans/?id=2012-11-26a.15.0&s=chronic+fatigue+syndrome#g15.2
 

Esther12

Senior Member
Messages
13,774
Ta Fire. I do worry about the people who write those non-answers. Surely they have to hate themselves. I never would have expected anyone at the MRC or in government to be interested in making more data public so that patients could be better informed, though.
 

user9876

Senior Member
Messages
4,556
Ta Fire. I do worry about the people who write those non-answers. Surely they have to hate themselves. I never would have expected anyone at the MRC or in government to be interested in making more data public so that patients could be better informed, though.
It's not quite a non-statement, as it says that the MRC are aware that they are ignoring the original protocol. I don't know whether the lack of any reported action by the MRC can be assumed to be agreement. But just the fact that the MRC knows should put political pressure on the MRC as debates happen about drug trial protocols. For example, if the Commons Health Select Committee were to ask the MRC whether all the research they funded published the data as specified in the original protocol, they would have to say no.
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
It's not quite a non-statement, as it says that the MRC are aware that they are ignoring the original protocol. I don't know whether the lack of any reported action by the MRC can be assumed to be agreement. But just the fact that the MRC knows should put political pressure on the MRC as debates happen about drug trial protocols. For example, if the Commons Health Select Committee were to ask the MRC whether all the research they funded published the data as specified in the original protocol, they would have to say no.

I think as a patient, I would want to see data for the alleged theory that 'Got ME? Get out and exercise!' and all the support that headline (I think it was a headline - or was very similar in tone) received from the authors. I think if I were to return to the answer afforded in the House of Lords - it would be a legitimate concern to express. If there is evidence that exercise can be as effective as it was sold to be - then we'd all like to see GET in place across the country. So 'put up or shut up'.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
From January 2013, the BMJ says that it will require a commitment from trialists to make their data available on reasonable request.
Dr Fiona Godlee and Dr Trish Groves (BMJ Editor and Deputy Editor) have said that this decision has been made because "it is no longer possible to pretend that a report of a clinical trial is sufficient to allow full independent scrutiny of the results."

BMJ articles:
http://group.bmj.com/group/media/la...sts-believe-data-should-be-more-easily-shared
http://www.bmj.com/content/345/bmj.e7980


It seems that Ben Goldacre has been working with Dr Sarah Wollaston MP (Totnes) (Con), who has already raised this issue (access to trial data) in parliament, and has had a response (and an invitation to a meeting) from the govt. See the Hansard entry here:
http://www.publications.parliament....121023/debtext/121023-0001.htm#12102347000762

In the govt response to Dr Sarah Wollaston, the govt makes a reference to European regulations. The EU is said to be looking at some issues in relation to medical trials, but I'm not sure exactly what Europe is proposing, and they might not be proposing full public access to data. In any case, Ben Goldacre points out in a blog post that any future European regulations will only relate to future trials, not give us access to data from past trials, so this doesn't help us with PACE:
http://www.badscience.net/2012/10/questions-in-parliament-and-a-briefing-note-on-missing-trials/
 

Sean

Senior Member
Messages
7,378
The MRC is aware that Queen Mary University London received a request under the Freedom of Information Act relating to data on recovery rates and positive outcome rates which relates to an analysis initially planned by the investigators in the original protocol for the study and which was published in 2007. It is understood that the request was declined by the university as this originally planned analysis was superseded and therefore not undertaken during the study.

There is no reason not to release the data now, if they plan no further use of it themselves.

Failure to release all data after finishing an analysis is scientific fraud in my opinion, as claims arising from a single source data analysis cannot be independently tested.

Shitty, shabby, shonky stuff.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Queen Mary, University of London

Research Data Management Policy

http://www.arcs.qmul.ac.uk/policy_zone/research/QMUL_research_data_management_policy_2012.pdf

"Most grant applications for research
which will generate digital data sets require a data management plan that meets the
2011 Research Councils UK (RCUK) policy; this states that: ‘Publicly funded
research data are a public good, produced in the public interest, which should be
made openly available with as few restrictions as possible in a timely and responsible
manner that does not harm intellectual property:’..."
...
"RCUK now require all funded universities to have a data management policy and
road map that will meet their expectations for data sharing in place by 1 May 2012
with full implementation by 2015. This is to ensure we make it clear how publicly
funded data can be accessed for at least ten years after publication. Our policy
should deliver the following criteria:
publicly funded research data should be made openly available in a timely
manner;
data with acknowledged long term value should be made accessible;..."
...
"Policy
Where possible publicly funded research data should be made available for access and re-use."
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
There is no reason not to release the data now, if they plan no further use of it themselves.

Failure to release all data after finishing the analysis is scientific fraud in my opinion, as claims arising from the data analysis cannot be independently tested.

Shitty, shabby, shonky stuff.
A1 Sean.

If they don't release the entire (redacted) data set, the case for considering the entire trial to be scientifically inadmissible is open and shut as far as I'm concerned. Ignoring the parliamentary answers and all the rest of the spin, and the details of wording and timing of the MRC's policy, if they refuse to release the data then this 'study' has no claim whatsoever to call itself "Science". Period. "Appeal to authority" is rightly supposed to be a fallacy, ergo this kind of pseudo-scientific publication has no basis for its claim to be 'scientific' to any rational mind.

Here is the MRC's actual policy on open data:

http://www.mrc.ac.uk/Ourresearch/Ethicsresearchguidance/datasharing/Policy/index.htm


The OECD Principles and Guidelines for Access to Research Data from Public Funding (2007) promotes a culture of openness and sharing to increase “the return on public investments in scientific research,” exchange of good practice, awareness of the costs, benefits and restrictions on sharing.

The MRC policy is also consistent with the Research Councils’ Common Principles on Data Policy which in turn reflect the OECD principles.

Our data-sharing policy applies to all MRC-funded research. It does not prescribe when or how researchers should preserve and share data but requires them to make clear provision for doing so when planning and executing research. The policy was approved by the Council in 2005. This September 2011 version includes some minor changes that do not alter the intent of the policy.
Policy statement

The MRC expects valuable data arising from MRC-funded research to be made available to the scientific community with as few restrictions as possible so as to maximize the value of the data for research and for eventual patient and public benefit. Such data must be shared in a timely and responsible manner.

Note also the RCUK's Principles:

http://www.rcuk.ac.uk/research/Pages/DataPolicy.aspx

Publicly funded research data are a public good, produced in the public interest, which should be made openly available with as few restrictions as possible in a timely and responsible manner that does not harm intellectual property.

The original PACE funding approval was obtained just before the MRC's policy came in (conveniently enough). But PACE's funding has been renewed throughout the years since that policy was established 7 years ago. Queen Mary's are in possession of all the relevant raw data. Only their lack of political will prevents them from releasing that data. It's my opinion that the MRC would quite likely wish for them to release it - that would certainly be in accordance with the MRC's present policy. It's quite possible that public pressure might persuade the MRC to put pressure on Queen Mary's and the PACE authors to release this data. It's good that all this is now on the record in parliament, but a campaign to ask the MRC to demand the release of the data seems most appropriate to me.

We paid for the collection of this data: it does not legitimately belong to the authors or to Queen Mary's for them to spin it however they wish. It was publicly funded. The data rightly belongs to us, and only legal anachronisms have allowed them to keep it from us. If the PACE authors want to claim scientific authority, what do they have to fear from releasing their data? Why should they expect us to trust their biased interpretation of that data, and accept it as "science"? Do they not wish us to behave scientifically and follow the data? If not, why not? What legitimate scientist has anything to fear from the release of the raw data which they claim supports their published findings? If the PACE trial wishes to claim any kind of authority as "science" then the data must be released. If they insist on keeping the data secret, then let nobody argue that PACE has any legitimate claim to call itself "science". Biased commentary on secret data is not a bona fide scientific publication, however prestigious the journal that rubber-stamps it: without open data, in a case like this, I see nothing but a pseudo-scientific scam.
 

Holmsey

Senior Member
Messages
286
Location
Scotland, UK
Can't go into detail, but after some enquiries I have it that the recovery data is currently with an unnamed medical journal, under peer review, and is then expected to be published. No doubt if it is, it'll receive the same scrutiny as the data released so far, or more.

I also have it that this explanation was passed back to whoever made the FOI request for that same data (Hooper? Mar? Other?) as part of the reason for non-disclosure, but I haven't come across any reference to that on this or other sites (at least as yet).

Can anyone corroborate or refute?

Thanks,
 

Esther12

Senior Member
Messages
13,774
Holmsey: my understanding is that they have created new criteria for 'remission' (details in the first post of this thread), but that they do not want to release the 'recovery' data as it was laid out in the original protocol. It is this remission data which is now under peer review. This could have changed, but if so, I do not think anyone has been informed (there's a link in the first post to a web page that should update info about the FOI).

Hopefully they will have changed their mind following the FOI and publicity, and added the recovery data to their original paper, but I've not seen any evidence of this yet.
 

user9876

Senior Member
Messages
4,556
Can't go into detail, but after some enquiries I have it that the recovery data is currently with an unnamed medical journal, under peer review, and is then expected to be published. No doubt if it is, it'll receive the same scrutiny as the data released so far, or more.

I also have it that this explanation was passed back to whoever made the FOI request for that same data (Hooper? Mar? Other?) as part of the reason for non-disclosure, but I haven't come across any reference to that on this or other sites (at least as yet).

Can anyone corroborate or refute?

Thanks,

In their FoI response they state
With regards to the recovery rates: the criteria thresholds for measuring
recovery in the Trial were changed in the light of more detailed
consideration of previous published studies (making the Trial’s analyses
either consistent with these studies or more stringent) and in the light
of newly published work (on the normal range of fatigue in the U.K.
population). These changes were made before analysing any data. A paper
that includes all analyses on recovery is currently under review by a
peer-reviewed journal. This was submitted for publication two months ago
and we expect to know whether this has been accepted for publication by
the end of this year. If accepted, we would expect this to be published in
the first three months of 2013.
http://www.whatdotheyknow.com/request/pace_trial_recovery_rates_and_po

So they do say they are publishing something they call recovery, but not recovery as they originally said it would be defined. Of course, the peer reviewers should insist that they publish rates as defined in the original protocol, especially as the study was not blinded.

Expect a dodgy definition of recovery and some headline figures on the assumption people won't read beyond the headlines.