
When and how can endpoints be changed after initiation of an RCT?

Dolphin

"Endpoints" doesn't simply mean timing but also (and more usually) the outcome measures that are used.

This is probably going to be of minority appeal - if you're interested in, for example, the fact that the PACE Trial authors changed how they said they'd analyse the data, you may find this of interest.

Free full text: http://clinicaltrials.ploshubs.org/article/info:doi/10.1371/journal.pctr.0020018

When and how can endpoints be changed after initiation of a randomized clinical trial?

PLoS Clin Trials. 2007 Apr 13;2(4):e18.

Evans S.

Center for Biostatistics in AIDS Research, Department of Biostatistics, Harvard School of Public Health, Harvard University, Boston, Massachusetts, United States of America. PMID:17443237
 

Dolphin

Just bits I underlined for myself in case anyone is interested:

Introduction

Endpoints are outcome measures used to address the objectives of a clinical trial. The primary endpoint is the most important outcome and is used to assess the primary objective of a trial (e.g., the variable used to compare the effect difference of two treatment groups). A fundamental principle in the design of randomized trials involves setting out in advance the endpoints that will be assessed in the trial [1], as failure to prespecify endpoints can introduce bias into a trial and creates opportunities for manipulation. However, sometimes new information may come to light that could merit changes to endpoints during the course of a trial. This new information might include, for example, results from other trials or identification of better biomarkers or surrogate outcome measures. Such changes can allow incorporation of up-to-date knowledge into the trial design. However, changes to endpoints can also compromise the scientific integrity of a trial. Here I discuss some of the issues and decision-making processes that should be considered when evaluating whether to make changes to endpoints, and discuss the documentation and reporting of clinical trials that have revised endpoints.

Guiding Principles
The principal consideration when evaluating whether to modify an endpoint is whether the decision is independent of the data obtained from the trial to date. If the decision to revise endpoints is independent of the data from the trial, then such revisions may have merit. In fact, Wittes [8] encourages consideration of changes in long-term trials, as medical knowledge evolves or when assumptions made in design of the trial appear questionable. Wittes further argues that researchers may consider changes to the primary endpoint when the trial has airtight procedures to guarantee separation of the people involved in making such changes from data that could provide insight into treatment effect [8].

When Is a Decision Independent of Data?
To evaluate whether a change in endpoint is independent of data from the trial, investigators and reviewers should ask three important questions. First, what is the source of the new information that elicits consideration of the change in endpoints? If the source is external to the trial in question, for example arising from results from another trial, then the revision of endpoints may be credible. Second, have interim data on the endpoint (or related data) from a trial been reviewed? If trial data have not been reviewed, then the revision of endpoints may again be credible. Third, and most importantly, who is making the decision regarding endpoint revision (e.g., trial sponsors or an independent external advisory committee)? Appropriate decision makers should have no knowledge of the endpoint (or related trial data) results. In particular, if interim analyses have been conducted, the decision makers should not have knowledge of those data. Note, however, that even if no formal interim analysis has been conducted, any impressions that the investigators may have of the trial to date may influence decisions regarding changes in endpoints. For example, investigators may have a sense of the endpoint result or a related variable even though formal analysis of the endpoint has not been conducted. An investigator may notice changes in certain patients at his or her site and may attribute these changes to the investigational medication. This can be particularly problematic in unblinded trials. For these reasons, study sponsors, investigators, and data monitoring committees (DMCs) may not be appropriate decision makers for endpoint revisions.

Appropriate Decision Makers
Since the decision to revise endpoints should be independent of the trial data, a DMC that has reviewed interim data may not be appropriate for making decisions regarding endpoint revisions. Even DMC review of pooled data can suggest treatment effects (e.g., in a two-group comparison study of response rates, a very high pooled response implies a relatively high response rate in both groups). In this case, trial leadership may wish to convene an external advisory committee that has not reviewed data from the trial to assess the potential impact on the integrity of the trial and to make recommendations regarding endpoint revision.
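A quick worked example of the point about pooled data (my arithmetic, not from the paper): with two equal-sized arms, the pooled response rate puts a floor under each arm's rate, since even if one arm responded perfectly, the other arm can only be so much lower.

```python
# Illustration only (my numbers, not from the paper): in a two-group trial
# with equal-sized arms, the pooled response rate bounds each arm's rate.
def min_arm_rate(pooled_rate: float) -> float:
    """Worst-case lower bound on either arm's response rate.

    With equal arms, pooled = (r1 + r2) / 2. Even if one arm responded
    perfectly (1.0), the other arm must be at least 2 * pooled - 1.
    """
    return max(0.0, 2 * pooled_rate - 1)

# A "very high pooled response" of 90% forces both arms to be >= 80%.
print(min_arm_rate(0.90))  # 0.8
```

So a DMC that has seen only the pooled rate can still infer that both groups did well, which is why the article treats even pooled review as potentially compromising.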

Scientific Relevance
It is also important to consider the scientific relevance of the endpoints in question. Does the current state of knowledge make the results of the current trial uninformative or inefficient? Is the trial now scientifically uninteresting or irrelevant? If so, then changing endpoints may be constructive, and perhaps even ethically necessary, to ensure that the study generates a scientific contribution. For example, new scientific questions may arise after recently completed trials have already answered the original question of interest. Also, better biomarkers or surrogates may have been identified, or there may have been changes in regulatory oversight.

One should be cautious of potential operational bias induced by the revision of endpoints. Operational bias is created when the conduct of clinical investigators or participants is changed by knowledge (or perceived knowledge) of trial data. Knowledge of revisions to endpoints may influence the actions of clinical investigators or participants as they anticipate the reasons for such revisions. For example, if a decision to change the primary endpoint is made, then participating clinicians and patients may believe that such a change was made due to a lack of efficacy of the intervention. This belief may affect their willingness to participate, affecting accrual and retention.

Documentation and Reporting
If the trial leadership decides to modify endpoints, then appropriate documentation is crucial. Changes should be described in amendments to the protocol and the analysis plan. The registry record for the trial should also be updated.

Changes in endpoints should also be declared when submitting a manuscript to a journal, so that the results can be properly evaluated. Reporting of a clinical trial with any modified endpoint should include: (1) a clear statement describing the fact that information obtained after trial initiation led to the change in endpoint; (2) a description of the reasons (e.g., whether the endpoint was suggested by the data) and decision procedure (e.g., who made the decision and whether data were unblinded); (3) a discussion of the potential biases induced by the change of the endpoints; (4) if warranted, (i.e., if the decision to add endpoints was not independent of the data), a disclaimer that the results should be interpreted with caution and should be confirmed in future trials; and (5) a report of the reasons for excluding endpoints from the analyses and whether this was independent of trial data. Addressing these items will help ensure clarity and transparency of the analyses, enable the evaluation of the independence of the endpoint revision and trial data as well as the potential for selective reporting, allow assessment of the ramifications of the endpoint revision, and help avoid overinterpretation of the data. Researchers may further consider focusing on descriptive analyses using confidence intervals rather than hypothesis testing to avoid overstating the significance of the results.
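As a rough illustration of that last suggestion (hypothetical numbers, nothing to do with any real trial), a descriptive analysis might report a difference in response rates with a confidence interval rather than a p-value. One common approach is a Wald interval for a difference in proportions; the choice of method here is mine, not the article's:

```python
import math

# Hypothetical two-arm response counts (illustration only, not trial data).
responders_a, n_a = 60, 100
responders_b, n_b = 45, 100

p_a, p_b = responders_a / n_a, responders_b / n_b
diff = p_a - p_b

# Wald 95% confidence interval for a difference in proportions
# (normal approximation) -- a descriptive summary rather than a p-value.
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
low, high = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.2f}, 95% CI ({low:.2f}, {high:.2f})")
# difference = 0.15, 95% CI (0.01, 0.29)
```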

Hawkey [12] suggests that journals require submission of the protocol alongside manuscripts describing clinical trial results, to help ensure that the reported endpoints indeed reflect what was defined at the start of the trial. Several journals have adopted this policy, including PLoS Clinical Trials, The Lancet, and the British Medical Journal. Other journals are considering a requirement to submit raw data (see the Harvard School of Public Health's Workshop on Assuring the Integrity of Reporting and Patient Safety in Therapeutic Trials; http://www.biostat.harvard.edu/events/schering-plough/agenda.html). Notably, for industry-sponsored studies, the Journal of the American Medical Association is requiring that analyses be conducted by an independent statistician at an academic institution, in part to protect against post hoc endpoint revisions.
 

Dolphin

I have copied a lot of the sections/paragraphs from the original article but not all.

Conclusions

Revisions to endpoints (particularly primary endpoints) should be uncommon. If not appropriately evaluated, such revisions lead to misguided research and suboptimal patient care. If, however, important scientific knowledge has been gained after a trial begins, then this knowledge should be carefully and responsibly evaluated for incorporation into the trial. We should be open-minded and flexible in situations that may warrant the revision of endpoints and apply appropriate decision-making and reporting procedures when such situations arise.
 

Dolphin

Relevance to PACE Trial protocol changes

It doesn't look to me like there were particular changes in knowledge about ME/CFS that would generally have justified the changes the authors made between the (published) PACE Trial protocol and the final paper.
 

oceanblue

I haven't read the paper, only what was posted here, but I have a few questions and comments based on this:

Do the changes in endpoints discussed in the paper refer to changing the outcomes measured, or the way an outcome is reported? PACE continued to use the SF-36 and CFQ as primary outcomes but changed from categorical to continuous reporting.

The changes were approved by the Trial Steering Committee, which would not have seen any interim data and is independent of the trial investigators (that's the point of it), so it would meet the criteria for an appropriate body to approve the change, which it did. But:
It doesn't look to me like there were particular changes in knowledge about ME/CFS that would generally have justified the changes the authors made between the (published) PACE Trial protocol and the final paper.
Agreed, so I don't know why the trial committee did agree to the changes. I'd love to see the documentation for the meeting where these changes were agreed.

Changes in endpoints should also be declared when submitting a manuscript to a journal, so that the results can be properly evaluated. Reporting of a clinical trial with any modified endpoint should include: (1) a clear statement describing the fact that information obtained after trial initiation led to the change in endpoint; (2) a description of the reasons (e.g., whether the endpoint was suggested by the data) and decision procedure (e.g., who made the decision and whether data were unblinded); (3) a discussion of the potential biases induced by the change of the endpoints; (4) if warranted, (i.e., if the decision to add endpoints was not independent of the data), a disclaimer that the results should be interpreted with caution and should be confirmed in future trials; and (5) a report of the reasons for excluding endpoints from the analyses and whether this was independent of trial data.
The authors seem to have done all of this apart from point 3. Point 1 is flaky - they cited a paper on assessing trial statistics that was published after their protocol, but I'm not sure this paper actually said anything that hadn't been in the literature available when they wrote the protocol. They didn't address the issues very well, but they did address them in the paper. I think they have, as usual, been very slippery here - they haven't done everything right, but they've done enough right to get away with it (while completely breaching the spirit of such guidelines).
 

Dolphin

Do the changes in endpoints discussed in the paper refer to changing the outcomes measured, or the way an outcome is reported? PACE continued to use the SF-36 and CFQ as primary outcomes but changed from categorical to continuous reporting.
No, as I recall the paper doesn't make any point on that specific issue. To me, that's still a change, and I don't recall reading anything that said otherwise.
The piece is only three pages long and contains no numbers, so others could read the full piece if they prefer.

Two specific points. First, they also went from bimodal scoring to Likert scoring with one of the primary measures, so it was not quite simply a change from categorical to continuous scoring (a sketch contrasting the two scoring schemes follows below).

Also, one of the secondary outcome measures was:
The Chalder Fatigue Questionnaire Likert scoring (0,1,2,3) will be used to compare responses to treatment [27].
so it could be called substituting a secondary outcome measure for a primary one. Put another way, the data that were presented as a primary outcome measure for fatigue could still have been presented if they had stuck to the published protocol.
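Here is that sketch: a minimal illustration (made-up responses, not trial data) of how the same 11 Chalder Fatigue Questionnaire answers produce different totals under bimodal (0,0,1,1) and Likert (0,1,2,3) scoring:

```python
# Illustration: the same 11 Chalder Fatigue Questionnaire answers scored
# two ways. Each item is answered 0-3 (hypothetical responses, not trial data).
responses = [2, 2, 3, 1, 2, 2, 1, 3, 2, 2, 1]

bimodal = sum(1 for r in responses if r >= 2)  # 0,0,1,1 scoring: range 0-11
likert = sum(responses)                        # 0,1,2,3 scoring: range 0-33

print(bimodal, likert)  # 8 21
```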

The changes were approved by the Trial Steering Committee, which would not have seen any interim data and is independent of the trial investigators (that's the point of it), so it would meet the criteria for an appropriate body to approve the change, which it did.
If by "independent of the trial investigators" you mean that it did not include the trial investigators, that is not correct, although it is easy to miss on the PACE Trial protocol website:
Trial Steering Committee (TSC)

The Trial Steering Committee (TSC) is responsible for the independent oversight of the progress of the trial, investigation of serious adverse events, and determining the future progress of the trial in the light of regular reports from the DMEC. The TSC is composed of:

Professor Janet Darbyshire (Chair),

Professor Jenny Butler (occupational therapist),

Professor Patrick Doherty (physiotherapist),

Dr Stella Harris (patient representative),

Dr Meirion Llewelyn (consultant physician in infectious diseases), and

Professor Tom Sensky (liaison psychiatrist and CBT therapist).

-----
Observers include:

Professor Mansel Aylward (previously of DWP),

Mr Chris Clark (Action for M.E.),

Peter Craig (Scottish Executive),

Dr. Moira Henderson (DWP)

Susan Lonsdale (DH) and

Dr Sarah Perkins (MRC),

Professor Stephen Stansfeld (Queen Mary University of London, on behalf of the sponsor).

Dr Alison Wearden (Principal Investigator of the FINE trial, an MRC funded, sister study to PACE also researching CFS/ME).

-----
Other members include the three investigators, the trial statisticians, and the trial manager (secretary to the committee).

Membership has been approved by the MRC.

Previous members/observers include:

Dr Robin Buckle (MRC)

Professor Clair Chilvers (R&D, DH)

I also have minutes from meetings that the three investigators attended (and where they were not listed as observers).

Indeed, the set of minutes I'm looking at ends with
MS, PW and TC 24/4/04 Minutes
revised 16/5/2004
so it looks like the PIs were writing up the minutes.

oceanblue said:
Changes in endpoints should also be declared when submitting a manuscript to a journal, so that the results can be properly evaluated. Reporting of a clinical trial with any modified endpoint should include: (1) a clear statement describing the fact that information obtained after trial initiation led to the change in endpoint; (2) a description of the reasons (e.g., whether the endpoint was suggested by the data) and decision procedure (e.g., who made the decision and whether data were unblinded); (3) a discussion of the potential biases induced by the change of the endpoints; (4) if warranted, (i.e., if the decision to add endpoints was not independent of the data), a disclaimer that the results should be interpreted with caution and should be confirmed in future trials; and (5) a report of the reasons for excluding endpoints from the analyses and whether this was independent of trial data.
The authors seem to have done all of this apart from point 3. Point 1 is flaky - they cited a paper on assessing trial statistics that was published after their protocol, but I'm not sure this paper actually said anything that hadn't been in the literature available when they wrote the protocol. They didn't address the issues very well, but they did address them in the paper. I think they have, as usual, been very slippery here - they haven't done everything right, but they've done enough right to get away with it (while completely breaching the spirit of such guidelines).
No, the authors didn't do all of them apart from (3).

One of the aims of the trial was to assess safety. They didn't report the outcome measures they said they would use for this, or mention that they had changed them.

Adverse outcomes
Adverse outcomes (score of 5–7 on the self-rated CGI) will be monitored by examining the CGI at all follow-up assessment interviews [49] (they didn't give us CGI scores of 5: those were combined with 3 and 4, which I don't believe counts, as the combined category is listed as "no change" and we don't get the percentage for it). An adverse outcome will be considered to have occurred if the physical function score of the SF-36 [28] has dropped by 20 points from the previous measurement (they didn't give us this information: they changed this to a "decrease of 20 or more between baseline and any two consecutive assessment interviews"). This deterioration score has been chosen since it represents approximately one standard deviation from the mean baseline scores (between 18 and 27) from previous trials using this measure [23,25]. Furthermore, the RN will enquire regarding specific adverse events at all follow-up assessment interviews.
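To make that change concrete, a minimal sketch (made-up scores, not trial data) contrasting the two deterioration definitions; with these numbers a participant is flagged under the protocol definition but not under the revised one:

```python
# Hypothetical SF-36 physical function scores: baseline then four follow-ups
# (illustration only, not trial data).
scores = [60, 65, 40, 55, 50]

# Protocol definition: a drop of 20 points from the previous measurement.
protocol_flag = any(prev - cur >= 20 for prev, cur in zip(scores, scores[1:]))

# Revised definition: a decrease of 20 or more between baseline and any
# two consecutive assessment interviews.
baseline, followups = scores[0], scores[1:]
revised_flag = any(
    baseline - a >= 20 and baseline - b >= 20
    for a, b in zip(followups, followups[1:])
)

print(protocol_flag, revised_flag)  # True False
```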

There was at least one other change I can recall (apart from outcome measures they didn't report):
An operationalised Likert scale of the nine CDC symptoms of CFS
They didn't give us Likert scores for the total symptoms or the two symptoms they listed; they just gave presence/absence.

Also, with regard to the primary outcome measures, as well as giving continuous scoring, they could still have given the scores for improvement (see the third quoted paragraph below):
The 11 item Chalder Fatigue Questionnaire measures the severity of symptomatic fatigue [27], and has been the most frequently used measure of fatigue in most previous trials of these interventions. We will use the 0,0,1,1 item scores to allow a possible score of between 0 and 11. A positive outcome will be a 50% reduction in fatigue score, or a score of 3 or less, this threshold having been previously shown to indicate normal fatigue [27].

The SF-36 physical function sub-scale [29] measures physical function, and has often been used as a primary outcome measure in trials of CBT and GET. We will count a score of 75 (out of a maximum of 100) or more, or a 50% increase from baseline in SF-36 sub-scale score as a positive outcome. A score of 70 is about one standard deviation below the mean score (about 85, depending on the study) for the UK adult population [51,52].

Those participants who improve in both primary outcome measures will be regarded as overall improvers.
We are instead given figures for improvement, which is defined in a different way:

A clinically useful difference between the means of the primary outcomes was defined as 0.5 of the SD of these measures at baseline [31], equating to 2 points for Chalder fatigue questionnaire and 8 points for short form-36. A secondary post-hoc analysis compared the proportions of participants who had improved between baseline and 52 weeks by 2 or more points of the Chalder fatigue questionnaire, 8 or more points of the short form-36, and improved on both.
which, when reported in the results, sounds like improvement scores (I'm not saying people would think it was the same as the protocol, but the wording is similar):
64 (42%) of 153 participants in the APT group improved by at least 2 points for fatigue and at least 8 points for physical function at 52 weeks, compared with 87 (59%) of 148 participants for CBT, 94 (61%) of 154 participants for GET, and 68 (45%) of 152 participants for SMC. More participants improved after CBT compared with APT (p=0.0033) or SMC (p=0.0149), and more improved with GET compared with APT (p=0.0008) or SMC (p=0.0043); APT did not differ from SMC (p=0.61; webappendix p 2).
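For what it's worth, here is a minimal sketch (made-up scores, not trial data) putting the protocol's "positive outcome" definition beside the paper's post-hoc improvement criterion; the two can classify the same participant differently:

```python
# Hypothetical participant (not trial data): 11 CFQ item answers (each 0-3)
# at baseline and 52 weeks, plus SF-36 physical function scores.
base_items = [2] * 11
wk52_items = [2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1]
sf36_base, sf36_52wk = 40, 50

def bimodal(items):  # protocol scoring (0,0,1,1): range 0-11
    return sum(1 for r in items if r >= 2)

def likert(items):   # scoring used in the paper (0,1,2,3): range 0-33
    return sum(items)

# Protocol: positive outcome on each primary measure, then "overall improver".
cfq_positive = (bimodal(wk52_items) <= 0.5 * bimodal(base_items)
                or bimodal(wk52_items) <= 3)
sf36_positive = sf36_52wk >= 75 or sf36_52wk >= 1.5 * sf36_base
protocol_improver = cfq_positive and sf36_positive

# Paper (post hoc): improved by >= 2 Likert points and >= 8 SF-36 points.
posthoc_improver = (likert(base_items) - likert(wk52_items) >= 2
                    and sf36_52wk - sf36_base >= 8)

print(protocol_improver, posthoc_improver)  # False True
```

With these numbers the participant counts as improved under the post-hoc definition but would not have been an "overall improver" under the published protocol, which is the substance of the concern above.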