PACE Trial paper: Measurement error, time lag, unmeasured confounding etc.

Dolphin

Senior Member
I think I will leave this to others to try to tackle

Free full text: https://ora.ox.ac.uk/objects/uuid:e2cb5c9a-3661-4a84-be5e-816e453eea9b/datastreams/ATTACHMENT01

https://ora.ox.ac.uk/objects/uuid:e2cb5c9a-3661-4a84-be5e-816e453eea9b

Reference: Goldsmith, KA, Chalder, TC, White, PD et al., (2016). Measurement error, time lag, unmeasured confounding: considerations for longitudinal estimation of the effect of a mediator in randomised clinical trials. Statistical Methods in Medical Research.

Title: Measurement error, time lag, unmeasured confounding: considerations for longitudinal estimation of the effect of a mediator in randomised clinical trials

Abstract:

Clinical trials are expensive and time-consuming and so should also be used to study how treatments work. This would allow evaluation of theoretical treatment models and refinement and improvement of treatments. Treatment processes can be studied using mediation analysis. Randomised treatment makes some of the assumptions of mediation models plausible, but the mediator – outcome relationship remains one that can be subject to bias. In addition, mediation is assumed to be a temporally ordered longitudinal process, but most mediation studies to date have been cross-sectional and unable to explore this assumption. This study used longitudinal structural equation modelling of mediator and outcome measurements from the PACE trial of rehabilitative treatments for chronic fatigue syndrome (ISRCTN 54285094) to address these issues. In particular, autoregressive and simplex models were used to study measurement error in the mediator, different time lags in the mediator – outcome relationship, unmeasured confounding of the mediator and outcome, and the assumption of a constant mediator – outcome relationship over time. Results showed that allowing for measurement error and unmeasured confounding were important. Concurrent rather than lagged mediator – outcome effects were more consistent with the data, possibly due to the wide spacing of measurements. Assuming a constant mediator – outcome relationship over time increased precision.
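
For readers less familiar with the jargon, the "a path" and "b path" language used throughout the paper comes from the standard single-mediator model. The sketch below is a textbook formulation for orientation only, not necessarily the exact parameterisation used in the paper's longitudinal SEM, which generalises this to repeated measures of the mediator and outcome:

```latex
% Standard single-mediator model (textbook sketch, for orientation only):
% Z = randomised treatment, M = mediator (e.g. fear avoidance),
% Y = outcome (e.g. SF-36 physical functioning).
\begin{align*}
M &= i_M + a\,Z + e_M \\
Y &= i_Y + c'\,Z + b\,M + e_Y
\end{align*}
% Mediated (indirect) effect = a \times b; direct effect = c'.
```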

Publication status: In press
Peer Review status: Peer reviewed
Version: Accepted manuscript
Funder: Medical Research Council

Funder: Department of Health for England

Funder: The Scottish Chief Scientist Office

Funder: Department for Work and Pensions

Notes: © The Author(s) 2016. This article has been accepted for publication in Statistical Methods in Medical Research.

About The Authors
Goldsmith, KA
Chalder, TC
White, PD
Sharpe, Michael (Oxford, MSD, Psychiatry; St Cross College)
 
The study’s strengths lay in the use of high quality data stemming from a rigorously conducted trial

hmmm....

Also, given the apparent simultaneous early change in mediators and outcomes in PACE it may be fruitful to collect more measurements earlier in the process to clarify mediator and outcome trajectories.

Maybe, seeing as your 'mediator' was a questionnaire about fear of activity, and your outcome was a questionnaire about physical functioning, the simultaneous change showed that it was not fear of activity that was mediating the change in participants' SF-36 Physical Functioning scores?

As @Simon had pointed out.

I only skimmed through without trying to take it all in (I planned to only look for interesting new results, but ended up reading more than I planned). This bit from the discussion seemed to summarise their interpretation of things:

While lagged mediator – outcome paths would be more consistent with a causal effect, models with contemporaneous mediator – outcome relationships fitted better. Assuming a constant mediator – outcome relationship over time was plausible and brought a large gain in precision. Our findings here using longitudinal measures supported our earlier finding using a single measure of both mediator and outcome that fear avoidance mediated the effect of treatment on physical functioning.

The superiority of the simplex over the autoregressive models and the results of the simulation study clearly showed it was important to account for measurement error in the mediator, perhaps more so than accounting for unmeasured confounding. Models with lagged mediator – outcome relationships followed the classical measurement error paradigm where error dampens effects and so taking account of it increased the magnitude of effects. On the other hand, the contemporaneous mediator – outcome effects were smaller in the simplex models as compared to the autoregressive models. Complex effects of accounting for measurement error in multi-equation models have been noted previously [11]. Measurement error was accounted for in this study by using the simplex models, but it is also important to try to do more to address this issue through improved measurement of mediators and outcomes.
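
Roughly, the distinction being drawn can be sketched as follows (a schematic only, not the paper's exact equations): the autoregressive models regress the observed questionnaire scores on each other directly, while the simplex models treat each observed score as a noisy measure of a latent "true" score, so measurement error is modelled explicitly.

```latex
% Schematic contrast only -- not the paper's exact parameterisation.
% Autoregressive (observed scores regressed directly):
\begin{align*}
M_t &= \alpha_M M_{t-1} + u_t, & Y_t &= \alpha_Y Y_{t-1} + b\,M_t + v_t .
\end{align*}
% Simplex (observed score = latent true score + measurement error):
\begin{align*}
M_t &= m_t + \epsilon_t, & m_t &= \alpha\, m_{t-1} + \zeta_t .
\end{align*}
% Classical attenuation: if \lambda = Var(m_t)/Var(M_t) is the reliability,
% a naive regression of Y_t on the error-prone M_t estimates roughly
% \lambda b rather than b, i.e. the effect is dampened towards zero.
```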

Accounting for measurement error led to a small loss in precision. Instrumental variables (IV) analysis is another method for coping with confounding and measurement error in predictor variables. It has proven difficult to apply IV methods to mediation analysis so far, mainly due to the absence of strong instruments, leading to imprecise mediator – outcome estimates [4, 14]. In our experience, the use of these repeated measures measurement error models as an alternative to IV has led to much smaller losses in precision.
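
The "dampening" point is easy to see in a toy simulation. The sketch below is purely illustrative, with made-up numbers; it is not the authors' model, and it uses simple averaging of repeated measures rather than a latent-variable simplex model. It shows how noise in a mediator pulls a naive regression slope towards zero, and how using repeated measurements recovers much of the effect:

```python
# Toy illustration of attenuation from measurement error in a mediator.
# All numbers are made up; this is not the PACE analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_b = 0.5          # true mediator -> outcome effect

m = rng.normal(size=n)                 # latent "true" mediator score
y = true_b * m + rng.normal(size=n)    # outcome depends on the true score

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    c = np.cov(x, y)
    return c[0, 1] / c[0, 0]

# One noisy questionnaire measurement of the mediator (reliability ~0.5)
m_obs = m + rng.normal(scale=1.0, size=n)

# Average of four repeated noisy measurements (reliability ~0.8)
m_rep = m[:, None] + rng.normal(scale=1.0, size=(n, 4))
m_avg = m_rep.mean(axis=1)

print("true b:                  ", true_b)
print("naive, one noisy measure:", round(slope(m_obs, y), 3))  # ~0.25, attenuated
print("average of 4 measures:   ", round(slope(m_avg, y), 3))  # ~0.40, less attenuated
```

The latent-variable models in the paper do something more principled than averaging, but the direction of the bias they are correcting is the same.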

Lagged mediator – outcome relationships, which would be more consistent with the temporal ordering of a causal process such as mediation, were not supported in the PACE data. This could have been due to the apparent almost simultaneous change in mediator and outcome in these data [31]. However, it could also be because the first measurement of the mediator was taken too late to capture mediator change prior to change in the outcome. The mid-treatment measurements were taken after participants had received approximately seven sessions of therapy, which may be quite late in the process of change. For example, evidence of gains in the first three sessions of brief psychosocial therapy interventions has been demonstrated for depression [49]. Studies looking more carefully at the trajectories of mediator and outcome change by taking earlier and more frequent measures of the variables, perhaps even at every session of therapy, could clarify optimal timing and number of measurements.
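
In model terms, the lagged versus contemporaneous distinction is just a question of where the b path sits in time (again a schematic, not the paper's exact specification):

```latex
% Schematic only -- not the paper's exact specification.
\begin{align*}
\text{lagged:} \qquad Y_t &= \cdots + b\,M_{t-1} + \cdots \\
\text{contemporaneous:} \qquad Y_t &= \cdots + b\,M_t + \cdots
\end{align*}
% The paper reports that the contemporaneous form fitted the PACE data
% better, which is harder to read as evidence of a temporally ordered
% causal mediation process.
```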

The potential for unmeasured confounding of the mediator – outcome relationship was allowed for in models through covariances between mediator and outcome errors. The best fitting models were those with contemporaneous error covariances, suggesting there were unmeasured confounders of the mediator and outcome variables at the same time point that needed to be taken into account. Lagged covariances would have been more consistent with unmeasured confounding in a typical ‘simple’ mediator model with one measure of the mediator taken earlier acting on a single later measure of the outcome. There was less evidence for this sort of unmeasured confounding in the PACE data, although this may not be the case in other situations. Allowing for unmeasured confounding is desirable given the attention this issue has been given in the literature, with the approach described here providing one option. More generally, in practice there is no single best approach and, for example, the approach here could be extended to incorporate existing sensitivity analysis methods [22–24] to quantify the level of confounding that would alter these longitudinal model conclusions.
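
The device being described is simply a freely estimated covariance between the residuals of the mediator and outcome equations at the same assessment (sketch under the same caveat as above):

```latex
% An unmeasured confounder U affecting both M_t and Y_t induces a
% correlation between their residuals; letting that covariance be freely
% estimated is how the models "allow for" confounding without measuring U.
\[
\operatorname{Cov}\!\left(\zeta^{M}_{t}, \zeta^{Y}_{t}\right) \neq 0
\]
```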

The assumption of equal mediator – outcome relationships over time led to greater precision in these estimates. The idea that no matter how or when the mediator is changed it will have the same effect on the outcome is a potentially strong and theoretically appealing assumption. This assumption aligns well with a description of mediation used in programme theory and intervention evaluation. These fields have described mediation analysis as evaluating both an ‘action theory’ – the a path in Figure 1, where an intervention seeks to change a mediating variable – and a ‘conceptual theory’ – the b path in Figure 1, which is the causal relationship between the mediator and outcome [50]. Describing the b path as the ‘conceptual theory’ fits with thinking of this as a stable relationship existing in nature that can be manipulated by the ‘action theory’ or intervention. This implies that the ‘conceptual theory’ relationship exists in the absence of the intervention and should exist at different points in time. The support of both action and conceptual theories provides evidence for mediation. From a statistical point of view, this assumption led to a large increase in mediator – outcome effect precision, which in turn led to more precise mediated effects. When plausible, making this assumption could be important given the often low power to detect mediated effects [51].
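
Formally this is just an equality constraint across assessments, which pools information and shrinks the standard error of the common estimate (schematic):

```latex
% Constrain the mediator -> outcome path to be the same at every time point:
\[
b_1 = b_2 = \cdots = b_T \equiv b
\]
% Estimating one common b instead of T separate b_t's uses all the repeated
% measures for a single parameter, hence the reported gain in precision --
% at the cost of assuming the relationship really is constant over time.
```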

The study’s strengths lay in the use of high quality data stemming from a rigorously conducted trial, as well as the availability of multiple measurements of mediators and outcomes allowing for more complex models. It was only possible to allow for unmeasured confounding in these models because of the availability of multiple measurements. The single measurement each of mediator and outcome generally used to evaluate mediation does not allow for fitting of a model with mediator – outcome covariance as such a model is not identified. It is also more difficult to account for measurement error in these single measure models, although it can be done if the reliability of the measure is known. At least four measurements are needed for identification of all parameters in the simplex models. Clinical trials often only take baseline, post-treatment and follow-up measurements, but the mid-treatment measures taken in PACE made it possible to allow for more paths and to explore assumptions in models. Also, given the apparent simultaneous early change in mediators and outcomes in PACE [32], it may be fruitful to collect more measurements earlier in the process to clarify mediator and outcome trajectories.

Having additional repeated measures of mediator and outcome and/or different measurements of the mediator and outcome at each time point could allow for the exploration of further model assumptions. Furthermore, the methods used here had some particular strengths. Much of the causal mediation literature has focused on the issue of unmeasured confounding; however, both the simulation results in this study and previous mediation model results [14] suggest that in mental health measurement error may be of even greater concern than unmeasured confounding. The approach taken here addresses both measurement error and the induced unmeasured confounding that correlated measurement error can give rise to simultaneously, while avoiding the need to estimate complex sensitivity parameters. As such, this approach provides for another type of sensitivity analysis. In addition, it is likely much easier to gain information about the reliability of measurement, such as we have done here using repeated measures, than it is in most situations to identify and measure all important confounders. This being said, we do not see the approach taken here and the sensitivity analyses described in the literature [15, 16, 22–24] as mutually exclusive. For example, the approaches using SEM to study sensitivity to unmeasured confounding [22–24] could be incorporated into the sorts of measurement models that we have fitted here.
 
The PACE researchers seem to have swallowed whole and undigested a pile of jargon I very much doubt Chalder and White understand.

What they ignore is the fundamental rule of all statistical data analysis:

'Rubbish in, rubbish out.'

In other words if you feed a sophisticated statistical model a set of rubbish data, the conclusions you draw will also be rubbish.

Or have I missed something????
 
The PACE researchers seem to have swallowed whole and undigested a pile of jargon I very much doubt Chalder and White understand.
My guess would be that they want to do something dodgy in the future, or justify doing something dodgy in the past, and want to have a published paper on the subject which they can cite as an "authority". Then they can pretend that the claims in that paper give them permission to do dodgy things.
 
Hi TiredSam - I've just deleted that bit about my antiquated qualifications - realised as soon as I wrote it, it sounded pompous. I was just feeling so pissed off with the PACE researchers milking their nonsense for yet another load of b***sh**.

Cheered up enormously by the Naviaux metabolomics study released today. Now that's what I call real science.
 

TiredSam

The wise nematode hibernates
Hi TiredSam - I've just deleted that bit about my antiquated qualifications - realised as soon as I wrote it, it sounded pompous. I was just feeling so pissed off with the PACE researchers milking their nonsense for yet another load of b***sh**.

Cheered up enormously by the Naviaux metabolomics study released today. Now that's what I call real science.
Didn't sound pompous at all, I'm always heartened by how well-educated some forum members are.
 

Snow Leopard

Hibernating
It is surprising that they make no mention of the major limitation – that they didn't use objective measures of functioning throughout the trial. If they had used such measures, then perhaps they would have been able to discover something useful. Instead we have, what, the third or fourth mediation analysis study, the latest of which is just coming up with excuses as to why the previous mediation studies weren't any good.

To the authors: We'd be more impressed if you designed and conducted the study well in the first place.
 

Bob

Senior Member
Oh yippee! What utter, unbridled joy! Another PACE trial mediation analysis! :meh:

They published a cross-sectional mediation analysis paper previously and had said that they planned to publish this longitudinal analysis, so we were expecting it.

I haven't read this paper yet, but I wonder if this line from the abstract sums up their findings: "Concurrent rather than lagged mediator – outcome effects were more consistent with the data, possibly due to the wide spacing of measurements."

It subtly suggests that they found no useful longitudinal mediation effects, but that all the changes (in outcomes and supposed mediators) were concurrent.
 

Bob

Senior Member
Goldsmith et al. said:
Lagged mediator – outcome relationships, which would be more consistent with the temporal ordering of a causal process such as mediation, were not supported in the PACE data. This could have been due to the apparent almost simultaneous change in mediator and outcome in these data [31].
So there was an "almost simultaneous change in mediator and outcome in these data", which means there was no evidence of a causal relationship between (supposed) mediators and the outcomes in this study. This is the same outcome as the previous mediation analysis.

So there's nothing to see here... But they have an awfully long-winded way of going about telling us that the study demonstrated no mediation effects! (Caveat: I still haven't read the full paper, so I may be missing something.)
 

Bob

Senior Member
Just for my records...

Goldsmith KA; Chalder TC; White PD; Sharpe M; Pickles A.
Measurement error, time lag, unmeasured confounding: considerations for longitudinal estimation of the effect of a mediator in randomised clinical trials.
Statistical Methods in Medical Research.
2016
 

user9876

Senior Member
Measurement error? Do they mean the whole study? ;)


I don't see how you can measure measurement error without knowing the ground truth. With questionnaires there will be different types of errors, such as bias, serial correlation in how people answer, as well as simple randomness (whose distribution you can only guess at). Flicking through the paper, they just seem to include a term for it in their model.

What is perhaps more interesting is the lack of independence in data where people answer similar questions.
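
To make the point above concrete, here is a minimal sketch (purely illustrative, with invented numbers, and nothing to do with the actual PACE data) of the kinds of error listed being added to a questionnaire score: a constant bias, serially correlated error, and plain random noise. A model that only "includes a term" for the random part would not capture the other two:

```python
# Illustrative only: invented numbers, not PACE data.
# Simulate one participant's questionnaire scores over repeated assessments
# with three kinds of error: constant bias, serially correlated error, and
# independent random noise.
import numpy as np

rng = np.random.default_rng(1)
n_assessments = 4

true_score = np.linspace(30, 50, n_assessments)    # hypothetical true trajectory

bias = 5.0                                          # e.g. consistent over-reporting
noise = rng.normal(scale=3.0, size=n_assessments)   # independent random error

# Serially correlated error: each assessment's error carries over from the last
serial = np.zeros(n_assessments)
for t in range(1, n_assessments):
    serial[t] = 0.7 * serial[t - 1] + rng.normal(scale=3.0)

observed = true_score + bias + serial + noise
print("true:    ", np.round(true_score, 1))
print("observed:", np.round(observed, 1))
```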