• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Goldsmith et al piece on PACE Trial inc. link to free txt (help sought to explain it)

Dolphin

Senior Member
Messages
17,567
I just happened to come across the following:


How do treatments for chronic fatigue syndrome work? Exploration of instrumental variable methods for mediation analysis in PACE a randomised controlled trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care

Kimberly Goldsmith1*, Trudie Chalder1, Paul White2, Michael Sharpe3 and Andrew Pickles1

* Corresponding author: Kimberly Goldsmith

Author Affiliations
1 Institute of Psychiatry, King's College London, DeCrespigny Park, London, SE5 8AF, UK
2 Wolfson Institute of Preventative Medicine, Queen Mary, University of London, EC1M 6BQ, UK
3 Department of Psychiatry, University of Oxford, OX3 7JX , UK

Trials 2011, 12(Suppl 1):A144 doi:10.1186/1745-6215-12-S1-A144

The electronic version of this article is the complete one and can be found online at: http://www.trialsjournal.com/content/12/S1/A144

Published: 13 December 2011

It's free at: http://www.trialsjournal.com/content/12/S1/A144

It's late here so on a first read I'm having difficulty understanding it. Perhaps together we can break it down.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
I don't really understand it either, since the actual oral presentation isn't provided.

The conclusion is interesting. Apparently there were no instrumental variables (IVs) that were strong mediators of improvement. A maximum correlation coefficient of R = 0.03 suggests these variables are unlikely to have any practical significance whatsoever.
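For a sense of scale (simple arithmetic, not taken from the abstract itself): a correlation of R = 0.03 corresponds to an R² of about 0.0009, i.e. under 0.1% of shared variance.

```python
# Share of variance implied by a correlation of r = 0.03 (illustrative arithmetic only).
r = 0.03
r_squared = r ** 2
print(f"r = {r}, r^2 = {r_squared:.4f}")  # under 0.1% of variance explained
```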
 

Sean

Senior Member
Messages
7,378
I am struggling to understand much of that one at all. Seriously beyond my math level.

Though whatever it is they are measuring/calculating, it doesn't look like the final numbers give the PACE crowd much comfort.

Frankly, if they are not able to get a clear, substantial result from basic statistical tests and analysis of the raw data, then it is unlikely they will get one using more advanced, subtler methods. We are not talking about a 10,000-variable matrix here, like some gene analyses, where a 1% difference between variables can be important. A good solid result from PACE-type trials should stand out without needing sophisticated stats work (or dubious manipulations of definitions and statistical thresholds).

How many ways can you fail to find a practically meaningful increase in distance walked or hours worked? If it ain't there, it just ain't there.
 

oceanblue

Guest
Messages
1,383
Location
UK
Good luck with this, I don't have the energy to look in detail but here are a few comments:

What makes mediation analysis special is that it aims to establish causal relationships. It seems to be related to Structural Equation Modelling and Path Analysis, techniques covered in the final, most difficult section of my Biostatistics textbook (not a good sign) and I never got that far.

The first few pages of "Mediation Analysis in Social Psychology: Current Practices and New Recommendations" might be helpful, and possibly this critique of mediation analysis too.

It might not be worth the effort, though, given the limited findings:
There was modest mediation of CBT and GET effects (approximately 20% of the total effect).
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
I don't honestly expect my previous post accomplished anything useful.

I've had a quick look around and it seems to me that the authors don't doubt that CBT and GET are effective but are trying to understand perhaps what the common intervening (mediating) variables might be.

The following abstract is from a paper on using instrumental variable analysis to explore why certain social policy instruments might achieve the hoped-for social behaviours. By randomly assigning groups to with/without-policy conditions you can tell that a policy works in changing the desired behaviour, but you don't know why it works. Knowing why might help find more effective ways of intervening (that is, the intervention's effectiveness might be improved).

In the case of PACE they believe that changing thought processes via CBT and GET (both emphasise that ME/CFS is benign and reversible) results in an improvement in the measured outcomes of fatigue and physical function. It may be, however, that much of the improvement (sic) is due to a mediating behaviour (e.g. getting out of the house more), so perhaps encouraging people to 'get out more' might be a more direct and cost-effective treatment than costly CBT or supervised GET.

So they run a range of models in which they insert a mediating variable that they can measure and correlate with the input and output variables (roughly speaking); any strong covariance might suggest a likely mediating variable.
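Roughly, that procedure can be sketched as a product-of-coefficients mediation analysis. This is a minimal simulated illustration, not the authors' actual IV models; the variable names and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated trial: treatment -> mediator -> outcome (all names hypothetical).
treatment = rng.integers(0, 2, n).astype(float)      # randomised 0/1 arm
mediator = 0.5 * treatment + rng.normal(size=n)      # e.g. 'activity level'
outcome = 0.8 * mediator + 0.2 * treatment + rng.normal(size=n)

def ols_slopes(y, X):
    """Least-squares slope coefficients for y ~ X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

total = ols_slopes(outcome, treatment)[0]            # total effect c
a = ols_slopes(mediator, treatment)[0]               # treatment -> mediator path
b, direct = ols_slopes(outcome, np.column_stack([mediator, treatment]))
indirect = a * b                                     # mediated part of the effect
print(f"total={total:.2f} direct={direct:.2f} indirect={indirect:.2f}")
print(f"proportion mediated ~ {indirect / total:.0%}")
```

The "proportion mediated" (indirect over total) is the analogous quantity to the roughly 20% mediation reported in the abstract.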

They apparently didn't find any strong candidates which restricts their ability to make their CBT/GET routine more cost effective.

That's my take anyway.

Abstract

One strategy for discovering the connections between social policy interventions and behavioral outcomes is to conduct social experiments that use random assignment research designs. Although random assignment experiments provide reliable estimates of the effects of a particular policy, they do not reveal how a policy brings about its effects. If policymakers had answers to the "how" questions, they could design more effective interventions and make more informed policy trade-offs. This paper reviews one promising approach to specifying the causal paths by which impacts are expected to occur: instrumental variables analysis, a method of estimating the effects of intervening variables (also called mediating variables, or mediators) that link interventions and outcomes. It explores the feasibility of applying this approach to data from random assignment designs, reviews the policy questions that can be answered using the approach, and outlines the conditions that have to be met for the effects of mediating variables to be estimated.
 

Dolphin

Senior Member
Messages
17,567
Thanks for input. Will read now.

Just to say that I've spotted what I think is an error ...
However, I don't reckon they'll be giving me a Nobel prize for thinking that one of the authors was called Peter White, not Paul White.
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
Just to add that one behavioural variable I'm sure they didn't consider in their models is any social desirability bias in responses to subjective self-rating questionnaires of fatigue and physical functioning, following therapies that aim to encourage participants to think of their symptoms as trivial and transient and that they will recover.

As I recall, participants' ratings of therapists were generally high. I would expect this mediator to account for much more than 20% of the variance.
 

Dolphin

Senior Member
Messages
17,567
Just to add that one behavioural variable I'm sure they didn't consider in their models is any social desirability bias when responding to subjective self rating questionnaires of fatigue and physical functioning following therapies that aim to encourage participants to think of their symptoms as trivial and transient and that they will recover.

As I recall participants' ratings of therapists was generally high. I would expect this mediator to account for much more than 20% of the variance.
One statistic which makes me think CBT might have influenced what the participants reported is:
Participants who received CBT reported slightly fewer such events [Non-serious adverse events] than did those in the APT (p=0.0081) and SMC (p=0.0016) groups.
 
Messages
13,774
Just to add that one behavioural variable I'm sure they didn't consider in their models is any social desirability bias when responding to subjective self rating questionnaires of fatigue and physical functioning following therapies that aim to encourage participants to think of their symptoms as trivial and transient and that they will recover.

As I recall participants' ratings of therapists was generally high. I would expect this mediator to account for much more than 20% of the variance.

I'm sure that they would have addressed such concerns... otherwise that could be just promoting a placebo dressed up in quackery!
 

Dolphin

Senior Member
Messages
17,567
This is how one person has interpreted the findings. They said it could be re-posted:

Looks like being in receipt of ill-health benefits or pension, or in dispute/negotiation over benefits or pension, or having membership of a self-help group, or having "bad" thoughts or movement (behaviour), or using the 'wrong' brand of washing-up liquid, might only be a very weak CAUSE of "CFS".

http://www.trialsjournal.com/content/12/S1/A144

A failed attempt to dress up intuition and misanthropy as science.

-----

(A reminder of the nature of the PACE investigators' advice to the insurance industry and Government)

Some persons appear to exaggerate symptoms but this is often hard to prove.

Unhelpful information is found in "self-help" (!) books and increasingly on the Internet (see for example www.meassociation.org.uk).

Unfortunately, doctors and especially "specialist private doctors" and complementary therapists may be as bad.

Other social factors that perpetuate functional illness are anger with the person or organisation the illness is attributed to, or toward the insurer for not believing them.

It has been pointed out that: "if you have to prove you are ill you can't get well".

Both State and private insurers pay people to remain ill.

In practice, even if treatment is available, there may be obstacles to recovery.

Over time, the patient's beliefs may become entrenched and be driven by anger and the need to explain continuing disability.

The current system of state benefits, insurance payments and litigation remains a potentially major obstacle to effective rehabilitation.

It is often unrealistic to expect medical treatment alone to overcome these.

Furthermore patient groups who champion the interest of individuals with functional complaints (particularly for chronic fatigue and fibromyalgia) are increasingly influential; they are extremely effective in lobbying politicians and have even been threatening towards individuals and organisations who question the validity and permanence of the illness they champion.

Again the ME lobby is the best example.
 

Sean

Senior Member
Messages
7,378
Furthermore patient groups who champion the interest of individuals with functional complaints (particularly for chronic fatigue and fibromyalgia) are increasingly influential; they are extremely effective in lobbying politicians and have even been threatening towards individuals and organisations who question the validity and permanence of the illness they champion.
We have been deliberately, persistently, systematically and very effectively disempowered, marginalised and stigmatised, for decades, almost certainly by the very people who are giving that advice, and who hold virtually all of the important advisory positions in this field. Yet they try to make out we are the ones in the driving seat, with massive amounts of undue influence?

Who do the insurance companies, governments, media, etc, go to for advice on ME/CFS? Not us patients.

A genuinely Orwellian use of language.
 

Dolphin

Senior Member
Messages
17,567
Note: I wasn't concentrating and didn't post the first part of the last post, which was the part I wanted to highlight. Also the reason for the rest of it might not have been clear without it. Anyway, I have edited it in now.
 

CBS

Senior Member
Messages
1,522
"IV methods were applied by compiling a list of baseline variables that could act as IVs in interaction terms with treatment arm and then assessing these using OLS with the mid-treatment measurement of the putative mediator as the outcome. "

I don't see how any meaningful interpretation of this "article" can be made without a list of which baseline variables were used as instrumental variables.

Secondly, while I have not done instrumental variable analysis myself, a quick read of the Wiki entry on IV analysis (http://en.wikipedia.org/wiki/Instrumental_variable) would suggest that IVs present at least the possibility of exaggerating one's impression of the treatment effect by over-emphasizing the "response" of those who may have had some positive change, as opposed to those who showed no benefit from treatment.

The standard IV estimator can recover local average treatment effects (LATE) rather than average treatment effects (ATE).[9] Imbens and Angrist (1994) demonstrate that the linear IV estimate can be interpreted under weak conditions as a weighted average of local average treatment effects, where the weights depend on the elasticity of the endogenous regressor to changes in the instrumental variables. Roughly, that means that the effect of a variable is only revealed for the subpopulations affected by the observed changes in the instruments, and that subpopulations which respond most to changes in the instruments will have the largest effects on the magnitude of the IV estimate.

According to Imbens and Angrist (1994):
LATE is the average treatment effect for individuals whose treatment status is influenced by changing an exogenous regressor that satisfies an exclusion restriction.

In layman's terms, LATE is typically used in social science research where randomization of subjects to treatment groups is impractical or unethical "and where controlled experiments are not available." It is designed to identify subgroups of patients where the treatment effect may have had an impact that otherwise would have gone undetected. This should raise questions about whether or not IV analysis was appropriate for the PACE data and whether or not the use of IV analysis met the necessary assumption "that the causal effect of interest does not vary across observations."

It would take a bit more digging but I'm left wondering what the authors could have possibly used as "instrumental variables" and if IV analysis was even remotely appropriate. That said, if the R-squared really was 0.03, it would seem that this would raise even larger questions about the efficacy of CBT and GET to increase activity or reduce fatigue given the potential that the PACE trial violates the underlying assumptions of IV analysis and that IV analysis exaggerates the influence of only those subjects whose behavior actually changed.
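The LATE point above can be illustrated with a toy simulation (hypothetical numbers, nothing to do with the actual PACE data): with a binary instrument, the Wald/IV estimate recovers the effect among "compliers" whose treatment status the instrument shifts, not the population-average effect, so the two can differ substantially when effects are heterogeneous.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical setup: a binary instrument z (e.g. random assignment),
# half the sample are 'compliers' who take treatment only when assigned,
# and the treatment effect differs between compliers and everyone else.
z = rng.integers(0, 2, n)                 # instrument
complier = rng.random(n) < 0.5            # whose status z actually shifts
d = np.where(complier, z, 0)              # treatment actually taken
effect = np.where(complier, 2.0, 0.5)     # heterogeneous treatment effect
y = effect * d + rng.normal(size=n)       # observed outcome

# Wald / IV estimate: cov(y, z) / cov(d, z) -> effect among compliers (LATE)
late = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]
# Average effect if everyone were treated (ATE), for comparison
ate = effect.mean()
print(f"IV (Wald) estimate: {late:.2f}   complier effect set to 2.0")
print(f"population-average effect: {ate:.2f}")
```

The IV estimate lands near the complier effect (2.0), well away from the population average (1.25), which is exactly the over-weighting of responders CBS describes.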
 

oceanblue

Guest
Messages
1,383
Location
UK
Just to add that one behavioural variable I'm sure they didn't consider in their models is any social desirability bias when responding to subjective self rating questionnaires of fatigue and physical functioning following therapies that aim to encourage participants to think of their symptoms as trivial and transient and that they will recover.

As I recall participants' ratings of therapists was generally high. I would expect this mediator to account for much more than 20% of the variance.
An interesting point, though of course as they made no measure of social desirability bias they couldn't include it in their model - so it wouldn't have an influence on the model findings (unless social desirability bias was correlated with one of the baseline variables they did include).

I'm sure there's some (non-CFS) research out there where researchers did include independent measures of social desirability bias, to help them interpret the results of questionnaires etc. I think they even recommended this should be standard practice, but of course I can't remember where I saw the research. Such an independent measure of social desirability bias would have been really helpful in making sense of the PACE results.
 

Dolphin

Senior Member
Messages
17,567
An interesting point, though of course as they made no measure of social desirability bias they couldn't include it in their model - so it wouldn't have an influence on the model findings (unless social desirability bias was correlated with one of the baseline variables they did include).

I'm sure there's some (non-CFS) research out there where researchers did include independent measures of social desirability bias, to help them interpret the results of questionnaires etc. I think they even recommended this should be standard practice, but of course I can't remember where I saw the research. Such an independent measure of social desirability bias would have been really helpful in making sense of the PACE results.
Yes, some (non-CFS) research does use such measures.

I started a thread on a paper on this here http://forums.phoenixrising.me/show...ability-response-bias-in-self-report-research .
Here's the first post:
Faking it: Social desirability response bias in self-report research. Aust J Adv Nursing. 2008;25:408.
Free full text at: http://www.ajan.com.au/Vol25/Vol_25-4_vandeMortel.pdf

I thought it was interesting as it:
(i) gives some examples of the bias
(ii) shows that some studies actually try to control for it.

Given the amount of questionnaires that are used in ME/CFS research, this may have some relevance.

ABSTRACT

Objective

The tendency for people to present a favourable image of themselves on questionnaires is called socially desirable responding (SDR). SDR confounds research results by creating false relationships or obscuring relationships between variables. Social desirability (SD) scales can be used to detect, minimise, and correct for SDR in order to improve the validity of questionnaire-based research. The aim of this review was to determine the proportion of health-related studies that used questionnaires and used SD scales, and estimate the proportion that were potentially affected by SDR.

Methods

Questionnaire-based research studies listed on CINAHL in 2004-2005 were reviewed. The proportion of studies that used an SD scale was calculated. The influence of SDR on study outcomes and the proportion of studies that used statistical methods to control for social desirability response bias are reported.

Results

Fourteen thousand two hundred and seventy-five eligible studies were identified. Only 0.2% (31) used an SD scale. Of these, 43% found SDR influenced their results. A further 10% controlled for SDR bias when analysing the data. The outcomes in 45% of studies that used an SD scale were not influenced by SDR.

Conclusions

While few studies used an SD scale to detect or control for SD bias, almost half of those that used an SD scale found SDR influenced their results.

Recommendations

Researchers using questionnaires containing socially sensitive items should consider the impact of SDR on the validity of their research and use an SD scale to detect and control for SD bias.
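As a sketch of what "detect and control for SD bias" can mean in practice (simulated data; the effect sizes and the bias mechanism are purely hypothetical): regress the self-report outcome on treatment with and without an SD-score term, and compare the two treatment estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical mechanism: treated participants inflate their self-report
# in proportion to their social-desirability (SD) scale score.
sd_score = rng.random(n)                         # SD scale score in [0, 1]
treatment = rng.integers(0, 2, n).astype(float)  # randomised 0/1 arm
true_effect = 0.3
self_report = (true_effect * treatment
               + 0.6 * sd_score * treatment      # SD-driven inflation
               + rng.normal(scale=0.5, size=n))

def first_slope(X, y):
    """First non-intercept OLS coefficient of y ~ X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1]

naive = first_slope(treatment, self_report)
adjusted = first_slope(np.column_stack([treatment, sd_score * treatment]),
                       self_report)
print(f"naive treatment estimate: {naive:.2f}  (inflated by SD bias)")
print(f"SD-adjusted estimate:     {adjusted:.2f}  (true effect set to {true_effect})")
```

Without the SD scale, the inflation is invisible; with it, the adjusted model pulls the estimate back toward the true effect, which is the point of the paper's recommendation.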
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
When I was a psych undergrad we covered quite a bit on questionnaire development, validation, reliability, etc., and one of the problems in designing them is trying to avoid issues such as social desirability bias, acquiescence bias (where people always agree with a statement regardless of its content) and even lying (perhaps again relating to social desirability).

One way around it is to include items in the questionnaire that set out to detect these biases rather than pertaining directly to the subject under study (e.g. "I have never lied in my life" is clearly a false statement, and anyone agreeing with it is likely to also give false answers to other questions).
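A minimal sketch of how such 'lie scale' items might be scored (the item wordings beyond Marco's example, the threshold, and the function are all made up for illustration):

```python
# Hypothetical 'lie scale': statements nobody can truthfully endorse.
# Respondents agreeing with enough of them are flagged as likely biased.
LIE_ITEMS = [
    "I have never lied in my life.",
    "I have never been annoyed with anyone.",
    "I always keep every promise I make.",
]

def flag_respondent(answers, threshold=2):
    """answers maps each lie-scale item to True (agree) / False (disagree);
    flag the respondent when the number of endorsed items reaches threshold."""
    endorsed = sum(answers.get(item, False) for item in LIE_ITEMS)
    return endorsed >= threshold

honest = {item: False for item in LIE_ITEMS}
suspect = {item: True for item in LIE_ITEMS}
print(flag_respondent(honest), flag_respondent(suspect))
```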

Getting back to PACE, it's a slightly different matter, as participants are not being asked to respond to statements but to self-rate their levels of fatigue and physical function. There is no easy way here to include 'lie scale' or other items to measure any social desirability bias, so if such bias were to be discounted from the results, some other means would have to be found.

Perhaps if each individual's rating of their therapist and their trial therapy arm had been recorded, these could be modelled as mediating variables, but I doubt they were.

Which again leads us back to the fact that the only way to discount these potential biases, when dealing with cognitive therapies and subjective outcome measures, is to use objective activity measures.
 

Dolphin

Senior Member
Messages
17,567
I notice Kimberly Goldsmith is the second name on the Lancet PACE Trial paper suggesting a major role. I wonder how much she was involved with and/or agreed with the changes made, definition of normal functioning, etc.