Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

Some papers on researcher bias/allegiance in psychiatry

Esther12

Senior Member
Messages
13,774
I have these open, but am not going to be able to read them today, so thought I'd post them up. Only the last one links to the full paper, and I've not yet searched to see if the others are available for free online somewhere.

They could all be rubbish - I was reading something which assumed that these findings were all quite uncontroversial... but lots of false claims are seen as uncontroversially true by some.

One on therapists' tendency to over-estimate their own efficacy:

An investigation of self-assessment bias in mental health providers.

Walfish S, McAlister B, O'Donnell P, Lambert MJ.
Source

Department of Psychology, Brigham Young University, 272 Taylor Building, Provo, UT 84602, USA.
Abstract

Previous research has consistently found self-assessment bias (an overly positive assessment of personal performance) to be present in a wide variety of work situations. The present investigation extended this area of research with a multi-disciplinary sample of mental health professionals. Respondents were asked to: (a) compare their own overall clinical skills and performance to others in their profession, and (b) indicate the percentage of their clients who improved, remained the same, or deteriorated as a result of treatment with them. Results indicated that 25% of mental health professionals viewed their skill to be at the 90th percentile when compared to their peers, and none viewed themselves as below average. Further, when compared to the published literature, clinicians tended to overestimate their rates of client improvement and underestimate their rates of client deterioration. The implications of this self-assessment bias for improvement of psychotherapy outcomes are discussed.

http://www.ncbi.nlm.nih.gov/pubmed/22662416

Some on the tendency of researchers to find that the therapy they like most is most effective in RCTs they run, even when others find different results:

Researcher allegiance in psychotherapy outcome research: An overview of reviews

  • a Institute of Social and Preventive Medicine, University of Bern, Switzerland
  • b Institute of Psychology, University of Kassel, Germany
  • c Institute of Psychology, University of Freiburg, Germany

Abstract

Researcher allegiance (RA) is widely discussed as a risk of bias in psychotherapy outcome research. The relevance attached to RA bias is related to meta-analyses demonstrating an association of RA with treatment effects. However, recent meta-analyses have yielded mixed results. To provide more clarity on the magnitude and robustness of the RA-outcome association this article reports on a meta-meta-analysis summarizing all available meta-analytic estimates of the RA-outcome association. Random-effects methods were used. Primary study overlap was controlled. Thirty meta-analyses were included. The mean RA-outcome association was r = .262 (p = .002, I2 = 28.98%), corresponding to a moderate effect size. The RA-outcome association was robust across several moderating variables including characteristics of treatment, population, and the type of RA assessment. Allegiance towards the RA bias hypothesis moderated the RA-outcome association. The findings of this meta-meta-analysis suggest that the RA-outcome association is substantial and robust. Implications for psychotherapy outcome research are discussed.

http://www.sciencedirect.com/science/article/pii/S0272735813000275#gr1

Is the allegiance effect an epiphenomenon of true efficacy differences between treatments? A meta-analysis.

Munder T, Flückiger C, Gerger H, Wampold BE, Barth J.
Source

Institute of Social and Preventive Medicine, University of Bern, Switzerland. tmunder@ispm.unibe.ch
Abstract

Many meta-analyses of comparative outcome studies found a substantial association of researcher allegiance (RA) and relative treatment effects. Therefore, RA is regarded as a biasing factor in comparative outcome research (RA bias hypothesis). However, the RA bias hypothesis has been criticized as causality might be reversed. That is, RA might be a reflection of true efficacy differences between treatments (true efficacy hypothesis). Consequently, the RA-outcome association would not be indicative of bias but an epiphenomenon of true efficacy differences. This meta-analysis tested the validity of the true efficacy hypothesis. This was done by controlling the RA-outcome association for true efficacy differences by restricting analysis to direct comparisons of treatments with equivalent efficacy. We included direct comparisons of different versions of trauma-focused therapy (TFT) in the treatment of posttraumatic stress disorder (PTSD). RA was measured from the research reports. Relative effect sizes for symptoms of PTSD were calculated. Random effects meta-regression was conducted. Twenty-nine comparisons of TFTs from 20 studies were identified. Initial heterogeneity among relative effect sizes was low. RA was a significant predictor of outcome and explained 12% of the variance in outcomes. The true efficacy hypothesis predicted the RA-outcome association to be zero; however, a substantial association was found. Thus, this study does not support the true efficacy hypothesis. Given findings from psychotherapy research and other fields that support a biasing influence of researcher preferences, RA should be regarded as a causal factor and conceptualized as a threat to the validity of conclusions from comparative outcome studies.
(c) 2012 APA, all rights reserved.

http://www.ncbi.nlm.nih.gov/pubmed/22946981


Clinical Psychology: Science and Practice, V16 N1, March 2009
© 2009 American Psychological Association. Published by Blackwell Publishing on behalf of the American Psychological Association. All rights reserved. For permissions, please email: journalsrights@oxon.blackwellpublishing.com

Original Article

Allegiance in Psychotherapy Outcome Research: Separating
Association From Bias

Yan Leykin, University of California, San Francisco
Robert J. DeRubeis, University of Pennsylvania

Concern about the contamination of psychotherapy outcome studies by “allegiance bias”—distortion of findings because of investigators’ preferences—has led to the proposal that findings to date should not be used to make inferences about the relative efficacies of psychotherapies. It has also been proposed that results from all such studies should be adjusted to cancel the presumed distorting effects of allegiances. We argue that although much effort has been devoted towards establishing the existence of statistical associations between allegiances and outcomes, the causal implication—that investigators’ allegiances influence results—has gone virtually untested. We present a new vocabulary with the aim of sharpening the allegiance discourse, and we propose that research strategies markedly different from the ones used to date are needed to address some of the more serious limitations of allegiance bias research.

https://psychology.sas.upenn.edu/system/files/Leykin CPSP 2009 Allegiance.pdf
 

Dolphin

Senior Member
Messages
17,567
This paper is now free at: http://www.researchgate.net/publica...erview_of_reviews/file/3deec516856fa17ff9.pdf

Clin Psychol Rev. 2013 Jun;33(4):501-11. doi: 10.1016/j.cpr.2013.02.002. Epub 2013 Feb 21.

Researcher allegiance in psychotherapy outcome research: an overview of reviews.

Munder T, Brütsch O, Leonhart R, Gerger H, Barth J.

Source

Institute of Social and Preventive Medicine, University of Bern, Switzerland. tmunder@uni-kassel.de

Abstract

Researcher allegiance (RA) is widely discussed as a risk of bias in psychotherapy outcome research. The relevance attached to RA bias is related to meta-analyses demonstrating an association of RA with treatment effects. However, recent meta-analyses have yielded mixed results. To provide more clarity on the magnitude and robustness of the RA-outcome association this article reports on a meta-meta-analysis summarizing all available meta-analytic estimates of the RA-outcome association. Random-effects methods were used. Primary study overlap was controlled. Thirty meta-analyses were included. The mean RA-outcome association was r=.262 (p=.002, I(2)=28.98%), corresponding to a moderate effect size. The RA-outcome association was robust across several moderating variables including characteristics of treatment, population, and the type of RA assessment. Allegiance towards the RA bias hypothesis moderated the RA-outcome association. The findings of this meta-meta-analysis suggest that the RA-outcome association is substantial and robust. Implications for psychotherapy outcome research are discussed.

Copyright © 2013 Elsevier Ltd. All rights reserved.

PMID: 23500154 [PubMed - indexed for MEDLINE]
 

Dolphin

Senior Member
Messages
17,567
(from Munder et al., 2013)

RA=Researcher allegiance

This, from earlier in the paper, explains AAB:

Allegiance to the RA bias hypothesis (AAB): To capture second-order researcher allegiance we a priori devised a scale measuring whether meta-analysts were likely to have a positive AAB or not. Not having a positive AAB might include neutral or even negative attitudes towards the RA bias hypothesis (e.g., RA is regarded as an artifact of true efficacy differences between treatments). Positive AAB was regarded as being present if one of the authors had previously published on RA (i.e., a self-citation was used in the description of RA) or if RA was portrayed in the discussion of the article as a definitive biasing factor.

It's not too important to understand this for most of the stuff. But basically, this is a review of reviews, so one thing they looked at was whether the reviewers themselves might have a bias.


4. Discussion

RA has been controversially discussed as a potential biasing factor in psychotherapy outcome research. Employing meta-meta-analysis, this overview of reviews investigated the magnitude and robustness of the RA-outcome association in psychotherapy outcome studies. We found that across n=30 meta-analyses the RA-outcome association was r=.262, corresponding to a moderate effect size. The heterogeneity between meta-analyses was low and nonsignificant, even though the meta-analyses included investigated a wide range of different treatments for a wide range of different clinical problems in adults, children, or both. The diversity of the meta-analyses included points to the generality of the RA-outcome association in different fields of psychotherapy outcome research. Translating the RA-outcome association into a d-effect size resulted in an expected relative effect of Δd=0.54 for a study comparing a treatment preferred by investigators to a non-preferred treatment. The relevance of this figure is clear when put in the context of the relative effects typically found in meta-analyses comparing different bona fide psychotherapies. In a meta-analysis of k=295 relative effects from direct comparisons of different bona fide psychotherapies for various clinical problems, Wampold et al. (1997) reported a small effect (Δd≤0.19). Recently, Tolin (2010) reported a relative effect of Δd=0.22 across k=32 direct comparisons of CBT versus other bona fide psychotherapies for various mental disorders. Thus, relative treatment effects tend to be smaller than the expected difference between treatments with and without RA (Δd=0.54). This clearly indicates that differences in RA can threaten the validity of findings from comparative outcome studies.
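For anyone wanting to check the arithmetic: the translation of r=.262 into Δd=0.54 in the quoted passage is consistent with the standard textbook conversion between a correlation effect size and Cohen's d. A minimal sketch (the function name is mine, not the paper's):

```python
import math

def r_to_d(r: float) -> float:
    """Convert a correlation effect size r to Cohen's d.

    Standard conversion: d = 2r / sqrt(1 - r^2).
    """
    return 2 * r / math.sqrt(1 - r ** 2)

# The paper's mean RA-outcome association:
print(round(r_to_d(0.262), 2))  # 0.54, matching the reported Δd
```

This is just a consistency check on the quoted figures, not an implementation of anything from the paper itself.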

The RA-outcome association was found to be robust across several moderating variables. The RA-outcome association was similar in meta-analyses restricted to individual therapy and meta-analyses with mixed or other treatment settings, in meta-analyses with defined populations compared to meta-analyses with mixed populations, and in meta-analyses with adults compared to children. No significant moderator effects emerged for features of primary studies (inclusion of nonrandomized studies, study design, type of outcome measure and type of effect size), or for meta-analyses with or without weighting for primary study sample size. Interestingly, the RA-outcome association did not depend on the type of RA measure employed by meta-analyses (source of RA assessment, RA indicators used). One previous meta-analysis (Gaffan et al., 1995) found a substantial RA-outcome association for older studies on cognitive therapy for depression but not in more recent ones, suggesting that the RA-outcome association might be a historical phenomenon and thus decrease over time. We found no association between the publication date of meta-analyses and the magnitude of the RA-outcome association, suggesting that the RA-outcome association remained stable over the decades.

The positive allegiance of meta-analysts to the RA bias hypothesis was found to be a moderator of the RA-outcome association in meta-analyses, a finding supported in simultaneous moderator analysis taking into account potential confounds. Meta-analyses coded as having a positive AAB reported stronger RA-outcome associations. However, the moderator effect does not indicate that the substantial RA-outcome association found in the present study is an artifact of RA itself: First, the RA-outcome association found in meta-analyses with no positive AAB was significant and substantial. Transferring the respective correlation into the expected result of a trial comparing a treatment with RA to a treatment without RA yields a small to moderate relative effect of Δd=0.36. This rather conservative estimate of the RA-outcome association is still larger than common estimates of relative treatment effects (see above). Second, while meta-analyses with positive AAB might provide a liberal estimate of the RA-outcome association, meta-analyses with no positive AAB might provide a conservative estimate as a consequence of some meta-analysts having a negative allegiance towards the RA bias hypothesis.

Two plausible explanations for the moderating effect of AAB were suggested by correlations with other moderator variables: First, meta-analyses with positive AAB tended to focus on comparative studies instead of controlled studies. As comparative studies include direct comparisons of different psychotherapies they are particularly threatened by RA bias and thus meta-analyses of comparative studies might yield larger estimates of the RA-outcome association. However, in the present meta-meta-analysis the RA-outcome association was not significantly larger in comparative studies. Second, meta-analyses with positive AAB were more likely to use a blinded RA rating. This might indicate that meta-analyses with positive AAB used more elaborate methods to assess RA. Both explanations are tentative because they are not yet supported by data. Both explanations imply that estimates from meta-analyses with positive AAB are not biased. Of course, a biasing influence of researcher motivation on research operations is another important explanation.
 

Dolphin

Senior Member
Messages
17,567
Continues from Munder et al., 2013

4.1. From association to bias

Reports of a substantial RA-outcome association have led many scholars to conclude that RA is a risk of bias in comparative outcome research (Lambert & Ogles, 2004; Luborsky et al., 1999; Wampold, 2001; Westen et al., 2004). This conclusion has been criticized for making causal claims based on correlational findings (Klein, 1999; Lambert, 1999; Leykin & DeRubeis, 2009; Weisz, Weiss, Han, Granger, & Morton, 1995). Moreover, critics have pointed out that causality might be reversed; that is, RA might be a reflection of true efficacy differences between treatments, gained through intense clinical and research involvement. If that supposition were true, finding a substantial RA-outcome association would not suggest bias. Also, the proposed changes to study design and meta-analyses to prevent RA bias would be unnecessary or even result in bias themselves, for example, in the case of statistically correcting meta-analytic findings for differences in RA (cf. Weisz et al., 1995).

Although the criticism of the RA bias hypothesis is conceptually sound, the RA bias hypothesis is supported by empirical evidence: One of the included meta-analyses (Munder et al., 2011) has supported the assumption that RA bias is transmitted by within-study processes (Luborsky et al., 1999; Miller et al., 2008). Based on k=79 direct comparisons of different psychotherapies for posttraumatic stress disorder and depression, Munder and colleagues found the RA-outcome association to be significantly lower in treatment comparisons with high internal validity. Thus, differences in RA were more predictive of outcome in the presence of deficits in the experimental control, suggesting that RA bias is mediated by within-study processes favoring the preferred treatment.

Conversely, there is evidence that true efficacy differences between treatments are not primarily responsible for the RA-outcome association. This has been tested in a meta-analysis (Munder, Flückiger, Gerger, Wampold, & Barth, 2012) restricted to k=29 direct comparisons of different versions of trauma-focused therapy for posttraumatic stress disorder, a set of treatments widely found to be equally effective. With true efficacy differences being eliminated, the true efficacy hypothesis (i.e., that RA is a reflection of true efficacy differences) expected the RA-outcome association to be zero. However, a significant RA-outcome association corresponding to r=.35 was found, suggesting that true efficacy differences are not sufficient to explain the RA outcome association. Further evidence against the true efficacy hypothesis is also available from one of the early meta-analyses assessing RA (Berman et al., 1985). Among k=25 direct comparisons of cognitive therapy and systematic desensitization for anxiety, RA towards cognitive therapy was found in k=14 comparisons and towards systematic desensitization in k=6 comparisons. Both treatments were superior to the comparator in studies conducted by investigators with allegiance towards the respective treatment. However, overall, no substantial efficacy differences were found. This result is inconsistent with the assumption that RA is a reflection of true efficacy differences.
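As a side note, the r=.35 found in the trauma-focused therapy meta-analysis lines up with the "12% of the variance" figure in the Munder et al. (2012) abstract quoted earlier in the thread, since the share of variance a correlation explains is r squared. A quick check of that arithmetic (my own calculation, not code from the paper):

```python
# Variance explained by researcher allegiance in Munder et al. (2012):
# a correlation of r accounts for r**2 of the outcome variance.
r = 0.35
variance_explained = r ** 2  # ≈ 0.12
print(f"{variance_explained:.0%}")  # prints "12%", matching the abstract
```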

A final argument for taking RA bias seriously follows the Campbellian methodological principle to rule out threats to validity in empirical studies (Shadish, Cook, & Campbell, 2002). Validity is threatened when a plausible alternative explanation for the results of a study can be provided. Plausible validity threats are “identified through a process that is partly conceptual and partly empirical” (Shadish et al., 2002, p. 39). Validity is defended when plausible validity threats have been ruled out, for example by implementing methodological safeguards in the design of the study. This logic has two important implications for the discussion of RA bias: First, as long as the RA-outcome association is not shown to be an artifact (i.e., RA does not causally influence outcome), it remains a plausible explanation for psychotherapy outcome studies. Second, the validity of psychotherapy outcome studies should be defended by implementing safeguards in the design, as has been done in studies carried out in a collaboration of researchers with different allegiances (e.g., Elkin et al., 1989; Leichsenring et al., 2009).

5. Conclusion

This meta-meta-analysis found a robust and substantial RA-outcome association across diverse settings. In concert with evidence showing that RA influences outcome, the findings of this meta-meta-analysis suggest that RA poses a rival explanation to the results of psychotherapy outcome studies and the meta-analyses synthesizing these studies. Therefore, researchers conducting psychotherapy outcome studies or meta-analyses should aim at preventing RA bias by implementing the remedies discussed in the literature. These include the recommendation that comparative studies should be conducted collaboratively by teams with mixed allegiances (Leykin & DeRubeis, 2009; Luborsky et al., 1999), and that study therapists in all treatment conditions should be motivated to learn and deliver their respective treatments (Luborsky et al., 1999; Miller et al., 2008; Munder et al., 2011). To defend the validity of their conclusions, meta-analyses on the efficacy of treatments should include a consideration of RA as a potential rival explanation of their findings.
 

Dolphin

Senior Member
Messages
17,567
Researcher allegiance is not something that has been discussed very much in the ME/CFS literature.

One exception is with the PACE Trial, where the authors wrote in a comment on the protocol:

Beliefs and expectations of treatment and who is running the trial

The trial has been designed and is being managed by many different healthcare and research professionals, including doctors, therapists, health economists, statisticians and a representative of a patient charity. The Trial Management Group includes five physicians and four psychiatrists. To measure any bias consequent upon individual expectations, all staff involved in the PACE trial recorded their expectations as to which intervention would be most efficacious before their participation, and we will publish these data after the end of the trial.

[1] White PD, Sharpe MC, Chalder T, DeCesare JC, Walwyn R, for the PACE trial management group: Response to comments on "Protocol for the PACE trial" http://www.biomedcentral.com/1471-2377/7/6/comments#306608

The statistical plan also mentioned the collection of such data:


Baseline staff expectations regarding the outcome of the trial were recorded.
[2]. Walwyn R, Potts L, McCrone P, Johnson AL, Decesare JC, Baber H, Goldsmith K, Sharpe M, Chalder T, White PD. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials. 2013 Nov 13;14:386. doi: 10.1186/1745-6215-14-386.
 

Dolphin

Senior Member
Messages
17,567
This is now available for free:

Put "An investigation of self-assessment bias in mental health providers" into Google Scholar http://scholar.google.com/ for the link (one only gets a temporary link, so there is no point in my posting it).

An investigation of self-assessment bias in mental health providers.

Walfish S, McAlister B, O'Donnell P, Lambert MJ.
Source

Department of Psychology, Brigham Young University, 272 Taylor Building, Provo, UT 84602, USA.
Abstract

Previous research has consistently found self-assessment bias (an overly positive assessment of personal performance) to be present in a wide variety of work situations. The present investigation extended this area of research with a multi-disciplinary sample of mental health professionals. Respondents were asked to: (a) compare their own overall clinical skills and performance to others in their profession, and (b) indicate the percentage of their clients who improved, remained the same, or deteriorated as a result of treatment with them. Results indicated that 25% of mental health professionals viewed their skill to be at the 90th percentile when compared to their peers, and none viewed themselves as below average. Further, when compared to the published literature, clinicians tended to overestimate their rates of client improvement and underestimate their rates of client deterioration. The implications of this self-assessment bias for improvement of psychotherapy outcomes are discussed.
 