The evolution of assessing bias in Cochrane systematic reviews of interventions: celebrating methodological contributions of the Cochrane Collaboration

Esther12

Senior Member
Messages
13,774
http://www.systematicreviewsjournal.com/content/pdf/2046-4053-2-79.pdf

I was just reading this paper, and some bits stood out to me as possibly interesting and relevant to CFS, so I thought that I would pull them out.

Some of the history was interesting, as I'd assumed people were sceptical of the processes of science far further back than this... maybe it took time for anyone involved in science to get around to gathering evidence of these problems?:


The 1980s also saw initial evidence of the presence of what is now referred to as selective outcome reporting [16] and research investigating the influence of source of funding on study results [10,11,17,18].


I know nothing about this stuff, so it's all quite interesting:


In 2001, the Cochrane Reporting Bias Methods Group, now known as the Cochrane Bias Methods Group, was established to investigate how reporting and other biases influence the results of primary studies. The most substantial development in bias assessment practice within the Collaboration was the introduction of the Cochrane Risk of Bias (RoB) Tool in 2008. The tool was developed based on the methodological contributions of meta-epidemiological studies [26,27] and has since been evaluated and updated [28], and integrated into Grading of Recommendations Assessment, Development and Evaluation (GRADE) [29].


Some quite nice summaries of important issues, and possibly worthwhile references:


Blinding of participants, personnel and outcome assessment
The concept of the placebo effect has been considered since the mid-1950s [47] and the importance of blinding trial interventions to participants has been well known, with the first empirical evidence published in the early 1980s [48]. The body of empirical evidence on the influence of blinding has grown since the mid-1990s, especially in the last decade, with some evidence highlighting that blinding is important for several reasons [49]. Currently, the Cochrane risk of bias tool suggests blinding of participants and personnel, and blinding of outcome assessment, be assessed separately. Moreover, consideration should be given to the type of outcome (i.e. objective or subjective outcome) when assessing bias, as evidence suggests that subjective outcomes are more prone to bias due to lack of blinding [42,44]. As yet there is no empirical evidence of bias due to lack of blinding of participants and study personnel. However, there is evidence for studies described as ‘blind’ or ‘double-blind’, which usually includes blinding of one or both of these groups of people. In empirical studies, lack of blinding in randomized trials has been shown to be associated with more exaggerated estimated intervention effects [42,46,50].
Different people can be blinded in a clinical trial [51,52]. Study reports often describe blinding in broad terms, such as ‘double blind’. This term makes it impossible to know who was blinded [53]. Such terms are also used very inconsistently [52,54,55] and the frequency of explicit reporting of the blinding status of study participants and personnel remains low even in trials published in top journals [56], despite explicit recommendations. Blinding of the outcome assessor is particularly important, both because the mechanism of bias is simple and foreseeable, and because evidence for bias is unusually clear [57]. A review of methods used for blinding highlights the variety of methods used in practice [58]. More research is ongoing within the Collaboration to consider the best way to consider the influence of lack of blinding within primary studies. Similar to selection bias, performance and detection bias are both mandatory components of risk of bias assessment in accordance with the MECIR standards.
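
Tangent from me: to get my head round why subjective outcomes are the vulnerable ones, I knocked together a toy simulation (entirely my own invention, nothing from the paper). The treatment does nothing, but unblinded assessors nudge the treatment group's subjective scores up a little, and a spurious 'effect' appears:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # participants per arm

# A treatment with NO real effect on a subjective 0-100 symptom score
control = rng.normal(50, 10, n)
treatment = rng.normal(50, 10, n)

# Blinded assessment: scores recorded as they are
blinded_diff = treatment.mean() - control.mean()

# Unblinded assessment: assessor expectations nudge treatment-group
# ratings up by a few points (a deliberately crude assumption)
unblinded_diff = (treatment + 3).mean() - control.mean()

print(f"blinded estimate:   {blinded_diff:+.1f}")    # close to zero
print(f"unblinded estimate: {unblinded_diff:+.1f}")  # about +3, pure bias
```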


Publication bias


The last two decades have seen a large body of evidence of the presence of publication bias [60-63] and why authors fail to publish [64,65]. Given that it has long been recognized that investigators frequently fail to report their research findings [66], many more recent papers have been geared towards methods of detecting and estimating the effect of publication bias.
An array of methods to test for publication bias and additional recommendations are now available [38,43,67-76], many of which have been evaluated [77-80]. Automatic generation of funnel plots has been incorporated into the Cochrane review-writing software (RevMan) and is encouraged for outcomes with more than ten studies [43]. A thorough overview of methods is included in Chapter 10 of the Cochrane Handbook for Systematic Reviews of Interventions [81].

Selective outcome reporting

While the concept of publication bias has been well established, studies reporting evidence of the existence of selective reporting of outcomes in trial reports have appeared more recently [39,41,82-87]. In addition, some studies have investigated why some outcomes are omitted from published reports [41,88-90], as well as the impact of omission of outcomes on the findings of meta-analyses [91]. More recently, methods for evaluating selective reporting, namely the ORBIT (Outcome Reporting Bias in Trials) classification system, have been developed. One attempt to mitigate selective reporting is to develop field-specific core outcome measures [92]; the work of the COMET (Core Outcome Measures in Effectiveness Trials) initiative [93] is supported by many members within the Cochrane Collaboration. More research is being conducted with regard to selective reporting of outcomes and selective reporting of trial analyses; within this area there is much overlap with the movement to improve primary study reports, protocol development and trial registration.
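
Also, since I'd never actually seen how a funnel plot gets built: here's a rough sketch in Python of the idea (numbers made up by me, and this is just matplotlib, not the RevMan feature the paper mentions). Each study's effect estimate is plotted against its standard error; if small unimpressive studies are going unpublished, the plot looks lopsided rather than funnel-shaped:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study effect estimates (log odds ratios) and standard
# errors - invented for illustration only
effects  = np.array([0.10, 0.25, 0.32, 0.05, 0.45, 0.60, 0.15, 0.38, 0.52, 0.28, 0.70])
std_errs = np.array([0.05, 0.10, 0.12, 0.08, 0.20, 0.30, 0.07, 0.15, 0.25, 0.11, 0.35])

# Fixed-effect (inverse-variance weighted) pooled estimate
weights = 1.0 / std_errs**2
pooled = np.sum(weights * effects) / np.sum(weights)

plt.scatter(effects, std_errs)
plt.axvline(pooled, linestyle="--", label=f"pooled = {pooled:.2f}")
plt.gca().invert_yaxis()  # convention: most precise studies at the top
plt.xlabel("Effect estimate (log odds ratio)")
plt.ylabel("Standard error")
plt.title("Funnel plot: roughly symmetric if no small-study bias")
plt.legend()
plt.show()
```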


Apparently some evidence that assessors of bias can be biased by reputation of institution, researchers, etc:


Evidence on how to conduct risk of bias assessments
Often overlooked are the processes behind how systematic evaluations or assessments are conducted. In addition to empirical evidence of specific sources of bias, other methodological studies have led to changes in the processes used to assess risk of bias. One influential study published in 1999 highlighted the hazards of scoring ‘quality’ of clinical trials when conducting meta-analysis and is one of the reasons why each bias is assessed separately as ‘high’, ‘low’ or ‘unclear’ risk rather than using a combined score [22,94]. Prior work investigated blinding of readers, data analysts and manuscript writers [51,95]. More recently, work has been completed to assess blinding of authorship and institutions in primary studies when conducting risk of bias assessments, suggesting that there is discordance in results between blinded and unblinded RoB assessments. However, uncertainty over best practice remains due to the time and resources needed to implement blinding [96].
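
The 'don't combine it into one score' point clicked for me when I wrote it out as data (my own illustration, not how RevMan actually stores things). A single score hides which domains are the problem; per-domain judgements keep that visible:

```python
from dataclasses import dataclass

# The six main domains of the (2008) Cochrane RoB tool, each judged
# "low" / "high" / "unclear" - hypothetical trial for illustration
@dataclass
class RiskOfBias:
    sequence_generation: str
    allocation_concealment: str
    blinding_participants_personnel: str
    blinding_outcome_assessment: str
    incomplete_outcome_data: str
    selective_reporting: str

trial = RiskOfBias(
    sequence_generation="low",
    allocation_concealment="unclear",
    blinding_participants_personnel="high",  # unblinded intervention
    blinding_outcome_assessment="high",      # subjective self-report outcome
    incomplete_outcome_data="low",
    selective_reporting="unclear",
)

# A combined "quality score" collapses everything into one number...
score = sum(v == "low" for v in vars(trial).values())  # 2 of 6 domains "low"
# ...whereas per-domain judgements show exactly where the worry is:
flagged = [k for k, v in vars(trial).items() if v == "high"]
print(score, flagged)
```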


They talk about CONSORT being an important move forward, which is something I really don't know enough about.

I don't know about EQUATOR either, but it sounds of interest:


Issues of poor reporting extend far beyond randomized trials, and many groups have developed guidance to aid reporting of other study types. The EQUATOR Network’s library for health research reporting includes more than 200 reporting guidelines [99]. Despite evidence that the quality of reporting has improved over time, systemic issues with the clarity and transparency of reporting remain [100,101]. Such inadequacies in primary study reporting result in systematic review authors’ inability to assess the presence and extent of bias in primary studies and the possible impact on review results; continued improvements in trial reporting are needed to lead to more informed risk of bias assessments in systematic reviews.



[ http://www.equator-network.org/ There is a copy of the CONSORT guidelines there, but it looked like the site might be difficult to browse unless you knew what you were looking for. (Having said that, I've now found a few interesting bits in their 'news' section).]

A bit on registration:


Trial registration
During the 1980s and 1990s there were several calls to mitigate publication bias and selective reporting via trial registration [102-104]. After some resistance, in 2004, the BMJ and The Lancet reported that they would only publish registered clinical trials [105], with the International Committee of Medical Journal Editors making a statement to the same effect [40]. Despite the substantial impact of trial registration [106], uptake is still not optimal and it is not mandatory for all trials. A recent report indicated that only 22% of trials mandated by the FDA were reporting trial results on clinicaltrials.gov [107]. One study suggested that despite trial registration being strongly encouraged, and even mandated in some jurisdictions, only 45.5% of a sample of 323 trials were adequately registered [108].



A bit OT, but this bit reminded me of Jonathan Edwards' reporting that he thought multi-centre RCTs were often less effective, as it was harder to keep high standards for patient selection, while here it is argued that this difference is likely to indicate a reduction in bias with multi-centre RCTs:


In addition, recent meta-epidemiological studies of binary and continuous outcomes showed that treatment effect estimates in single-centre RCTs were significantly larger than in multicenter RCTs even after controlling for sample size [123,124].
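
Out of curiosity I sketched how that sort of 'after controlling for sample size' check can work: a precision-weighted regression of effect size on a single-centre indicator plus log sample size (numbers invented by me; real meta-epidemiological analyses are more sophisticated than this):

```python
import numpy as np

# Hypothetical trial-level data, invented for illustration
log_or = np.array([0.8, 0.7, 0.6, 0.3, 0.25, 0.2])    # effect estimates
se     = np.array([0.3, 0.25, 0.2, 0.15, 0.12, 0.1])  # standard errors
single = np.array([1, 1, 1, 0, 0, 0])                 # 1 = single-centre
log_n  = np.log([60, 90, 150, 400, 600, 900])         # trial sample sizes

# Inverse-variance weighted least squares: effect ~ single + log(n)
X = np.column_stack([np.ones_like(log_or), single, log_n])
W = np.diag(1.0 / se**2)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_or)

# A coefficient above zero means single-centre trials report larger
# effects even after adjusting for sample size
print(f"single-centre coefficient: {beta[1]:+.2f}")
```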

Conclusion
To summarise, there has been much research conducted to develop understanding of bias in trials and how these biases could influence the results of systematic reviews. Much of this work has been conducted since the Cochrane Collaboration was established, either as a direct initiative of the Collaboration or thanks to the work of many affiliated individuals. There has been clear advancement in mandatory processes for assessing bias in Cochrane reviews. These processes, based on a growing body of empirical evidence, have aimed to improve the overall quality of the systematic review literature; however, many areas of bias remain unexplored and, as the evidence evolves, the processes used to assess and interpret biases and review results will also need to adapt.

Yay, the paper itself was only 9 pages long, as the other 10 pages are references - I feel so free!
 

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
A bit OT, but this bit reminded me of Jonathan Edwards' reporting that he thought multi-centre RCTs were often less effective, as it was harder to keep high standards for patient selection, while here it is argued that this difference is likely to indicate a reduction in bias with multi-centre RCTs:

In addition, recent meta-epidemiological studies of binary and continuous outcomes showed that treatment effect estimates in single-centre RCTs were significantly larger than in multicenter RCTs even after controlling for sample size [123,124].

Either explanation would produce the same effect.

If inappropriate patients are being recruited because some institutions are being used which don't really understand the nuances of the disease (e.g. they can't tell chronic fatigue from ME or from any other disease which they should have diagnosed instead) or of the treatment requirements (say it was RA but the treatment was expected to help only certain types of RA patients), the same treatment won't help. (Could this be called bias? And then it would be the same as what Cochrane is saying anyway?)

Or if a certain institution has a high "provider allegiance" and insufficient controls (e.g. unblinded assessment), it could be a design flaw introducing bias.
 

Esther12

Senior Member
Messages
13,774
Either explanation would produce the same effect.

If inappropriate patients are being recruited because some institutions are being used which don't really understand the nuances of the disease (e.g. they can't tell chronic fatigue from ME or from any other disease which they should have diagnosed instead) or of the treatment requirements (say it was RA but the treatment was expected to help only certain types of RA patients), the same treatment won't help. (could this be a form of bias?)

Or if a certain institution has a high "provider allegiance" and insufficient controls (e.g. unblinded assessment), it could be a design flaw introducing bias.

Yeah. I would have instinctively assumed that the Cochrane interpretation was right had I not earlier read Edwards' comment. Which is not to say that the Cochrane interpretation is wrong - it could be that the effect Edwards is frustrated by is a result of his reduced ability to (unintentionally) bias results in multi-centre RCTs.

It just shows how important it is to keep trying to distance oneself from stories, and remember how little the evidence really shows. There are often a number of different interpretations for particular findings.
 

WillowJ

คภภเє ɠรค๓թєl
Messages
4,940
Location
WA, USA
I meant to say that if we consider inappropriate patient selection a type of bias (something which skews the results), then it's possible Edwards and Cochrane are saying the same thing. (I edited my post to make this more clear)

In other words, it's the multi-center design where other centers don't understand either the patients or the treatment or both, which introduces the bias in Edwards' version.

When things differ, it's not obvious which version is biased. The better result could be biased, or the worse result could be the biased version, depending on what factor is introducing the bias.

I guess we are saying the same thing!
 

Esther12

Senior Member
Messages
13,774
I could have misremembered this, but I'm pretty sure that they were saying that multicentre is more reliable, and I only quoted a small section. To some extent, as a reflection of real world outcomes, this is almost certainly the case.