• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


2012: Bias in peer review: Carole J. Lee, Cassidy R. Sugimoto, Guo Zhang, Blaise Cronin

Esther12

Senior Member
Messages
13,774
Open Access: http://onlinelibrary.wiley.com/doi/10.1002/asi.22784/full

Abstract

Research on bias in peer review examines scholarly communication and funding processes to assess the epistemic and social legitimacy of the mechanisms by which knowledge communities vet and self-regulate their work. Despite vocal concerns, a closer look at the empirical and methodological limitations of research on bias raises questions about the existence and extent of many hypothesized forms of bias. In addition, the notion of bias is predicated on an implicit ideal that, once articulated, raises questions about the normative implications of research on bias in peer review. This review provides a brief description of the function, history, and scope of peer review; articulates and critiques the conception of bias unifying research on bias in peer review; characterizes and examines the empirical, methodological, and normative claims of bias in peer review research; and assesses possible alternatives to the status quo. We close by identifying ways to expand conceptions and studies of bias to contend with the complexity of social interactions among actors involved directly and indirectly in peer review.

Actually - just realised how long this paper is. I'm going to have to come back to this when I'm feeling fresher.
 

Esther12

Senior Member
Messages
13,774
I pulled out the bits of most interest. Personally, I didn't think it was worth reading, but for those interested in this area, having a skim through the highlights might be fun.

They sum up the situation & possible problems here (probably not news to people already interested in this area):

In many ideal depictions, peer review processes are understood as providing “a system of institutionalized vigilance” (Merton, 1973, p. 339) in the self-regulation of knowledge communities. Peer expertise is coordinated to vet the quality and feasibility of submitted work. Authors, in the anticipation of the peer evaluation of their work, aim to conform to shared standards of excellence out of expediency and in accordance with an internalized ethos (Merton, 1973). The norms and values to which peers hold each other are conceived as being universally and consistently applied to all members, where these norms and values pertain to the content of authors' evidence and arguments independently of their social caste or positional authority (Merton, 1973). When these norms and values are impartially interpreted and applied, peer evaluations are understood as being fair. It is the impartial interpretation and application of shared norms and standards that make for a fair process, which—psychologically (Tyler, 2006) and epistemologically—legitimizes peer review outcomes, content, and institutions.
This is why critics' charge of bias in peer review is so troubling: Threats to the impartiality of review appear to threaten peer review's psychological and epistemic legitimacy. Although there are a few exceptions (Lamont, 2009; Lee, in press; Mallard, Lamont, & Guetzkow, 2009), variations in the interpretation and application of epistemic norms and values are almost always conceived of as problematic. Failures in impartiality lead to outcomes that result from the “luck of the reviewer draw” (Cole, Cole, & Simon, 1981, p. 885), fail to uphold the meritocratic image of knowledge communities (Lee & Schunn, 2011; Merton, 1973), protect orthodox theories and approaches (Travis & Collins, 1991), insulate “old boy” networks (Gillespie, Chubin, & Kurzon, 1985; McCullough, 1989), encourage authors to “chase” disputable standards (Ioannidis, 2005, p. 696), and mask bad faith efforts by reviewers who also serve as competitors (Campanario & Acedo, 2005). Perceived partiality leads to dissatisfaction among those whose professional success or failure is determined by review outcomes (Gillespie, Chubin, & Kurzon, 1985; McCullough, 1989; Ware & Monkman, 2008).
The charge of bias also threatens the social legitimacy of peer review. Peer review signals to the body politic that the world of science and scholarship takes seriously its social responsibilities as a self-regulating, normatively driven community. The enormity and complexity of contemporary science and its ramified institutional arrangements are such that peer review has, in the words of Biagioli (2002, p. 34), been “elevated to a ‘principle’ — a unifying principle for a remarkably fragmented field.” As a consequence, the system is held to almost impossibly strict standards and routinely exposed to intense scrutiny by insiders and outsiders alike, including elected politicians (Gustafson, 1975; Walsh, 1975).

(This article could have been more concisely written imo, *grumble*)

They go through lots of different potential forms of bias, with this one being likely to be the most relevant to people here imo:

Content-Based Bias

Content-based bias involves partiality for or against a submission by virtue of the content (e.g., methods, theoretical orientation, results) of the work. Since different types of content-based bias challenge the thesis of impartiality in different ways, we will save analysis of these challenges to discussion of the subtypes. Content-based bias is primarily studied in the context of scientific disciplines. This is because the overarching concern motivating research on content-based bias is whether peer review is capable of the kind of self-regulation that encourages scientific progress and the achievement of other scientific goals. Most studies attempt to demonstrate content-based bias by showing that review outcomes vary as a function of the submission's content. However, when such studies are not available, surveys or anecdotal evidence from researchers or grant program managers are appealed to instead.
Many hypothesize that reviewers will evaluate more favorably the submissions of authors who belong to similar “schools of thought,” a form of “cognitive cronyism” (Travis & Collins, 1991, p. 323). The perception that cognitive cronyism is at play in peer review contexts is evidenced by conversations among grant committee members at the U.K. Science and Engineering Research Council, which reveal attempts to contextualize reviewer recommendations by identifying theoretical and subdisciplinary affiliations between reviewers and proposal authors (Travis & Collins, 1991). Sandström (2009) operationalized cognitive cronyism in reviews by examining the relationships between key noun phrases appearing in the titles and abstracts of papers being reviewed and papers written by the reviewers, hypothesizing that reviewers would favor work that was similar to their own. The data did not support the hypothesis.
At what point does cognitive difference become discrimination? Travis and Collins (1991) contrast cognitive cronyism with bias based on social status. For Travis and Collins, cognitive cronyism is not pernicious like social status bias so long as the boundaries of cognitive communities and social hierarchies do not coincide. However, in cases where they do coincide, outsiders may find “old-boy networks” that control journal and conference content (Hull, 1988, p. 156) and citation networks (Ferber, 1986) difficult to penetrate for social reasons disguised as purely cognitive ones (Lee & Schunn, 2011).
If reviewers prefer research that is similar in cognitive orientation and content to their own, then we would expect that, on the whole, reviewers disfavor research inconsistent with their theoretical orientation as well as research falling outside the mainstream, including interdisciplinary and transformative research.
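Sandström's operationalization in the excerpt above is only loosely described, but the general idea of scoring reviewer-submission similarity by term overlap can be sketched very simply. This is an illustrative toy (Jaccard overlap of crudely extracted words, not Sandström's actual noun-phrase pipeline), and the example abstracts are invented:

```python
# Illustrative sketch (NOT Sandström's actual method): one crude way to
# operationalize "cognitive similarity" between a reviewer and a submission
# is term overlap between key phrases of their texts. Real noun-phrase
# extraction would use an NLP toolkit; here a word set stands in for it.

def key_phrases(text):
    """Naive stand-in for noun-phrase extraction: lowercased content words."""
    return {w.strip(".,;:()").lower() for w in text.split() if len(w) > 3}

def jaccard_similarity(a, b):
    """Jaccard overlap of two phrase sets: |A ∩ B| / |A ∪ B|."""
    sa, sb = key_phrases(a), key_phrases(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical abstracts, invented for illustration only.
reviewer_abstract = "Cladistic methods for phylogenetic classification of taxa"
submission_abstract = "Phenetic classification of taxa using overall similarity"
score = jaccard_similarity(reviewer_abstract, submission_abstract)
# Under the cognitive-cronyism hypothesis, higher scores would predict more
# favorable reviews; Sandström (2009) found no such association in his data.
```

The point of the measure is that it needs no self-report from reviewers: similarity is computed directly from the texts, and the hypothesis is then tested by checking whether it correlates with review outcomes.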

Another couple:

Confirmation bias

In the psychological literature, confirmation bias is the tendency to gather, interpret, and remember evidence in ways that affirm rather than challenge one's already held beliefs (Nickerson, 1998). Historical and philosophical analyses have demonstrated the obstructive and constructive role that confirmation bias has played in the course of scientific inquiry, theorizing, and debate (Greenwald, Pratkanis, Leippe, & Baumgardner, 1986; Solomon, 2001). In the context of peer review, confirmation bias is understood as reviewer bias against manuscripts describing results inconsistent with the theoretical perspective of the reviewer (Jelicic & Merckelbach, 2002). As such, confirmation bias can also be classified as a type of bias that varies as a function of reviewer characteristics. Confirmation bias challenges the impartiality of peer review by questioning whether reviewers evaluate submissions on the basis of their content and relationship to the literature, independently of their own theoretical/methodological preferences and commitments. Confirmation bias also challenges the impartiality of scientists qua scientists by questioning their ability to evaluate scientific hypotheses on the basis of the evidence independently of their “desires, value perspectives, cultural and institutional norms and presuppositions, expedient alliances and their interests” (Lacey, 1999, p. 6).
Empirical study suggests reviewers are vulnerable to confirmation bias. Ernst, Resch, and Uher (1992) found that referees who had published work in favor of a controversial clinical intervention judged a manuscript whose data supported the use of that intervention more favorably than those who had published work against it. Confirmation bias for or against manuscripts may be rooted in biased assessments along more specific dimensions of evaluation. For example, Mahoney (1977) found that reviewers judged the methodological soundness, data presentation, scientific contribution, and publishability of a manuscript to be of higher quality when its data were consistent with the reviewer's theoretical orientation. However, consistency between a reviewer's theoretical orientation and a manuscript's reported results does not automatically lead to confirmation bias. Hull's (1988) analysis of reviewer recommendations for Systematic Zoology demonstrates that, during a time of warring schools of taxonomy, confirmation bias among reviewers was “far from total” (p. 333) since allies can disagree on fundamental tenets and wish to prevent the publication of weak papers that could become easy targets for rivals.

They spell Wessely wrong here... oh-oh, harassment!

Conservatism

Peer review is often censured for its conservativism, that is, bias against groundbreaking and innovative research (Braben, 2004; Chubin & Hackett, 1990; Wesseley, 1998). Conservativism violates the impartiality of peer review by suggesting that reviewers do not interpret and apply evaluative criteria in identical ways since what count as the proper criteria of evaluation—and their relative weightings—are disputed. Although some challenge the suggestion that conservativism is epistemically problematic (Shatz, 2004), most argue that conservativism threatens scientific progress by stifling the funding and public articulation of alternative and revolutionary scientific theories (Stanford, 2012). More locally, conservativism violates explicit mandates, articulated by journals and granting institutions, to fund and publish innovative research (Frank, 1996; Horrobin, 1990; Luukkonen, 2012).
Many have voiced concern about conservativism in peer review, including past directors at the NSF and NIH (Carter, 1979; Kolata, 2009) and applicants to these institutions (Gillespie et al., 1985, p. 49; McCullough 1989, p. 83). Research suggests that authors proposing unorthodox as opposed to orthodox claims must meet a higher burden of proof: Resch, Ernst, and Garrow (2000) demonstrated that studies supporting unorthodox medical treatments were rated less highly even though the supporting data were equally strong. Qualitative research reveals another possible source for conservativism: for many grant panelists, “frontier” research is understood as “paradigm-shifting” and “revolutionary” (Luukkonen, 2012, p. 54), while “excellent” research is understood as involving “methodological rigour and solid quality of the research” (Luukkonen, 2012, p. 54). Because of the uncertainty surrounding the pursuit of novel methods and theories—and the need for multiple contingency plans should a new experiment or project not go as planned—it may be more difficult for frontier research to appear excellent qua methodologically rigorous or solid.
There is a paucity of quantitative work on whether and where conservativism arises in peer review. This gap indicates a crucial area for future research—one facing methodological and conceptual challenges. Since all manuscripts and grant proposals aim to be novel in some respect, studies on conservativism must find ways to measure degrees of novelty and/or parse out how different types of novelty (e.g., in methods, theory, application context, research question, or statistical analyses) impact peer evaluations.

They discuss different types of peer review.

I'm totally in favour of open peer review, and more of an emphasis on post-publication criticism. The arguments against this seem to reveal to me how deeply flawed our systems are:

Despite the potential advantages of open peer review, researchers and scholars seem somewhat reticent to adopt it. In Ware and Monkman's (2008) survey, only 13% preferred open review to other models and only 27% thought it could be an effective form of review compared with 17% in the Melero and Lopez-Santovena (2001) study. Nearly half of all Ware and Monkman's (2008) respondents said that open peer review would make them less likely to review. Other studies have noted that disclosing the reviewer's name would act as a disincentive and lead to a decline in the potential pool of willing reviewers (Baggs et al., 2008; van Rooyen, Delamothe, & Evans, 2010). Some scholars note that reviewer anonymity protects the social cohesion of research groups by allowing same-group reviewers to “play down their areas of disagreement” in public (Hull, 1988, p. 334). More generally, scholars feel that “anonymity protects younger, less powerful reviewers from possible retribution on the part of the rejected author” (Peters & Ceci, 1982b, p. 251).

Here's the conclusion:

Conclusions and Future Research

Impartiality ensures both the consistency and meritocracy of peer review. Research on bias in peer review—predicated on the ideal of impartiality—raises not just local hypotheses about specific sources of partiality, but much broader questions about whether the processes by which knowledge communities regulate themselves are epistemically and socially corrupt. Contra impartiality, the evidence suggests that peer evaluations vary as a function of author nationality and prestige of institutional affiliation; reviewer nationality, gender, and discipline; author affiliation with reviewers; reviewer agreement with submission hypotheses (confirmation bias); and submission demonstration of positive outcomes (publication bias).
However, a closer look at the empirical and methodological limitations of research on bias raises questions about the existence, extent, and normative status of many hypothesized forms of bias. Psychometrically oriented research is predicated on the questionable assumption that disagreement among reviewers is not normatively appropriate or desirable. Research on bias as a function of author characteristics adopts the untested assumption that authors belonging to different social categories submit manuscripts and grant proposals of comparable quality. Despite vocal concerns about conservativism in science, there is no empirical evidence (beyond anecdote) to buttress or belie such worries. And the evidence for bias against interdisciplinary research is mixed, as is the evidence for bias against female authors and authors living in non-English-speaking countries.
Research on bias in peer review also suggests that peer review is social in ways that go beyond the social categories to which authors and reviewers belong: Relationships between individuals in the process impact outcomes (e.g., affiliation bias), and individuals make decisions conditioned on beliefs about what others value (e.g., publication bias). Future research might usefully investigate these complex and dynamic social relations. Consider, for example, how the editor's relationships and beliefs about other actors may have an impact on his/her decisions. On the basis of previous experience with reviewers, the editor may differentially value and preferentially assign reviewers to manuscripts, which may alter final recommendations. Frequent or highly sought authors to the journal may develop a privileged relationship with the editor and with potential reviewers. Editors may feel peer pressure when evaluating manuscripts submitted by frequent reviewers and editorial board members (Lipworth, Kerridge, Carter, & Little, 2011). The readership may function as an invisible hand in the selection of authors and manuscript content, since the editor will need to be cognizant of the needs and wants of the marketplace. An editor may also be influenced by her/his relationship with the editorial board and/or publisher (commercial, academic, or society). The editor's strategy or vision for the journal may have a bearing on which manuscripts are reviewed and ultimately accepted for publication. As Chubin and Hackett (1990, p. 92) note, “[t]he journal editor occupies a delicate position between the author and reviewers, alternating among the roles of wordsmith and gatekeeper, caretaker and networker, literary agent and judge.”
Not all of these sources of social influence impact peer review in problematic ways. For example, the ways in which authors, reviewers, and editors anticipate each others' scrutiny and judgment may serve to improve the quality of each of their contributions (Bailar, 1991; Hirschauer, 2009), and editors' personal connections allow them to learn about and capture high-impact papers for publication (Laband & Piette, 1994). These examples suggest that the sociality of peer review can be structured “to enrich, rather than threaten” the well-being of peer review (Lipworth et al., 2011, p. 1,056). A natural direction for future research includes articulating and assessing alternative normative models that acknowledge reviewer partiality, with a focus on the epistemic and cultural bases for reviewer disagreement; the ways editors and grant program managers anticipate, capitalize on, and manage reviewer disagreement; and the ways publication venues and funding opportunities should be structured to accommodate reviewer differences (Hargens & Herting, 1990; Lee, in press). Finally, the inescapable sociality and partiality of peer evaluation raise questions about whether impartiality can or should be upheld as the ideal for peer review.
 

Shell

Senior Member
Messages
477
Location
England
I think one area of peer-review that makes it untenable is power structures. By calling it "peer" review we are supposed to believe that those who review research, studies and even articles are on the same power level as those they review. This isn't often the case. People with powerful politics and of course money behind them are able to ensure that criticism of their work either doesn't happen, doesn't get published, or is ridiculed.
The standards for publication seem pretty low as well. Whatever happened to never making assertions without data to back them?
I've had the impression that so-called peers reviewing research is just an exercise in ticking a mate's box as well as possible fears of bucking the system.
There's also a truly 'orrible trend of rewarding failure. I used to call it promoting them out of the way, but in some cases there seems to be a deliberate promotion of those who can lie with impunity.
While this is happening science is dead in the water.