Nature editorial: the human desire to be right is a major problem for robust science

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
Let’s think about cognitive bias : Nature News & Comment

broadcaster Jon Ronson said:
Ever since I first learned about confirmation bias I’ve been seeing it everywhere.

Nature's latest on problems in research focuses on the problem of being human.
One enemy of robust science is our humanity — our appetite for being right, and our tendency to find patterns in noise, to see supporting evidence for what we already believe is true, and to ignore the facts that do not fit.

There follows a series of recommendations, including crowdsourcing analysis (one dataset, lots of different teams, which produces more accurate, more nuanced and less biased results); blind analysis (prize to anyone who can explain this in lay terms); and preregistering analysis plans [and sticking to them!]. Worth a read.
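On blind analysis, here is a rough toy sketch (my own illustration with made-up numbers, not taken from the article) of the basic idea: the analyst builds and tunes the whole pipeline on data whose group labels have been secretly scrambled, and only sees the real answer once every analytical choice has been frozen.

```python
# A toy sketch of blind analysis (my own illustration, made-up numbers).
# The analyst tunes the whole pipeline on data whose group labels have been
# secretly scrambled, so no analytical choice can be steered by the hoped-for
# answer; only once the pipeline is frozen is it re-run on the true labels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome scores for a treatment group and a control group
treatment = rng.normal(loc=0.3, scale=1.0, size=50)
control = rng.normal(loc=0.0, scale=1.0, size=50)
scores = np.concatenate([treatment, control])
true_labels = np.array([1] * 50 + [0] * 50)

# Blinding step: someone other than the analyst scrambles the labels and keeps the key
blinded_labels = rng.permutation(true_labels)

def analysis(scores, labels):
    """The frozen analysis pipeline: here, just a two-sample t-test."""
    result = stats.ttest_ind(scores[labels == 1], scores[labels == 0])
    return result.statistic, result.pvalue

# Develop and debug every choice on the blinded data...
print("Blinded run:   t = %.2f, p = %.3f" % analysis(scores, blinded_labels))
# ...then, and only then, look at the real answer.
print("Unblinded run: t = %.2f, p = %.3f" % analysis(scores, true_labels))
```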

There are several related articles:

The editorial points to the big-data fields of genomics and proteomics, which got their act together after early work 'was plagued by false positives':
That need is particularly acute in statistical data analysis, where some of the best-established methods were developed in a time before data sets were measured in terabytes, and where choices between techniques offer abundant opportunity for errors. Proteomics and genomics, for example, crunch millions of data points at once, over thousands of gene or protein variants. Early work was plagued by false positives, before the spread of techniques that could account for the myriad hypotheses that such a data-rich environment could generate.

...Finding the best ways to keep scientists from fooling themselves has so far been mainly an art form and an ideal. The time has come to make it a science. We need to see it everywhere.
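The 'myriad hypotheses' point is essentially about multiple-testing correction. A minimal simulated sketch (my own numbers, not the editorial's) shows why early omics work drowned in false positives, and how a simple correction reins them in:

```python
# A simulated sketch (my own numbers) of the false-positive problem: test 10,000
# genes that truly do nothing, and naive p < 0.05 thresholding still "finds" hundreds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_samples = 10_000, 20

# Expression values for two groups in which NO gene genuinely differs
group_a = rng.normal(size=(n_genes, n_samples))
group_b = rng.normal(size=(n_genes, n_samples))

p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue

# Roughly 5% of null genes clear an uncorrected 0.05 threshold...
print("Uncorrected p < 0.05:", int(np.sum(p_values < 0.05)))
# ...while a Bonferroni-style correction (0.05 / number of tests) removes them
print("Bonferroni-corrected:", int(np.sum(p_values < 0.05 / n_genes)))
```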

Added: a couple of interesting points from the crowdsourcing analysis paper, which looks at what happened when 29 different teams analysed the same dataset: after discussion, most of the approaches were deemed valid, though they produced a range of answers:

Crowdsourcing research can reveal how conclusions are contingent on analytical choices.

Under the current system, strong storylines win out over messy results. Worse, once a finding has been published in a journal, it becomes difficult to challenge. Ideas become entrenched too quickly, and uprooting them is more disruptive than it ought to be. The crowdsourcing approach gives space to dissenting opinions.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
It is not merely 'desire to be right'. It is the fact that careers depend on it (positive results and exaggeration).

Who is going to fund the guy/lady who has a history of null results? Who is going to get excited about studies that found no effect?

We need to change how science is funded, if we are to encourage less biased analyses.
 

snowathlete

Senior Member
Messages
5,374
Location
UK
It is not merely 'desire to be right'. It is the fact that careers depend on it (positive results and exaggeration).

Who is going to fund the guy/lady who has a history of null results? Who is going to get excited about studies that found no effect?

We need to change how science is funded, if we are to encourage less biased analyses.

I agree. There is a range of problems, as Simon highlighted, all true. But there is also the issue that people (including whole groups) are invested in certain outcomes, as those outcomes affect income and status.

You see all the same things in other fields: history, etc.

Good that these things are being debated openly.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
It is not merely 'desire to be right'. It is the fact that careers depend on it (positive results and exaggeration).
In a broader context, this extends to the entire notion of authority and success in society. It's a social problem. Expert good, doubting person bad. I would like to quote Bertrand Russell:

[Image: Bertrand Russell quote on fools and wise men]


Getting around this issue, in science, was in part what Critical Rationalism was intended for.

Getting around this issue, in general, was in part the purpose of Pancritical Rationalism.

Public relations, political and managerial techniques are often about creating the illusion of certainty.

Doctors like to convey an attitude of authority. I suspect that is why evidence-based medicine has wide appeal, but evidence-based practice is disliked.

PS Apparently the image link does not work for everyone, so here is the quote:

"The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts. "
 

Antares in NYC

Senior Member
Messages
582
Location
USA
Let’s think about cognitive bias : Nature News & Comment
Nature's latest on problems in research focuses on the problem of being human.
Excellent find, @Simon. Great article.

It does explain some of the controversies and lack of progress in research for diseases like ME/CFS and Lyme. It's frustrating to read about the power dynamics in the world of scientific research, but I'm glad that this problem is now in the spotlight.

Seems like we didn't learn much from the experience of Galileo. Just replace the church with the appropriate medical boards/scientific authorities.
 

Woolie

Senior Member
Messages
3,263
Yep, it's a problem in research. But add an extra zero to that when you're talking about medical diagnosis:

Confirmation Bias and Diagnostic Errors
Deborah Cowley, MD reviewing Mendel R et al. Psychol Med 2011 May 20.
Seeking disconfirmatory evidence can help to improve diagnostic accuracy.

Psychology studies have shown that people tend to confirm their preconceived ideas and ignore contradictory information, a phenomenon known as confirmation bias. To examine the role of confirmation bias in psychiatric diagnosis, researchers gave a case vignette to 75 psychiatrists (mean duration of professional experience, 6 years) and 75 fourth-year medical students. Participants were asked to choose a preliminary diagnosis of depression or Alzheimer disease and to recommend a treatment. The vignette was designed so that depression would seem the most appropriate diagnosis. Participants could then opt to view up to 12 items of narrative information; their “summary theses” were balanced between the two diagnoses, but the narratives overall favored the dementia diagnosis.

For the preliminary diagnosis, 97% of psychiatrists and 95% of students chose depression. Thirteen percent of psychiatrists and 25% of students chose to see more confirmatory than disconfirmatory items. In the end, 59% of psychiatrists and 64% of students reached the correct diagnosis of Alzheimer disease. Psychiatrists performing confirmatory searches were less experienced and more likely to make the wrong diagnosis (70% vs. 27% of those who sought disconfirming information). Participants were more likely to make the wrong final diagnosis if they chose to view six or fewer pieces of additional information. Making the wrong diagnosis affected treatment decisions.

- See more at: http://www.jwatch.org/jp20110613000...as-and-diagnostic-errors#sthash.XuC92akW.dpuf
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
It is well known that confirmatory strategies have a high error rate. Popper emphasized one way to address this in Critical Rationalism: it's not enough to gather data that supports a hypothesis (though this is fine while developing a hypothesis); you have to create experiments that test the hypothesis. You have to actually go looking for disconfirmatory data.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
"The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts. "
It's a great quote!

But I don't think we should assume that We on the forum are universally wise and They are the fools; the thing is, we're all human, with our own biases, desire to be right and lack of interest in disconfirmatory evidence. The best we can hope for is to be more aware of those biases.

One of the interesting ideas in the crowdsourcing article I mentioned above is using independent teams as devil's advocates:
In analyses run by a single team, researchers take on multiple roles: as inventors who create ideas and hypotheses; as optimistic analysts who scrutinize the data in search of confirmation; and as devil's advocates who try different approaches to reveal flaws in the findings. The very team that invested time and effort in confirmation should subsequently try to make their hard-sought discovery disappear.

We propose an alternative set-up, in which the part of the devil's advocate is played by other research teams.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
One of the interesting ideas in the crowdsourcing article I mentioned above is using independent teams as devil's advocates:
It is indeed interesting, and it's supposed to be, in part, what happens through peer criticism. However, to do that requires that the data be made public to other researchers. Data secrecy is the enemy of good science.

Nobody is really wise. Just more or less wise. Human nature, brain limitations, cultural limitations, the impossibility of knowing everything, guarantee nobody gets it right in all situations. What makes wisdom different, in my view, is that some recognize they and others have limitations and work to mitigate that. The opposite is those who are so convinced they are right they ignore all criticism.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
A bit more about another article in the Nature special issue

How scientists fool themselves – and how they can stop : Nature News & Comment

The first principle is that you must not fool yourself — and you are the easiest person to fool.
Richard Feynman.

“People forget that when we talk about the scientific method, we don't mean a finished product,” says Saul Perlmutter, an astrophysicist at the University of California, Berkeley. “Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves.” So researchers are trying a variety of creative ways to debias data analysis — strategies that involve collaborating with academic rivals, getting papers accepted before the study has even been started and working with strategically faked data.



...There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says [psychologist and reproducibility researcher Brian] Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.

On the problem of big data, with the risk of being able to mine any answer with enough searching:
social scientist Andrew King said:
“I believe we are in the steroids era of social science,”
...Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston: “Our intuition when we start looking at 50, or hundreds of, variables sucks.”
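To make Baggerly's point concrete, here is a small simulation of my own (toy numbers, not from the article): with a couple of hundred unrelated candidate variables, pure noise will almost always hand you at least one "significant" correlate of the outcome.

```python
# A small simulation (my own toy numbers): one outcome with no real causes,
# 200 unrelated candidate variables, and the "best" correlate still looks impressive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_people, n_variables = 100, 200

outcome = rng.normal(size=n_people)                     # outcome driven by nothing
predictors = rng.normal(size=(n_variables, n_people))   # 200 unrelated variables

p_values = np.array([stats.pearsonr(v, outcome)[1] for v in predictors])

best = int(p_values.argmin())
print(f"'Best' predictor: variable {best}, p = {p_values[best]:.4f}")
print(f"Variables with p < 0.05: {int(np.sum(p_values < 0.05))} of {n_variables}")
```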

Solutions discussed include this:
One solution that is piquing interest revives an old tradition: explicitly considering competing hypotheses, and if possible working to develop experiments that can distinguish between them. This approach, called strong inference, attacks hypothesis myopia head on. Furthermore, when scientists make themselves explicitly list alternative explanations for their observations, they can reduce their tendency to tell just-so stories.
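As a very loose statistical analogue of that idea (a toy example of mine, not from the article): specify two competing models up front and let the data say which one holds up, rather than fitting only the favoured one.

```python
# A very loose statistical analogue of "strong inference" (my own toy example):
# state two competing models up front and let the data discriminate between them,
# instead of fitting only the favoured one.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
dose = np.linspace(0.5, 10, 30)
# Hypothetical ground truth: the response saturates at high dose
response = 5 * dose / (2 + dose) + rng.normal(scale=0.3, size=dose.size)

def linear(x, a, b):           # Hypothesis A: response rises linearly with dose
    return a * x + b

def saturating(x, vmax, k):    # Hypothesis B: response saturates (Michaelis-Menten form)
    return vmax * x / (k + x)

def aic(model, params):
    """AIC for a least-squares fit with Gaussian errors (up to an additive constant)."""
    rss = float(np.sum((response - model(dose, *params)) ** 2))
    n, k = dose.size, len(params)
    return n * np.log(rss / n) + 2 * k

p_lin, _ = curve_fit(linear, dose, response)
p_sat, _ = curve_fit(saturating, dose, response, p0=[5.0, 2.0])

# Lower AIC wins; the crucial step is that both hypotheses were stated and tested
print(f"AIC linear: {aic(linear, p_lin):.1f}   AIC saturating: {aic(saturating, p_sat):.1f}")
```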

Another one is getting peer review - and acceptance - of studies based on the submitted methodology, before the work is done. After all, peer reviewers are supposed to focus on methodological flaws, not whether or not they like the findings:
it should keep peer reviewers from discounting a study's results or complaining after results are known. “People are evaluating methods without knowing whether they're going to find the results congenial or not,” he says. “It should create a much higher level of honesty among referees.”

Visual summary:


[Image: reproducibility graphic from the Nature article]
 

Sean

Senior Member
Messages
7,378
Another one is getting peer review - and acceptance - of studies based on the submitted methodology, before the work is done. After all, peer reviewers are supposed to focus on methodological flaws, not whether or not they like the findings:
Sounds good. :thumbsup: