• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Bias in the ER: Doctors suffer from the same cognitive distortions as the rest of us

Messages
15,786
The point, once again, wasn’t that people were stupid. This particular rule they used to judge probabilities (the easier it is for me to retrieve from my memory, the more likely it is) often worked well. But if you presented people with situations in which the evidence they needed to judge them accurately was hard for them to retrieve from their memories, and misleading evidence came easily to mind, they made mistakes. “Consequently,” Amos and Danny wrote, “the use of the availability heuristic leads to systematic biases.” Human judgment was distorted by ... the memorable.
I think GPs are doing something similar all the time, primarily due to never having the time to get all of the evidence they need to make a good judgement.

If only 1 in 10,000 people presenting with Symptom Y ever have Problem X, then any individual patient has only a 1 in 10,000 chance of having Problem X; ergo, the flawed reasoning goes, every patient with Symptom Y can safely be discounted as having Problem X. By giving too much weight to the low odds, doctors inappropriately round a patient's small chance of having Problem X down to no chance at all.
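The arithmetic behind that fallacy is easy to make concrete (all numbers here are hypothetical, chosen only to match the 1-in-10,000 figure above):

```python
# A 1-in-10,000 condition is indeed rare for any single patient, but across a
# large population, "rounding the odds down to zero" guarantees every case is
# missed.

patients_with_symptom_y = 1_000_000   # hypothetical population of presentations
one_in = 10_000                       # base rate of Problem X among them

expected_cases = patients_with_symptom_y // one_in
print(expected_cases)  # 100 -- all missed if the odds are treated as zero
```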
 
Messages
2,391
Location
UK
Basically, a human's decision making is inherently weighted by the model of the world they hold in their head. The brain does not have the processing power to reassess everything every single time, so has evolved an extremely efficient way of basing decision making mostly on assumption. The drawback is new situations are heavily filtered in the light of past knowledge/experience; the world model seems invariably weak insofar as statistical reliability is concerned. It is why optical illusions work so well, and maybe the same for propaganda.

In my early days of engineering, I was mentored by an excellent engineer, and he had the very annoying habit of responding to most questions of mine with further questions. But I soon realised the worth of this, because what he was really investigating was whether I was actually asking the right question, and what my underlying reasoning was behind it. A question is invariably the result of a train of thought, and if that train of thought is flawed, then notionally answering that question will simply perpetuate the flawed thinking. I now often frustrate my colleagues by answering their questions with other questions :).
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I now often frustrate my colleagues by answering their questions with other questions
:)

Or as I like to say, questions are more important than answers. A poor answer to a good question may often be better than a good answer to a bad question ... or the wrong question.

Rule 8 from my blog 28 Rules of Thumb, which are all heuristics: question everything, including this.

Heuristics are how we get speed. Experts use them a lot. Careful rational thought takes too much time. If the heuristics do not properly match the situation they can lead you astray ... sometimes very far astray.

The book Thinking, Fast and Slow (written by the same Kahneman discussed in the article) deals with this situation. Most expert reactions are heuristics. Careful analysis takes time, effort and correction. Artificial intelligence is all about heuristics ... it's how you write expert systems.
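A minimal sketch of that rule-based expert-system style (the rules, facts, and conclusion names here are all invented for illustration, not real diagnostic criteria): each rule is a heuristic that fires whenever its conditions are already known facts.

```python
# Forward chaining: repeatedly apply rules until no new conclusions appear.
# Each rule is (set_of_required_facts, conclusion) -- all names hypothetical.

rules = [
    ({"fatigue", "post_exertional_malaise"}, "consider_me_cfs"),
    ({"tachycardia_on_standing"}, "consider_pots"),
]

def forward_chain(facts, rules):
    """Derive every conclusion reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fatigue", "post_exertional_malaise"}, rules))
```

The speed comes from the same place as human heuristics: the system never re-derives anything from first principles, it just pattern-matches conditions it has seen before.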

People do not have an intuitive grasp of a lot of mathematics, with perhaps a few mathematical geniuses as exceptions. We only kind of get it. That is why actually doing the calculations can be so important. It's the difference between science and pseudoscience, engineering and tinkering, rational analysis and guessing.
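A standard worked example of why actually doing the calculation matters (all numbers hypothetical): intuition says a positive result from a 99%-sensitive test makes the disease near-certain; Bayes' theorem says otherwise when the condition is rare.

```python
# Bayes' theorem for a rare condition and an imperfect test.

prevalence = 0.001        # 1 in 1,000 people have the condition (hypothetical)
sensitivity = 0.99        # P(positive | disease)
false_positive = 0.05     # P(positive | no disease)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_disease_given_positive = prevalence * sensitivity / p_positive
print(round(p_disease_given_positive, 3))  # 0.019 -- only about a 2% chance
```

The intuitive answer and the calculated answer differ by a factor of fifty, which is exactly the gap between guessing and rational analysis.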

This is one of the key lines to me:

To Redelmeier the very idea that there was a great deal of uncertainty in medicine went largely unacknowledged by its authorities.

Dealing with uncertainty should be at the core of medicine. Which means every decision needs to be at least acknowledged as potentially wrong. I call this idea Embracing Uncertainty. Things can be done to mitigate risk.


The brain does not have the processing power to reassess everything every single time, so has evolved an extremely efficient way of basing decision making mostly on assumption.

This is not quite how I would put it. I call this problem the Rationalist's Dilemma. It's not rational to expect to be perfectly rational. You usually don't have all the information, the time, or the motivation. Even someone who is super smart and learns much faster than normal could not be rational all the time. There is not enough time in a lifetime to be rational about everything. So the way I see it, we need to limit quality rational thinking to when it is most important ... but at the same time, without being completely rational and having perfect information, we might not even know when it is most important to be maximally rational. Mere mortals cannot be perfectly rational. So we need to treat rationality, like mathematics, as a tool we apply when we can see a need to.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I think GPs are doing something similar all the time, primarily due to never having the time to get all of the evidence they need to make a good judgement.
The financial imperative to rush through as many patients as possible, especially if governed by fixed payments etc., drives this situation to a large extent. Medicine is in many ways going backwards, even while medical science is still advancing (and taking dead end paths as well).
 
Messages
83
Although probably not practical in an E.R. situation, A.I.-based expert systems can be a useful tool in the diagnosis of rarer diseases, and can also be used to help avoid prescribing issues.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Here is an article written by Donald A. Redelmeier, the doctor the article in the first post is about.

Problems for clinical judgement: 1. Eliciting an insightful history of present illness
http://www.cmaj.ca/content/164/5/647.full

The "Ignoble failures" section is interesting. ;)

Also:
Problems for clinical judgement: 2. Obtaining a reliable past medical history
http://www.cmaj.ca/content/164/6/809.full
The "halo effects" and "sequencing effects" are interesting.
 

undiagnosed

Senior Member
Messages
246
Location
United States
Artificial intelligence is all about heuristics ... it's how you write expert systems.

While practically feasible artificial intelligence algorithms all use heuristics, there is a reinforcement learning universal artificial intelligence model called AIXI that is theoretically optimal, but not computable. It demonstrates the mathematical limits of how well an artificial intelligence agent can perform in a computable environment. So there are theoretical limits against which heuristic models can be compared.
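Since AIXI itself is incomputable, any runnable reinforcement-learning agent is necessarily a heuristic approximation. A toy tabular Q-learning sketch makes the idea concrete (the five-state corridor environment and all parameters below are invented purely for illustration):

```python
import random

# A corridor of 5 states; the only reward is for reaching the goal at the
# right end. The agent explores at random (off-policy Q-learning) and learns
# a value table from which a greedy policy is extracted afterwards.

random.seed(0)
n_states, goal = 5, 4
q = [[0.0, 0.0] for _ in range(n_states)]   # q[state][action]; 0 = left, 1 = right
alpha, gamma = 0.5, 0.9                     # learning rate, discount factor

for episode in range(200):
    s = 0
    while s != goal:
        a = random.randrange(2)                  # purely random exploration
        s2 = max(0, s - 1) if a == 0 else s + 1  # deterministic corridor moves
        r = 1.0 if s2 == goal else 0.0           # reward only at the goal
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# Greedy policy from the learned values: move right in every state.
print([max((0, 1), key=lambda a: q[s][a]) for s in range(goal)])  # [1, 1, 1, 1]
```

Every piece of this agent is a heuristic choice (random exploration, a fixed learning rate, a tabular value function), which is the practical trade-off AIXI's optimality result lets you quantify in principle.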
 
Messages
2,391
Location
UK
Heuristics are fine in closed loop systems where all functions are well understood. Not so in medicine, which is not understood to the same level as most engineered systems.
But of course closed loop control is also used where not all functionality is fully understood, and closing the loop helps cope with the discrepancies. One of the dangers in engineering is arrogantly believing the system is fully understood, and the modelling of it is therefore virtually perfect. Solutions based on that model will therefore also be presumed nigh on perfect. Until real life shows up loopholes in the model, and the "solution" fails to solve. It seems there are some medical "professionals" of the same persuasion.

But I agree with you: heuristics and expert systems, especially for medical use, need treating with considerable caution.
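A toy illustration of why closing the loop copes with an imperfect model (the plant, the gains, and the numbers are all invented for illustration): an open-loop command computed from a wrong model misses the target forever, while feedback on the measured error absorbs the mismatch.

```python
# The real plant responds differently than the designer's model assumes.
true_gain = 0.8          # real plant: output = true_gain * command
assumed_gain = 1.0       # designer's imperfect model of the plant
setpoint = 10.0

# Open loop: command computed once from the assumed (wrong) model.
open_loop_output = true_gain * (setpoint / assumed_gain)

# Closed loop: simple integral action driven by the measured error.
command, output = 0.0, 0.0
for _ in range(100):
    error = setpoint - output
    command += 0.5 * error       # the integrator absorbs the model mismatch
    output = true_gain * command

print(open_loop_output)          # 8.0 -- stuck 20% low, forever
print(round(output, 3))          # 10.0 -- on target despite the wrong model
```

The closed-loop controller never needed to know the true gain; it only needed to measure the error, which is exactly the discrepancy-coping behaviour described above.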
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
reinforcement learning
Reinforcement learning has major limitations, regardless of the system. It's how science used to work, but (aside from poor research practices in disciplines like psychiatry) it was abandoned in science as it can lead to bad decision making and hypotheses.

AIXI is new to me; I will have to look it up. I was taught AI, and taught AI, in the early 90s. I would like to know how it gets around local minima and maxima.