
A test for a disease is 99% accurate, and you test positive, so that means you have a 99% chance of having the disease, right? No, very wrong!

Hip

Senior Member
Messages
17,858
A fascinating and easy-to-follow short video explains Bayes' law as applied to medical tests.

It tells a story of going to a doctor because you feel ill, and the doctor thinks you might have a quite rare disease which affects 0.1% of the population.

So the doctor orders a blood test that can accurately detect this disease with 99% sensitivity (meaning the test correctly detects the presence of this disease in 99% of cases, and only produces a false positive result in 1% of cases).

The results of your test come back, and the test finds you positive.


So given this positive test result, what are the chances that you actually have the disease?

Most people would probably say that there is a 99% chance they have the disease.

But in fact that's completely wrong! The video explains that, because of Bayes' law, the chance of you having this disease given the positive test result is only 9%!

See the video at 2:06 for an easy to understand explanation.
 

Markus83

Senior Member
Messages
277
Didn't watch the video...

orders a blood test that can accurately detect this disease with 99% sensitivity (meaning the test correctly detects the presence of this disease in 99% of cases,
Correct.

and only produces a false positive result in 1% of cases).
Not correct. Sensitivity does not say anything about the probability of false positives. That information is given by the specificity (the probability that a person who doesn't have the disease tests negative).

The bottom line of Hip's example is that if you test for rare diseases (e.g. cancer screening, especially in younger people), a positive test result is most likely a false positive. I saw that with a friend. He told me over the telephone: "I very likely do have cancer." His doctor had ordered a test (CEA) because my friend had weight loss and night sweats. This test turned out to be positive. I told my friend that he most likely doesn't have cancer and that the test is probably a false positive, for example because of the chronic inflammatory process he has. Then he got a CT of the lungs and abdomen, ultrasound, a urological examination, stomach and gut investigations, thyroid sonography and so on. Everything was normal; he had no cancer. The problem was with his doctor: you should never ever use a tumor marker for cancer screening (apart from very few exceptions like PSA).

Let's do the math on Hip's example with a test sensitivity of 0.99, a (lower) specificity of 0.95, and a disease incidence of 1 in 1000. Let's say we screen a thousand patients with the test, of whom one has the disease and all the others are healthy. With a test sensitivity of 99%, we can practically assume that the one ill person correctly tests positive.

Now, how many of the healthy people test positive? We screen 999 healthy people, and the test specificity of 0.95 says that 95% of healthy people correctly get a negative test result. That means that about 50 people (5% of 999) get a false positive test. Together with the one patient who correctly tested positive, we have altogether 51 positives, of whom only 1 really has the disease. So if you get a positive test result, your chance of really having the disease is 1 in 51, and the chance that your test is a false positive and you don't have the disease is 50/51, or about 98%.
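Here is a minimal sketch of this counting argument in Python (the function name is just for illustration; sensitivity 0.99, specificity 0.95 and 1 ill person per 1000 screened are the numbers from above):

```python
def positive_predictive_value(population, n_ill, sensitivity, specificity):
    """Chance that a positive test is a true positive, estimated by
    counting the expected true and false positives in the screened group."""
    n_healthy = population - n_ill
    true_positives = n_ill * sensitivity              # ill people the test catches
    false_positives = n_healthy * (1 - specificity)   # healthy people flagged anyway
    return true_positives / (true_positives + false_positives)

# 1000 people screened, 1 of them ill, sensitivity 0.99, specificity 0.95
ppv = positive_predictive_value(1000, 1, 0.99, 0.95)
print(f"Chance a positive result is real: {ppv:.1%}")  # about 2%, i.e. roughly 1 in 51
```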

Practically, that means you should look for validated tests that come with numbers for sensitivity and specificity, where the specificity should be high (at least 95%), and you should only test for diseases that you are, to some extent, likely to have (because of history, symptoms, etc.).
 

Hip

Senior Member
Messages
17,858
@Markus83, you are right, I don't think I wrote it correctly. I am pretty brain fogged at present, so cannot think clearly. Perhaps you would like to translate the statement in the video into the language of test sensitivity and specificity:

Here's what the video said:
The test will correctly identify 99% of people that have the disease, and only incorrectly identify 1% of the people who don't have the disease.
 

Markus83

Senior Member
Messages
277
The test will correctly identify 99% of people that have the disease,
This means the test has a sensitivity of 0.99, or 99%.

and only incorrectly identify 1% of the people who don't have the disease.
He means that 1% of the people who don't have the disease incorrectly test positive. This translates to a specificity of 0.99, or 99%.

In this example the sensitivity has the same value as the specificity, namely 0.99 or 99%. To make the difference a little clearer, I used a specificity of 0.95 and a sensitivity of 0.99 in my example.
 

Hip

Senior Member
Messages
17,858
@Markus83 would you be able to write the general Bayesian equation for the probability P that you actually have the disease when you get a positive result on a test which has sensitivity Sn and specificity Sp, and the fraction of the general population with the disease is F?


It seems to me that when a disease is quite rare (so there is a low prior probability of someone having it to begin with), that is when Bayes' law becomes important in medical testing.
 

percyval577

nucleus caudatus et al
Messages
1,302
Location
Ik waak up
These are two different evaluations: one on the test-population relation, and a second one on the test-patient relation.
Bayes' theorem concerns the test-patient relation, comparable to the second pair below.
(Whether someone is in fact ill or not must of course be known by some means other than the test we are talking about.)

                          in fact ill        in fact not ill
test positive                  a                    m
test negative                  b                    n

sensitivity = a/(a+b) ........ if there are no b's, the test detects 100% (= 1.0) of ill people
specificity = n/(m+n) ........ if n = 5 and m = 5, the test detects 50% (= 0.5) of non-ill people

pos. pred. value = a/(a+m) ... if there are no m's, a positive test says a tested person is ill for sure
neg. pred. value = n/(b+n) ... if n = 5 and b = 5, a negative test says you are healthy with a likelihood of 50%
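A small sketch of these four quantities in Python, using the a/b/m/n counts from the table above (the example counts are hypothetical, roughly matching the screening example with specificity 0.95):

```python
def table_metrics(a, b, m, n):
    """Test metrics from a 2x2 table: a = true positives, b = false negatives,
    m = false positives, n = true negatives."""
    return {
        "sensitivity": a / (a + b),       # ill people the test detects
        "specificity": n / (m + n),       # healthy people the test clears
        "pos_pred_value": a / (a + m),    # chance a positive result is real
        "neg_pred_value": n / (b + n),    # chance a negative result is real
    }

# Hypothetical counts: 1000 people screened, 1 ill and detected, 50 false positives
print(table_metrics(a=1, b=0, m=50, n=949))
```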
 

Markus83

Senior Member
Messages
277
P = correct positives / (correct positives + false positives) (*1*)

You need to multiply P by 100 if you want your result in percent.

Sn = sensitivity, for example 0.99
Sp = specificity, for example 0.95
F = fraction of the general population with the disease, for example 0.001 (1 in 1000)

Then the correct positives are given by: F*Sn (*2*)
The false positives are given by: (1-F)*(1-Sp) (*3*)
where (1-F) is the fraction of the population without the disease, and (1-Sp) is the part of the healthy population that tests positive.

(*2*) and (*3*) inserted in (*1*) give:

P = F*Sn / (F*Sn + (1-F)*(1-Sp))

That should be the formula. As I said, multiplying P by 100 gives you the likelihood, in percent, that you really have the disease in case of a positive test result under the given F.
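Here is a minimal sketch of the formula in Python, checked against the numbers from the video in the first post (prevalence 0.001, sensitivity and specificity both 0.99); the function name is just illustrative:

```python
def prob_disease_given_positive(f, sn, sp):
    """Bayes' law: probability of having the disease given a positive test.
    f = prevalence (fraction of population with the disease),
    sn = sensitivity, sp = specificity."""
    true_pos = f * sn                 # fraction of population: ill AND tests positive
    false_pos = (1 - f) * (1 - sp)    # fraction of population: healthy AND tests positive
    return true_pos / (true_pos + false_pos)

# Video's example: prevalence 0.1%, sensitivity 99%, specificity 99%
print(f"{prob_disease_given_positive(0.001, 0.99, 0.99):.0%}")  # about 9%
```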

Hope this is correct...I'm tired now :)

@Hip: Don't know if this is what you wanted; I don't know what a Bayesian equation is and didn't look it up.
 

Hip

Senior Member
Messages
17,858
Thank you for that equation, @Markus83 (sorry to tire you out!).

So if in future a good test for ME/CFS were developed, and that test had, for example, 99% sensitivity and 99% specificity, then given that 0.2% of the population have ME/CFS, if a randomly chosen person tested positive on this test, the chance (expressed as a percentage) that they actually have ME/CFS would be:

Chance that they have ME/CFS = 100 * F * Sn / (F * Sn + (1 - F) * (1 - Sp))

= 100 * 0.002 * 0.99 / (0.002 * 0.99 + (1 - 0.002) * (1 - 0.99))

= 16.6%
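A quick check of that arithmetic in Python, plugging the same numbers straight into the formula above:

```python
f, sn, sp = 0.002, 0.99, 0.99  # ME/CFS prevalence 0.2%, hypothetical test 99%/99%
p = 100 * f * sn / (f * sn + (1 - f) * (1 - sp))
print(f"{p:.1f}%")  # about 16.6%
```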
 

percyval577

nucleus caudatus et al
Messages
1,302
Location
Ik waak up
P = correct positives / (correct positives + false positives) (*1*)
....
P = F*Sn / ( F*Sn + (1-F)*(1-Sp) )
There is also an N version of Bayes' theorem, for the chance that you do not have the disease given a negative test result:

N = (1-F)*Sp / ( F*(1-Sn) + (1-F)*Sp )

So in Hip's example, if the test is negative:

= 100 * (1 - 0.002)*0.99 / ( 0.002*(1 - 0.99) + (1 - 0.002)*0.99 )

= 100 * 0.98802 / 0.98804 = 99.998% chance that you don't have ME/CFS
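A minimal sketch of this negative-test version in Python, using the same ME/CFS numbers (prevalence 0.002, sensitivity and specificity both 0.99); the function name is just illustrative:

```python
def prob_healthy_given_negative(f, sn, sp):
    """Probability of NOT having the disease given a negative test.
    f = prevalence, sn = sensitivity, sp = specificity."""
    true_neg = (1 - f) * sp       # healthy AND tests negative
    false_neg = f * (1 - sn)      # ill AND tests negative
    return true_neg / (true_neg + false_neg)

print(f"{prob_healthy_given_negative(0.002, 0.99, 0.99):.3%}")  # about 99.998%
```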
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Tests for rare diseases are not given to just anyone; they are given based on clinical judgement. Hence I would not have much confidence that "the chances of you having this disease given the positive test result are only 9%" is a reasonable estimate.

But then you have to ask, where do the sensitivity and specificity measures come from? If there is no "gold standard" foundation, all you have is circular reasoning. Yet foundationalism itself is just a philosophical point of view. Where does it end? See also: https://en.wikipedia.org/wiki/Pancritical_rationalism
 

Hip

Senior Member
Messages
17,858
Tests for rare diseases are not given to just anyone; they are given based on clinical judgement. Hence I would not have much confidence that "the chances of you having this disease given the positive test result are only 9%" is a reasonable estimate.

Yes, that's true. The above scenario only applies to blood tests that are given at random. If the test is given to a patient because they are already showing symptoms of that rare disease, then the prior probability that they have the disease is higher, and that higher prior probability would have to be factored into the Bayes' law equation, presumably as the F term in the formula above.
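As a rough illustration in Python, the prior simply replaces F in the formula above; the 30% prior below is a made-up value standing in for a patient whose symptoms already point to the disease:

```python
def prob_disease_given_positive(f, sn, sp):
    """Bayes' law: probability of disease given a positive test, with prior f."""
    return f * sn / (f * sn + (1 - f) * (1 - sp))

sn, sp = 0.99, 0.99
# General-population prevalence (0.2%) vs. a hypothetical, much higher prior (30%)
# for a patient who already shows symptoms suggestive of the disease.
for prior in (0.002, 0.30):
    print(f"prior {prior:.1%} -> chance of disease given a positive test: "
          f"{prob_disease_given_positive(prior, sn, sp):.1%}")
# With the 0.2% prior this is about 16.6%; with the 30% prior it is about 97.7%.
```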