From Think, by Simon Blackburn
Suppose you decide to check yourself out for some disease. Suppose that this disease is quite rare in the population: only about one in a thousand people suffer from it. But you go to your doctor, who says he has a good test for it. The test is in fact over 99 per cent reliable!
Faced with this, you take the test. Then — horrors! — you test positive. You have tested positive, and the test is better than 99 per cent reliable. How bad is your situation, or in other words, what is the chance you have the disease?
Most people say, it’s terrible: you are virtually certain to have the disease.
But suppose, being a thinker, you ask the doctor a bit more about this 99 per cent reliability. Suppose you get this information:
(1) If you have the disease, the test will say you have it.
(2) The test sometimes, but very rarely, gives ‘false positives’. In only a very few cases — around 1 per cent — does it say that someone has the disease when they do not.

These two together make up the better than 99 per cent reliability. You might think that you are still virtually certain to have the disease. But in fact this is entirely wrong. Given the facts, your chance of having the disease is a little less than 10 per cent.
Why? Well, suppose 1,000 people take the test. Given the general incidence of the disease (the ‘base rate’), one of them might be expected to have it. The test will say he has it. It will also say that 1 per cent of the rest of those tested, i.e. roughly ten people, have it. So eleven people might be expected to test positive, of whom only one will have the disease. It is true the news was bad — you have gone from a 1 in 1,000 chance of disease to a 1 in 11 chance — but it is still far more probable that you are healthy than not. Getting this answer wrong is called the fallacy of ignoring the base rate.
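The counting argument above is just Bayes' rule in disguise. A minimal sketch of the same calculation (the variable names are my own; the numbers come straight from the example):

```python
# Bayes'-rule check of the worked example: rare disease, reliable test.
prior = 1 / 1000            # base rate: 1 in 1,000 people have the disease
sensitivity = 1.0           # (1) if you have the disease, the test says so
false_positive_rate = 0.01  # (2) about 1 per cent of healthy people test positive

# Total probability of testing positive:
# P(pos) = P(pos | disease) * P(disease) + P(pos | healthy) * P(healthy)
p_positive = sensitivity * prior + false_positive_rate * (1 - prior)

# Bayes' rule: P(disease | pos) = P(pos | disease) * P(disease) / P(pos)
posterior = sensitivity * prior / p_positive

print(f"Chance of disease given a positive test: {posterior:.1%}")
```

This comes out at roughly 9 per cent, matching the ‘1 in 11’ figure: of the eleven or so positives per thousand people tested, only one actually has the disease.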