Your doctor tells you that you have tested positive for Statistics Deficit Disorder, a very rare condition affecting 3 people in 1,000 (0.3% of the population). She assures you that the test is very accurate: 98% of affected people will get a positive result, and it gives a false positive only 1% of the time. So you go home and spend the evening crying over your misfortune, because surely there is a 99% chance that you are one of those afflicted.
Wrong!
This is another example of applying intuitive logic (which quietly assumes an underlying 50/50 risk) to a situation where the condition is actually rare.
First consider that, by definition, all the false positives must come from tests carried out on people who do not have the disorder.
Second, that the reported 1% false-positive rate means that one out of every hundred non-affected people tested is wrongly given a positive result.
To take the next step, you must consider the relative size of the affected and non-affected groups. If we assume for this example that the population of your country is 100 million, it will make the calculations a little easier.
We know that 0.3% of the population (that is, 300,000 people) have Statistics Deficit Disorder. 98% of those with the disease will test positive – 98% of 300,000 is 294,000.
99.7% of the population (that is 99,700,000 people) do not have it. 1% of these will be given a false positive result if tested – that is, 1% of 99,700,000, which is 997,000 people.
To get the actual risk of having the disorder, given that you have tested positive, you need to find out what proportion of those tested positive actually have the disorder.
In all, there are 1,291,000 positive results (294,000 + 997,000), of which only 294,000 are accurate. The percentage of those with positive test results who actually have the disorder is therefore 294,000 / 1,291,000, which is only 22.77%.
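The counting argument above can be sketched as a short calculation; the variable names are just illustrative labels for the figures given in the text:

```python
# Worked example: counting positives in a population of 100 million,
# using the figures from the scenario above.
population = 100_000_000
prevalence = 0.003         # 0.3% have the disorder
sensitivity = 0.98         # 98% of affected people test positive
false_positive_rate = 0.01 # 1% of non-affected people test positive

affected = population * prevalence                   # 300,000 people
unaffected = population - affected                   # 99,700,000 people

true_positives = affected * sensitivity              # 294,000
false_positives = unaffected * false_positive_rate   # 997,000

total_positives = true_positives + false_positives   # 1,291,000
risk = true_positives / total_positives
print(f"{risk:.2%}")  # 22.77%
```

Note that the false positives (997,000) outnumber the true positives (294,000) more than three to one, purely because the healthy group is so much larger.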
Your feeling that you had a 99% risk of having this terrible condition has actually turned into a 22.77% risk! The test (which had such great credentials) turns out to be inconclusive, because the disease is so rare that false positives greatly outnumber true positives. This is true for most diseases and most preliminary investigations, so the likelihood of your actually having the disease is comfortingly small, even if you have a positive test result. This kind of reasoning involves conditional probability, and is formalized by Bayes' theorem, which lets such risks be calculated directly.
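The same answer can be reached without choosing a population size at all, by applying Bayes' theorem directly. The function name here is purely illustrative:

```python
def posterior(prevalence, sensitivity, false_positive_rate):
    """P(disorder | positive test), via Bayes' theorem.

    The numerator is the probability of being affected AND testing
    positive; the denominator is the total probability of testing
    positive (true positives plus false positives).
    """
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The scenario from the text: 0.3% prevalence, 98% sensitivity,
# 1% false-positive rate.
print(f"{posterior(0.003, 0.98, 0.01):.2%}")  # 22.77%
```

Because the population size cancels out of the ratio, the answer depends only on the three rates; notice how quickly the posterior rises if the disease is common (try a prevalence of 0.5).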
So, when the doctor says “We need to do some more tests”, she really means it!
Note that there are some tests where mistakes are impossible except as the result of human error.
Because initial tests are usually non-invasive, doctors use them to look, not directly for disease, but for signs associated with it – this is not completely reliable. However, follow-up tests tend to make use of samples such as biopsies where the disease can be directly detected or ruled out. This results in greater precision, at the cost of greater discomfort, expense and delay.