Why do knowers of Bayes's Theorem still commit the Base Rate Fallacy?
Source: 6 July 2014, "Do doctors understand test results?" by William Kremer, BBC World Service.
A 50-year-old woman, with no prior symptoms of breast cancer, participates in routine mammography screening. She tests positive, is alarmed, and wants to know from the physician whether she has breast cancer or what the chances are. Apart from the screening results, the physician knows nothing else about this woman. How many women who test positive actually have breast cancer?
a. nine in 10 b. eight in 10 c. one in 10 d. one in 100
Additional (needed) information: The probability that a woman has breast cancer is 1% ("prevalence" or base rate probability). If a woman has breast cancer, the probability that she tests positive is 90% ("sensitivity" or reliability rating). If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% ("false alarm rate" or false-positive rate).
I have used a similar example when teaching introductory probability and statistics for about 30 years now, and I suspect most instructors use some variation of this problem, if for no other reason than as an example of Bayes's Theorem.
What amazes me is that, as reported by the author, only 21% of gynaecologists gave the correct answer (let us please leave the solution of this problem to the student).
My more cosmic questions:

1. Why do our students not understand this basic application of Bayes's Theorem?
2. What can we as instructors do to explain this prior and posterior probability concept more clearly?
3. Why do the trained experts (gynaecologists) get this question wrong?
Solution 1:
I think the problem lies with the way we usually solve it. The students follow a procedure to get an answer (like the tree solution shown in the article), but nothing is learned about the structure of the problem. Whatever the answer is, we move on to the next problem.

Let $D$ be the event 'has disease', let $D^c$ be its complement, and let "+" denote "the test gives a positive result." Then we can write Bayes' Rule in odds form as $$\frac{P(D|+)}{P(D^c|+)}=\frac{P(+|D)}{P(+|D^c)} \frac{P(D)}{P(D^c)} $$
$$\frac{P(D|+)}{1-P(D|+)}= \frac{\text{Sensitivity}}{\text{False Positive}}\frac{P(D)}{1-P(D)} $$
From left to right, this reads: the posterior odds equal the Likelihood ratio times the prior odds. The term $P(D)/[1-P(D)]$ is the prior odds of having the disease, or $1/99$ in this example. Note that these are "odds for an event," in this case having the disease; gambling odds in Las Vegas terminology are "odds against." I think odds for an event are a little easier to understand, since then odds and probability are monotonically related.

The Likelihood ratio is not an odds ratio like the other two terms. That is, the first and third ratios are of the form $p/(1-p)$ and are just a different way of expressing probabilities. The middle term can be thought of as an amplification factor that turns the prior odds into posterior odds. If it is equal to $1$, a "+" result from the test provides no additional information; we hope for a large value of this ratio. In this example the ratio is $0.90/0.09 = 10$, so it converts our prior odds of $1/99$ into posterior odds of $10/99$. To convert the posterior odds into a probability, use $\dfrac{\text{posterior odds}}{1+ \text{posterior odds}}$, so we get $P(D|+)=10/109.$ We see that a positive test result raises the probability of disease from $0.01$ to about $0.092$.
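For readers who like to check the arithmetic, here is a minimal Python sketch of the odds-form calculation above (not part of the original answer; the variable names are my own):

```python
# Sketch of the odds-form calculation above (values from the problem statement).
prevalence = 0.01      # P(D), the base rate
sensitivity = 0.90     # P(+ | D)
false_positive = 0.09  # P(+ | D^c)

prior_odds = prevalence / (1 - prevalence)                    # 1/99
likelihood_ratio = sensitivity / false_positive               # 10
posterior_odds = likelihood_ratio * prior_odds                # 10/99
p_disease_given_pos = posterior_odds / (1 + posterior_odds)   # 10/109

print(prior_odds, likelihood_ratio, posterior_odds, p_disease_given_pos)
# roughly 0.0101, 10.0, 0.101, 0.0917
```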
We clearly see that if the test had higher Sensitivity or a lower False Positive rate, the Likelihood ratio would increase and we would wind up with higher posterior odds.
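To make that concrete, here is a small comparison holding the 1% prevalence fixed and varying the test characteristics; the alternative values are hypothetical, chosen only to illustrate the point:

```python
# Hypothetical variations on the test, to see how the posterior responds
# when Sensitivity rises or the False Positive rate falls (prevalence fixed at 1%).
def posterior_positive(prevalence, sensitivity, false_positive):
    prior_odds = prevalence / (1 - prevalence)
    posterior_odds = (sensitivity / false_positive) * prior_odds
    return posterior_odds / (1 + posterior_odds)

for sens, fp in [(0.90, 0.09), (0.99, 0.09), (0.90, 0.01)]:
    print(sens, fp, round(posterior_positive(0.01, sens, fp), 3))
# (0.90, 0.09) -> about 0.092  (the test in the example)
# (0.99, 0.09) -> about 0.100  (higher Sensitivity helps a little)
# (0.90, 0.01) -> about 0.476  (a lower False Positive rate helps far more at this prevalence)
```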
The other quantity of interest is $P(D|-),$ namely the chance that you have the disease although you got a negative test result. (Remember: negative results are good and positive results are bad in health screening.)
$$\frac{P(D|-)}{1-P(D|-)}= \frac{1-\text{Sensitivity}}{1-\text{False Positive}}\frac{P(D)}{1-P(D)} $$
This starts out at the same prior odds of $1/99$ as in the previous example and has a Likelihood ratio of $(1-0.90)/(1-0.09)=10/91\approx 1/9.$ The posterior odds of $10/9009\approx 1/901$ convert into $P(D|-)=10/9019\approx 0.0011.$ So you can relax if you get a negative result: the test result has lowered your disease probability from $0.01$ to about $0.001.$ Note that here we are starting with the same disease odds $P(D)/[1-P(D)]$, but since we are now looking at a "$-$" result, an informative test will reduce those odds, which is what the ratio of roughly $1/9$ does. If you prefer Likelihood ratios that are greater than $1$ for a good test, just take the reciprocal of each factor.
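Again, a quick sketch (my own, not from the original answer) confirming the negative-test numbers:

```python
# Same inputs, now conditioning on a negative test result.
prevalence = 0.01
sensitivity = 0.90
false_positive = 0.09

prior_odds = prevalence / (1 - prevalence)                           # 1/99
lr_negative = (1 - sensitivity) / (1 - false_positive)               # 10/91, roughly 1/9
posterior_odds_neg = lr_negative * prior_odds                        # 10/9009
p_disease_given_neg = posterior_odds_neg / (1 + posterior_odds_neg)  # 10/9019

print(p_disease_given_neg)  # about 0.0011
```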
I think this approach is desirable since it clearly separates the prior information from the information obtained from the test. It also shows Bayes' Rule as a factor that modifies the prior odds depending on how good the test is.
While this may look like an approach only a Bayesian statistician would use, it does not require a Bayesian statistical viewpoint.
Solution 2:
As a sometimes instructor of probability, I will take a shot at this.
Answer to (1): I think it is nonintuitive. Students may be in a hurry to learn a formula to get a homework or test problem right; their goal may not really be understanding. There are also many concepts that get learned for class and then forgotten. (This has been studied a lot in physics, where even something as basic as the force of gravity acting on a ball is not well understood.) Then later, in the heat of the moment, it is so much easier to think the test (for cancer) is 90% accurate, and so a positive test implies a 90% chance of cancer.
Answer to (2): One suggestion, which is also mentioned in the article you reference, is to use actual numbers. In the above situation, say there is a population of 100,000 women. Of these, 1$\%$, or 1000, have breast cancer. Of these 1000, 90$\%$, or 900, will test positive. Of the remaining 99,000 (cancer-free) women, 9$\%$, or 8910, will test (falsely) positive. We know we got a positive test result, so it comes from a pool of 9810 positive tests. But only 900 of these come from women who actually have cancer. So the chance of cancer is still only $\frac{900}{9810}\approx 0.092$.
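If it helps, the same counting argument can be written as a few lines of Python (a sketch using this answer's own population of 100,000; nothing here comes from the original article):

```python
# Natural-frequency count for a population of 100,000 women.
population = 100_000
with_cancer = round(0.01 * population)               # 1,000 have breast cancer
true_positives = round(0.90 * with_cancer)           # 900 of them test positive
without_cancer = population - with_cancer            # 99,000 are cancer-free
false_positives = round(0.09 * without_cancer)       # 8,910 of them test (falsely) positive

all_positives = true_positives + false_positives     # 9,810 positive tests in total
print(true_positives / all_positives)                # 900/9810, about 0.092
```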
The hope is that seeing the actual numbers is more convincing and gives a better understanding of what is going on in Bayes' Theorem.