I've been studying probability to develop a more intuitive sense of calculating probabilities as a medical practitioner. One example that came up when discussing the importance of prior probabilities was HIV testing. In *The Laws of Medicine*, Siddhartha Mukherjee poses the problem of calculating the probability that someone has HIV given that they have a positive ELISA test. Here's what we know:
The prevalence of HIV in the population: $P(A) = \frac{1}{1000}$
The false-positive rate, i.e. the probability of a positive test result ($B$) given that the person doesn't have HIV ($A'$): $P(B|A') = \frac{1}{1000}$
We want to find the probability that someone has HIV given that they have a positive test result: $P(A|B)$.
My first thought is that we don't have enough information to solve this. Here's the math to back this up:
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)P(B|A)}{P(B)} = \frac{P(A)P(B|A)}{P(A)P(B|A) + P(A')P(B|A')}$$
From the information we're given, there's no way of finding $P(B|A)$ (the test's sensitivity). So based on the information in the book, there's no way of calculating $P(A|B)$. Is this right? If not, I'd like to find a pathway towards calculating $P(B|A)$ from the information I'm given. The big thing for me is to understand intuitively when a probability isn't computable and, when it is, whether there's a simple/intuitive way to compute it. Thanks!
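To see concretely that the answer really does depend on the unknown sensitivity $P(B|A)$, here is a small sketch that plugs the given numbers into Bayes' theorem for a few hypothetical sensitivity values (the sensitivities swept over are made up purely for illustration; only the prevalence and false-positive rate come from the book):

```python
# P(A): prevalence of HIV, and P(B|A'): false-positive rate (both from the problem)
p_a = 1 / 1000
p_b_given_not_a = 1 / 1000

def posterior(sensitivity):
    """P(A|B) via Bayes' theorem, with P(B) expanded by total probability."""
    p_b = p_a * sensitivity + (1 - p_a) * p_b_given_not_a
    return p_a * sensitivity / p_b

# Hypothetical sensitivities, just to show the posterior is not pinned down:
for s in (1.0, 0.9, 0.5):
    print(f"P(B|A) = {s:.1f}  ->  P(A|B) = {posterior(s):.3f}")
```

The posterior moves with the assumed sensitivity, so without $P(B|A)$ the problem is genuinely underdetermined.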
Rough intuition: imagine a population of 1000 people that perfectly reflects the statistics. Then you'll get one positive test from the one infected person (assuming false negatives are negligible) and, on average, one false positive from the 999 people who are HIV-free. So the probability that a positive test indicates real infection is about 1/2.
I think this method (sometimes called "natural frequencies") offers more insight than mechanically applying Bayes' theorem to conditional probabilities.
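The natural-frequencies argument above can be written out directly. This sketch assumes, as the book implicitly does, a perfect sensitivity of $P(B|A) = 1$ (no false negatives):

```python
# A population of 1000 people that perfectly reflects the stated rates.
population = 1000
infected = population // 1000                # 1 person in 1000 has HIV
true_positives = infected                     # assumed P(B|A) = 1: every case is caught
false_positives = (population - infected) * (1 / 1000)  # ~1 of the 999 HIV-free people

p_hiv_given_positive = true_positives / (true_positives + false_positives)
print(p_hiv_given_positive)  # just over 0.5
```

Counting people instead of manipulating conditional probabilities makes the hidden assumption (no false negatives) explicit, and makes it obvious why the answer hovers near 1/2.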
Read http://opinionator.blogs.nytimes.com/2010/04/25/chances-are/ to see how often medical practitioners get this wrong.