A classic example used to explain Bayes' theorem is the disease-testing example.
p(A = 1) = Have disease = 0.01
p(A = 0) = Not have disease = 0.99
p(T = 1 | A = 1) = Test is positive given have disease (sensitivity) = 0.95
p(T = 1 | A = 0) = Test is positive given not have disease (false positive rate) = 0.05
$$\frac{p(T = 1 | A = 1) * p(A = 1)}{p(T = 1 | A = 1) * p(A = 1) + p(T = 1 | A = 0) * p(A = 0)}$$
which works out to be
$$\frac{0.95 * 0.01}{0.95 * 0.01 + 0.05 * 0.99} = \frac{0.0095}{0.0095 + 0.0495} = \frac{0.0095}{0.059} \approx 0.16$$
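The arithmetic above can be checked with a short script (the variable names are my own, not standard notation):

```python
# Posterior probability of disease given a positive test, via Bayes' theorem,
# using the values from the example above.
p_disease = 0.01            # p(A = 1), prevalence
p_pos_given_disease = 0.95  # p(T = 1 | A = 1), sensitivity
p_pos_given_healthy = 0.05  # p(T = 1 | A = 0), false positive rate

numerator = p_pos_given_disease * p_disease
denominator = numerator + p_pos_given_healthy * (1 - p_disease)
posterior = numerator / denominator

print(round(posterior, 2))  # 0.16
```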
However, one thing I am having trouble finding is an explanation of how p(T = 1 | A = 1) is calculated in the first place. How does one determine the effectiveness (sensitivity) of the test?
I was thinking of brute-forcing the value by using sample data and adjusting p(T = 1 | A = 1) until I obtained the maximum accuracy possible, but there has to be a more appropriate method.
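One way the "sample data" idea can be sketched: if you had a validation study where the true disease status of each subject is known (e.g. from a gold-standard diagnosis), the fraction of diseased subjects who test positive is a direct estimate of p(T = 1 | A = 1). The data below are simulated purely for illustration; the `true_sensitivity` value is a made-up assumption:

```python
import random

random.seed(0)

# Hypothetical validation study: every subject here is known to have the
# disease, and we record whether the test came back positive. The data
# are simulated; in practice they would come from a real study.
true_sensitivity = 0.95
subjects_with_disease = 10_000
test_results = [random.random() < true_sensitivity
                for _ in range(subjects_with_disease)]

# Estimate of p(T = 1 | A = 1): the fraction of diseased subjects
# whose test came back positive.
estimated_sensitivity = sum(test_results) / subjects_with_disease
print(estimated_sensitivity)  # close to 0.95
```

With enough subjects the estimate converges on the true sensitivity, which avoids the need to tune the value by hand.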
Another idea was to flip the calculation and instead compute the probability that the test is correct.
p(T = 1) = Test is correct = 0.5
p(T = 0) = Test is not correct = 0.5
p(A = 1 | T = 1) = Person has disease given test is positive = 1
p(A = 0 | T = 0) = Person has no disease given test is negative = 0
but I run into a logical roadblock here, as disease status is binary (you either have it or you don't), which results in the following:
$$\frac{p(A = 1 | T = 1) * p(T = 1)}{p(A = 1 | T = 1) * p(T = 1) + p(A = 0 | T = 0) * p(T = 0)} = \frac{0.5 * 1}{0.5 * 1 + 0.5 * 0} = 1 $$
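A quick numeric check confirms that, with those values plugged in, this flipped formula always evaluates to 1, which is the dead end described above:

```python
# The flipped calculation from above, using the same (questionable) values.
p_correct = 0.5             # p(T = 1)
p_wrong = 0.5               # p(T = 0)
p_disease_given_pos = 1.0   # p(A = 1 | T = 1)
p_healthy_given_neg = 0.0   # p(A = 0 | T = 0)

result = (p_disease_given_pos * p_correct) / (
    p_disease_given_pos * p_correct + p_healthy_given_neg * p_wrong)
print(result)  # 1.0
```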