A while ago I learned about Bayes' theorem. Among many other things, it lets one compute the probability that a healthy person gets a (false) positive drug test result, i.e. the probability of some event like "getting a positive drug test result" (call it $A$) happening given that some other event like "being a healthy person" (call it $B$) has already happened; in standard notation this is $P(A|B)$.
Now I am trying to take the next obvious step. What is the probability that a healthy person gets a positive drug test result twice? In other words: what is the probability that a false positive repeats? It seems there are two different ways I could approach the problem:
Let $A_1$ mean that the person got a positive drug test result on the first iteration, and let $A_2$ stand for the second one. Then I could ask for $P(A_1 \cap A_2 | B)$, where $A_1 \cap A_2$ means "events $A_1$ and $A_2$ both happened". To my understanding, this formulation implies treating $A_1$ and $A_2$ as independent of each other, and that is crucial. It does not seem to be quite what I was initially looking for; a better way to pose the question would be:
$P(A_2 | (A_1 | B))$: what is the probability of $A_2$ happening given that $A_1$ is assumed to have happened under $B$? Expanding $A_1$, $A_2$ and $B$ to their original meanings, this asks: "what is the probability that a healthy person ($B$) gets a second false positive drug test result ($A_2$) given that he or she already got one ($A_1$)?". That seems to be exactly the right way to define the problem. Also, as a side note, it states an implicit dependency between the two false positives ($A_1$, $A_2$) through $B$.
What I am looking for is confirmation that my way of thinking is correct, or an explanation of why it is not. Thanks in advance.
The positive/negative test result given a healthy/sick patient is a classic example used when introducing Bayes' theorem. Another popular one is the probability of defective products in a batch. As always with these examples, you have to make simplifying assumptions that don't always correspond to reality.
In your case, the mathematically correct approach is the first one. As others have noted, the second formulation is ill-posed: $(A_1 | B)$ is not itself an event, so you cannot condition on it. Now, whether or not you simplify your problem as expressed in your first formulation by assuming conditional independence is entirely up to you, and will ultimately determine the predictive power of your model.
It seems that you are interested in the probability of the first and second tests both being positive given that the patient is actually healthy, i.e. $P(T_1 \cap T_2|H)$. You could assume that $T_1$ and $T_2$ are independent given $H$, in which case the above becomes $P(T_1|H)P(T_2|H)$. But as you pointed out, this might not correspond to your intuition about the problem.
Hence, you would have to look for other ways to model this distribution: collect empirical data and try to fit the joint data-generating distribution. This starts to enter the realm of mathematical modelling and statistical learning.
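As a sketch of why the joint distribution may fail to factorize: suppose some healthy people carry a trait (say, a cross-reacting substance, entirely hypothetical) that raises the false-positive rate of both tests for that person. All rates below are invented for illustration.

```python
import random

random.seed(1)

TRAIT_PROB = 0.10   # hypothetical fraction of healthy people with the trait
FP_WITH = 0.40      # per-test false-positive rate if the trait is present (invented)
FP_WITHOUT = 0.01   # per-test false-positive rate otherwise (invented)
N = 500_000

n_t1 = n_t2 = n_both = 0
for _ in range(N):
    # The same patient-specific rate drives both tests, linking them.
    fp = FP_WITH if random.random() < TRAIT_PROB else FP_WITHOUT
    t1 = random.random() < fp
    t2 = random.random() < fp
    n_t1 += t1
    n_t2 += t2
    n_both += t1 and t2

p_t1 = n_t1 / N        # estimate of P(T1|H)
p_t2 = n_t2 / N        # estimate of P(T2|H)
p_joint = n_both / N   # estimate of P(T1 ∩ T2 | H)
print(p_joint, p_t1 * p_t2)
```

Here the joint probability is far larger than the product of the marginals (analytically, about $0.016$ versus about $0.0024$): one false positive makes a second one more likely, because both point to the same underlying trait. This is the kind of dependence that only fitting the joint distribution would capture.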
Note that you can still apply Bayes' theorem without the conditional independence assumption. It would look like:
$$ P(T_1 \cap T_2 | H) = \frac{P(H|T_1 \cap T_2)P(T_1 \cap T_2)}{P(H)} $$
Depending on whether you have information about this likelihood and prior, this could still be useful.
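As a worked instance of that formula, with made-up numbers (none of these come from the question; they only show the arithmetic):

```python
# All inputs are invented for illustration.
p_h = 0.95               # prior P(H): probability a random patient is healthy
p_both = 0.02            # P(T1 ∩ T2): probability both tests come back positive
p_h_given_both = 0.10    # P(H | T1 ∩ T2): chance the patient is healthy despite two positives

# Bayes' theorem rearranged, exactly as in the displayed formula:
# P(T1 ∩ T2 | H) = P(H | T1 ∩ T2) * P(T1 ∩ T2) / P(H)
p_both_given_h = p_h_given_both * p_both / p_h
print(round(p_both_given_h, 6))  # → 0.002105
```

The point is that $P(H | T_1 \cap T_2)$ and $P(T_1 \cap T_2)$ may be easier to estimate from data (e.g. follow-up diagnoses and raw test records) than the conditional joint distribution itself.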