Bayesian Updating - plug in previous posterior for prior?


Let's say I have two sequences of observations, $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$. For each sequence I'm going to estimate the probabilities of certain events occurring, namely event $A$ in $(a_1,\ldots,a_n)$ and event $B$ in $(b_1,\ldots,b_n)$. For example, $P(A) = \frac{\#\text{observations that are in $A$}}{n}$ and $P(B) = \frac{\#\text{observations that are in $B$}}{n}$.

With these probabilities I also want to estimate the conditional probability using $$ P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{\frac{\#\text{observations that are in $A$ and $B$}}{n}}{\frac{\#\text{observations that are in $B$}}{n}}. \qquad (1) $$
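As a sanity check on the frequency-based estimate, here is a minimal sketch in Python. The data and the membership tests for $A$ and $B$ are invented for illustration (they are not part of the question); the point is only that the conditional probability is a ratio of two empirical counts over the same $n$:

```python
import random

# Hypothetical paired observations; the thresholds defining membership
# in A and B are arbitrary choices for this toy example.
random.seed(0)
pairs = [(random.random(), random.random()) for _ in range(1000)]
in_A = [a < 0.5 for a, _ in pairs]
in_B = [b < 0.5 for _, b in pairs]

n = len(pairs)
p_B = sum(in_B) / n                                      # P(B)
p_A_and_B = sum(a and b for a, b in zip(in_A, in_B)) / n  # P(A ∩ B)
p_A_given_B = p_A_and_B / p_B                             # definition (1)
```

Note that the factors of $n$ cancel, so `p_A_given_B` is just the count of joint occurrences divided by the count of occurrences of $B$.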

Now let's say I receive updated sequences $(a_1,\ldots,a_n, a_{n+1})$ and $(b_1,\ldots,b_n, b_{n+1})$, and with this new information I'd again like to compute $P(A^*|B^*)$ where $A^*$ occurs in $(a_1,\ldots,a_n, a_{n+1})$ and $B^*$ occurs in $(b_1,\ldots,b_n, b_{n+1})$.

I could again compute $P(A^*|B^*)$ as $$ P(A^*|B^*) = \frac{\frac{\#\text{observations that are in $A^*$ and $B^*$}}{n+1}}{\frac{\#\text{observations that are in $B^*$}}{n+1}}, \qquad (2) $$ or use Bayes' rule, $$ P(A^*|B^*) = \frac{P(B^*|A^*)P(A^*)}{P(B^*)}, $$ which of course would result in the same answer as just using the definition of conditional probability $(2)$.
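The algebraic equivalence of $(2)$ and Bayes' rule can be checked numerically. The counts below are made-up placeholders, and the assumption that observation $n+1$ falls in both $A^*$ and $B^*$ is arbitrary; any other choice gives the same agreement between the two routes:

```python
# Toy counts after n observations (assumed values, for illustration only).
n = 100
count_B = 40
count_A = 25
count_A_and_B = 10

# Suppose (arbitrarily) that observation n+1 lies in both A* and B*.
n += 1
count_A += 1
count_B += 1
count_A_and_B += 1

# Route 1: definition of conditional probability, eq. (2).
p_direct = (count_A_and_B / n) / (count_B / n)

# Route 2: Bayes' rule with the same empirical frequencies.
p_A = count_A / n
p_B = count_B / n
p_B_given_A = (count_A_and_B / n) / p_A
p_bayes = p_B_given_A * p_A / p_B
```

Both routes reduce to `count_A_and_B / count_B`, so they agree exactly, which is the point being made: Bayes' rule applied to the same empirical frequencies adds no new information.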

On the other hand, I've read about "Bayesian updating", but my best guess at how to use it would be $$ P(A^*|B^*) = \frac{P(B^*|A^*)P(A|B)}{P(B^*)}, \qquad (3) $$ where $P(A|B)$ was computed using $(1)$. In words, I'm plugging in my previous posterior as the new prior. However, I don't think $(3)$ is mathematically true, so what is the proper way to do "Bayesian updating" here?