When we talk about conditional probability, do we usually update the sigma-algebra too?


In Casella and Berger's textbook Statistical Inference Definition 1.3.2, when talking about conditional probability $P(A|B)=\frac{P(A\cap B)}{P(B)}$, $A$ and $B$ are assumed to be two events in the original sample space $S$, and thus they are both elements of the original sigma algebra $\mathcal{B}$. My question is why we do not restrict $A$ to be a set that is an element of a new sigma algebra containing only subsets of $B$? Given that we have updated our sample space from $S$ to $B$, this seems natural.


Best answer

What you're suggesting is effectively to define $\ P(A\,|\,B)\ $ only for sets $\ A\ $ lying in the trace sigma algebra $\ \{\,E\cap B \mid E\in\mathcal{B}\,\}\ $ (writing $E$ for the bound variable, since $S$ already denotes the sample space). One reason why Casella and Berger would not do this is that it would vitiate some of the applications of conditional probability they make later in the book—most notably Bayesian hypothesis testing in Section $8.2.2$ (p. $379$ of the second edition).
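On a finite sample space the idea is easy to make concrete: the trace sigma algebra $\{\,E\cap B \mid E\in\mathcal{B}\,\}$ is exactly the collection of subsets of $B$. A minimal sketch (the die-roll sample space and the event $B$ here are hypothetical choices for illustration):

```python
from itertools import combinations

def powerset(s):
    """All subsets of a finite set, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Hypothetical example: one roll of a die. On a finite set the
# natural sigma-algebra is the full power set.
S = frozenset({1, 2, 3, 4, 5, 6})   # sample space
B = frozenset({2, 4, 6})            # conditioning event: "even roll"

sigma = set(powerset(S))            # sigma-algebra on S
trace = {E & B for E in sigma}      # {E ∩ B : E in sigma}

# The trace is precisely the power set of B, i.e. a sigma-algebra on B.
assert trace == set(powerset(B))
```

So restricting $A$ to the trace sigma algebra would amount to allowing only subsets of $B$ as the first argument of $P(\,\cdot\,|\,B)$.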

Bayesian hypothesis testing relies on a method of updating the relative probabilities of various hypotheses $\ H_1, H_2,\dots\ $, regarded as events in the sigma algebra $\ \mathcal{B}\ $, in the light of the fact that some other event $\ X\ $ in $\ \mathcal{B}\ $ is known to have occurred. The formula for the updated probability of $\ H_i\ $ relative to that of $\ H_j\ $ is $$ \frac{P\big(\,H_i\,\big|\,X\big)}{P\big(\,H_j\,\big|\,X\big)}=\frac{P\big(X\,\big|\,H_i\big)P\big(H_i\big)}{P\big(\,X\,\big|\,H_j\big)P\big(H_j\big)}\ , $$ which follows from the almost trivial fact that both sides of the identity are equal to $\ \frac{P(H_i\cap X)}{P(H_j\cap X)}\ $. If this formula is to be of any use, however, the first argument of the conditional probabilities must not be restricted in the way you have suggested: the hypotheses $\ H_i\ $ are in general not subsets of the observed event $\ X\ $, nor is $\ X\ $ a subset of any $\ H_i\ $, so under your restriction neither side of the identity would even be defined.
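The identity is straightforward to check numerically on a finite probability space. In this sketch the outcome weights and the events $H_1$, $H_2$, $X$ are hypothetical choices; exact rational arithmetic makes the equality exact rather than approximate. Note that neither $H_1$ nor $H_2$ is a subset of $X$, which is precisely why the unrestricted definition is needed:

```python
from fractions import Fraction

# Hypothetical toy probability space: six equally likely outcomes.
P = {w: Fraction(1, 6) for w in range(1, 7)}

def prob(event):
    """P(event) as a sum of outcome weights."""
    return sum(P[w] for w in event)

def cond(A, B):
    """Conditional probability P(A | B) = P(A ∩ B) / P(B)."""
    return prob(A & B) / prob(B)

H1 = {1, 2, 3}   # hypothetical hypotheses (disjoint events in B)
H2 = {4, 5}
X  = {2, 3, 4}   # hypothetical observed event; H1, H2 are not subsets of X

# Posterior odds equal likelihood ratio times prior odds,
# and both sides reduce to P(H1 ∩ X) / P(H2 ∩ X).
lhs = cond(H1, X) / cond(H2, X)
rhs = cond(X, H1) * prob(H1) / (cond(X, H2) * prob(H2))
assert lhs == rhs == prob(H1 & X) / prob(H2 & X)
```

Under the suggested restriction, `cond(H1, X)` would be disallowed because `H1` lies outside the trace sigma algebra of `X`, and the left-hand side could not even be written down.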