What does the notation $P(B) = \sum\limits_{A} P(B | A=True) P(A=True)$ mean?


In the book Risk Assessment and Decision Analysis with Bayesian Networks by Fenton & Neil (Second Edition), I repeatedly find a particular kind of notation in which the authors sum over all values of a variable whose value is already known.

E.g., on page 161, Box 7.2, using an example with two binary variables $A$ and $B$, the authors restate the definition of the marginal

$$ P(B) = \sum\limits_{A} P(B | A) P(A) $$

and then introduce the information $A=True$, which leads to

$$ P(B) = \sum\limits_{A} P(B | A=True) P(A=True) $$

I have some trouble understanding what this sum means. My best guess is that $P(A=True)$, when summed over all possible values of $A$, acts as a kind of indicator function that is $1$ for the value $True$ and $0$ for $False$. In that case I don't understand why there is a sum at all, but fine.
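To make this concrete, here is a small sketch of the standard marginalization with made-up numbers (my own example, not from the book). Under my indicator-function reading, conditioning on $A=True$ would simply collapse the sum to the single term $P(B \mid A=True)$:

```python
# Made-up distributions for two binary variables A and B (not the book's numbers).
P_A = {True: 0.3, False: 0.7}            # P(A)
P_B_given_A = {True: 0.9, False: 0.2}    # P(B=True | A)

# Standard marginalization: P(B=True) = sum over A of P(B=True | A) * P(A)
P_B = sum(P_B_given_A[a] * P_A[a] for a in (True, False))
print(P_B)  # 0.9*0.3 + 0.2*0.7 = 0.41

# My indicator-function reading of the book's formula: once A=True is known,
# the sum should collapse to a single term, i.e. just P(B=True | A=True).
P_B_after_evidence = P_B_given_A[True]
print(P_B_after_evidence)  # 0.9 -- so why keep the sum?
```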

The problem is that I don't see the interpretation as an indicator function consistent with some other statements in the book.

For example, in the same box the authors write

$$ P(B|A=True) = \sum\limits_{A} \frac{P(B, A=True)}{P(A=True)}, $$

which makes no sense if $P(A=True)$, summed over $A$, is supposed to act as an indicator.

My question is whether

$$ P(B) = \sum\limits_{A} P(B | A=True) P(A=True) $$

is standard notation and, if so, what this sum means.