Total probability on a vector of Bernoulli random variables


My question concerns text following Equation 3.4.8 in Varshney (pg. 76):

Varshney, P. K. (2012). Distributed detection and data fusion. Springer Science & Business Media.

Seems trivial, but I don't understand a crucial detail:

Definitions:

Let $\mathbf{u} = [u_{1}, u_{2},\cdots, u_{N}]$, where $u_{k} \in \{0,1\}$ for $1\le k \le N$. That is, my notation states that $u_{k}$ is a binary random variable. It takes the value $1$ with probability $p_{k}$, but that won't matter to this question:

Let: $\mathbf{u}^{k}$ $=$ $[u_{1}, u_{2},\cdots, u_{k-1},u_{k+1}, \cdots, u_{N} ]$. That is, $\mathbf{u}^{k}$ is $\mathbf{u}$ with element $k$ missing.

Now let: $\mathbf{u}^{kj}$ $=$ $[u_{1}, u_{2},\cdots, u_{k-1}, u_{k} = j, u_{k+1}, \cdots, u_{N} ]$. That is, $\mathbf{u}^{kj}$ is $\mathbf{u}$, with $u_{k} = j$, where $j$ can only be one or zero.

Question:

Why is it that: $\text{Pr}\left( \mathbf{u}^{k0} \right)$ $=$ $\text{Pr}\left( \mathbf{u}^{k} \right)$ $-$ $\text{Pr}\left( \mathbf{u}^{k1} \right)$ ?

in which $\text{Pr}$ denotes probability. It appears that $u_{k}$ is marginalized out in $\text{Pr}\left( \mathbf{u}^{k} \right)$. Is this just a statement of the law of total probability? That would apply if the events $\mathbf{u}^{k0}$ and $\mathbf{u}^{k1}$ partition the sample space, but it is unclear to me whether they do.
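As a sanity check (not from Varshney), here is a short Python sketch. The $p_{k}$ values, the choice of $k$, and the realization of $\mathbf{u}^{k}$ are all hypothetical, and the components are assumed independent purely so the joint PMF is easy to write down; the identity itself does not require independence.

```python
import math

# Hypothetical setup: independent Bernoulli bits u_1..u_N with
# p_k = Pr(u_k = 1). These numbers are illustrative, not from the text.
p = [0.3, 0.6, 0.8, 0.5]
N = len(p)
k = 1                   # element being marginalized out (0-indexed here)
rest = [1, 0, 1]        # a fixed realization of u^k (the other N-1 bits)

def pr_full(bits):
    # Joint PMF of the full vector u, under the independence assumption.
    return math.prod(pk if b == 1 else 1 - pk for pk, b in zip(p, bits))

def with_uk(rest, k, j):
    # Build u^{kj}: the realization `rest` with u_k = j spliced back in.
    return rest[:k] + [j] + rest[k:]

pr_uk0 = pr_full(with_uk(rest, k, 0))            # Pr(u^{k0})
pr_uk1 = pr_full(with_uk(rest, k, 1))            # Pr(u^{k1})
pr_uk = pr_uk0 + pr_uk1                          # Pr(u^k), u_k summed out

# The identity in question: Pr(u^{k0}) = Pr(u^k) - Pr(u^{k1}).
assert abs(pr_uk0 - (pr_uk - pr_uk1)) < 1e-12
```

The marginal $\text{Pr}(\mathbf{u}^{k})$ is computed here exactly as the sum over the two values of $u_{k}$, which is what makes the subtraction identity hold term by term.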


There are 2 best solutions below

Answer 1:

Yes, that is the law of total probability, since $u_k$ can take only two values: the events $\{u_k = 0\}$ and $\{u_k = 1\}$ partition the sample space, so $\text{Pr}\left( \mathbf{u}^{k} \right) = \text{Pr}\left( \mathbf{u}^{k0} \right) + \text{Pr}\left( \mathbf{u}^{k1} \right)$. If you treat the sample space as generated by $\sigma(\mathbf{u})$, any marginal event is contained in this cylinder $\sigma$-algebra.

Answer 2:

I thank Green Tea for their response. I concede that I misapplied the law of total probability: I had mistaken the marginal sum to be:

$\text{Pr}\left( \mathbf{u}^{k}\right)$ $=$ $\text{Pr}\left( \mathbf{u}^{k0}\right) \cdot \text{Pr}\left( u_{k} = 0 \right)$ + $\text{Pr}\left( \mathbf{u}^{k1}\right) \cdot \text{Pr}\left( u_{k} = 1 \right)$.

If that were true, then the total probability would be:

$\text{Pr}\left( \mathbf{u}^{k}\right)$ $=$ $\text{Pr}\left( \mathbf{u}^{k0}\right) \cdot (1-p_{k})$ $+$ $\text{Pr}\left( \mathbf{u}^{k1}\right) \cdot p_{k}$,

which is different from what appears in Varshney. The erroneous reasoning I just applied treats $\text{Pr}\left( \mathbf{u}^{k0}\right)$ as if it were a conditional PMF, $\text{Pr}\left( \mathbf{u}^{k} \mid u_{k} = 0 \right)$, when it is in fact a joint PMF with $u_{k}$ fixed, so multiplying by $\text{Pr}\left( u_{k} = 0 \right)$ double-counts $u_{k}$. What I should use is the expression for the marginal PMF:

$\text{Pr}\left( \mathbf{u}^{k}\right)$ $=$ $\displaystyle \sum_{j}$ $\text{Pr}\left( \mathbf{u}^{kj}\right)$.
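The discrepancy between the two formulas can be seen numerically. This sketch (with hypothetical $p_{k}$ values and an assumed independence model, neither of which is from Varshney) computes the correct marginal sum alongside the erroneous weighted sum, and the two disagree:

```python
# Hypothetical independent Bernoulli bits with p_k = Pr(u_k = 1).
p = [0.3, 0.6, 0.8]
k = 1                   # element being marginalized (0-indexed)
rest = [1, 1]           # fixed realization of u^k, i.e. (u_1, u_3)

def joint(bits):
    # Joint PMF of the full vector under the independence assumption.
    out = 1.0
    for pk, b in zip(p, bits):
        out *= pk if b == 1 else 1 - pk
    return out

# Pr(u^{kj}) for j = 0, 1: the joint PMF with u_k = j spliced in.
pr_ukj = {j: joint(rest[:k] + [j] + rest[k:]) for j in (0, 1)}

# Correct: Pr(u^k) = sum_j Pr(u^{kj})
marginal = pr_ukj[0] + pr_ukj[1]

# Erroneous: weighting joint PMFs by Pr(u_k = j) double-counts u_k.
erroneous = pr_ukj[0] * (1 - p[k]) + pr_ukj[1] * p[k]

print(marginal, erroneous)   # the two values disagree
```

With these numbers the marginal is $0.3 \cdot 0.8 = 0.24$, matching the factored-out bits directly, while the erroneous sum is strictly smaller.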

I provide this answer in case my own justification for my initial, erroneous reasoning is itself incorrect.