Type I and Type II error: conditional vs. joint probability


Suppose that a transmitter sends an electronic signal $X$ that is either $0$ or $1$, each with equal probability. The signal is corrupted by additive Gaussian noise with mean $0$ and variance $\sigma^{2}$, and is then captured by the receiver.

So $Y = X + \epsilon$ where $Y$ is the received signal, $X$ is the transmitted signal, and $\epsilon \sim \mathcal{N}(0,\sigma^{2})$.

Suppose we have a decision rule that declares the transmitted signal to be $1$ if $Y > \lambda$ and $0$ otherwise. That is, $d(Y) = 1$ if $Y > \lambda$ and $d(Y) = 0$ otherwise. We wish to find the value of $\lambda$ that minimizes the error probability.
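For concreteness, since $Y = X + \epsilon$, each conditional error event under this rule reduces to a statement about the noise alone (writing $\Phi$ for the standard normal CDF):

$$P(d(Y) = 1 \mid X = 0) = P(\epsilon > \lambda) = 1 - \Phi\!\left(\frac{\lambda}{\sigma}\right),$$
$$P(d(Y) = 0 \mid X = 1) = P(1 + \epsilon \le \lambda) = \Phi\!\left(\frac{\lambda - 1}{\sigma}\right).$$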

I know that the error probability is defined in the following way: $P(\text{error}) = P(d(Y) = 1 \cap X = 0) + P(d(Y) = 0 \cap X = 1)$. The two terms correspond to the type I and type II errors.
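The joint-probability definition above can be checked numerically. Below is a quick Monte Carlo sketch (a sketch of mine, not part of the original question; it assumes $\sigma = 1$, and all names are my own) that estimates the joint-probability error over a grid of thresholds and locates the minimizing $\lambda$, which by symmetry of the equal-prior setup should sit near $1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0        # assumed noise standard deviation
n = 200_000        # number of simulated transmissions

# Transmitter and channel: X uniform on {0, 1}, Y = X + Gaussian noise.
X = rng.integers(0, 2, size=n)
Y = X + rng.normal(0.0, sigma, size=n)

def error_prob(lam):
    """Empirical P(d(Y)=1, X=0) + P(d(Y)=0, X=1) -- joint, not conditional."""
    d = (Y > lam).astype(int)
    type1 = np.mean((d == 1) & (X == 0))  # averaging over ALL n trials
    type2 = np.mean((d == 0) & (X == 1))  # bakes in the priors P(X=0), P(X=1)
    return type1 + type2

# Scan a grid of thresholds for the empirical minimizer.
lams = np.linspace(-1.0, 2.0, 61)
best = min(lams, key=error_prob)
```

Note that `np.mean` divides by all $n$ trials, not just those with $X = 0$ (or $X = 1$), which is exactly what makes these joint rather than conditional probabilities.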

My question is: why can't we use the conditional probabilities to define the error instead, i.e. $P(\text{error}) = P(d(Y) = 1 \mid X = 0) + P(d(Y) = 0 \mid X = 1)$?

Why is the first expression the correct one? What is the intuitive reason for it?