Assume controls $H_0 \sim \mathcal{N}(\mu_0, \sigma_0)$ and cases $H_1 \sim \mathcal{N}(\mu_1, \sigma_1)$, where $\mu_0 < \mu_1$ and $\sigma_0 = \sigma_1$. Let $c_1$ be the only intersection point of the two densities, i.e.,
$$f_{H_0}(c_1) - f_{H_1}(c_1) = 0$$ Figure 1 shows an example of the bi-normal curves.
Let $\alpha$ be the area of the $H_0$-distribution to the right of a decision threshold (False Positive errors) and $\beta$ be the area of the $H_1$-distribution to the left of the decision threshold (False Negative errors).
Let $c$ be the point where the minimum of $(\alpha + \beta)$ is reached: $$\min(\alpha + \beta)$$ $$=\min\left[\left(1 - P_{H_0}(H_0 < c)\right) + P_{H_1}(H_1 < c)\right]$$ $$=\min\left[\int_{-\infty}^{c} f_{H_1}(t)\,dt - \int_{-\infty}^{c} f_{H_0}(t)\,dt + 1\right]$$
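Since both distributions are normal, the same objective can be written in closed form with the standard normal CDF $\Phi$ (the same quantities as above, just made explicit):

$$\alpha + \beta = \left[1 - \Phi\!\left(\frac{c-\mu_0}{\sigma_0}\right)\right] + \Phi\!\left(\frac{c-\mu_1}{\sigma_1}\right)$$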
Simulations in R showed that $c = c_1$ (or at least that the two are very close). How can one prove (or disprove) that $c = c_1$?
N.B. Schisterman, E. F., Perkins, N. J., Liu, A., & Bondell, H. (2005). Optimal cut-point and its corresponding Youden Index to discriminate individuals using pooled blood samples. Epidemiology, pages 73–81, gave a proof for a closely related problem, which I cannot reproduce (that is, I fail to understand it).
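The simulation described above can be sketched as follows (a Python equivalent of the R check, using only the standard library; the parameter values $\mu_0 = 0$, $\mu_1 = 2$, $\sigma = 1$ are illustrative assumptions, not from the original post):

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu0, mu1, sigma = 0.0, 2.0, 1.0    # assumed example parameters
c1 = (mu0 + mu1) / 2.0             # with equal variances the densities cross at the midpoint

def alpha_plus_beta(c):
    alpha = 1.0 - Phi((c - mu0) / sigma)  # area of H0 to the right of c
    beta = Phi((c - mu1) / sigma)         # area of H1 to the left of c
    return alpha + beta

# Grid search for the minimiser over [-4, 6]
grid = [-4.0 + 0.001 * i for i in range(10001)]
c_min = min(grid, key=alpha_plus_beta)
print(c_min, c1)  # c_min should be numerically close to c1
```

The grid minimiser lands on the intersection point up to the grid resolution, matching the simulation result reported above.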
In the case of a single threshold:
If $c < c_1$, then $c \in (-\infty, c_1)$ and
$$\int_{-\infty}^{c_1} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt = \int_{-\infty}^{c} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt + \int_{c}^{c_1} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt$$ $$\int_{-\infty}^{c} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt = \int_{-\infty}^{c_1} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt - \int_{c}^{c_1} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt$$ For values $t < c_1$, $f_{H_0}(t) > f_{H_1}(t) \Rightarrow -\int_{c}^{c_1} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt > 0$
Therefore, every $c < c_1$ gives a strictly larger value than $c_1$, so the minimum cannot be attained to the left of $c_1$.
Likewise, if $c > c_1$, then $c_1 \in (-\infty, c)$ and $$\int_{-\infty}^{c} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt = \int_{-\infty}^{c_1} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt + \int_{c_1}^{c} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt$$
For values $t > c_1$, $f_{H_1}(t) > f_{H_0}(t) \Rightarrow \int_{c_1}^{c} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt > 0$, so every $c > c_1$ also gives a strictly larger value than $c_1$.
Therefore, $\min_c \int_{-\infty}^{c} \left[f_{H_1}(t) - f_{H_0}(t)\right]dt$ is reached when $c = c_1$.
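An alternative one-line check (differentiating the same objective as above with respect to $c$, by the fundamental theorem of calculus):

$$\frac{d}{dc}(\alpha + \beta) = \frac{d}{dc}\left[\int_{-\infty}^{c} f_{H_1}(t)\,dt - \int_{-\infty}^{c} f_{H_0}(t)\,dt + 1\right] = f_{H_1}(c) - f_{H_0}(c)$$

This derivative is zero exactly where the densities intersect, negative for $c < c_1$, and positive for $c > c_1$, so $c_1$ is the unique minimiser.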
Any comment is welcome.