Trying to solve the following question:
Suppose that under $H_0$, a measurement $X$ is $N(0,\sigma^2)$; under $H_1$, $X$ is $N(1,\sigma^2)$; and the prior probabilities satisfy $P(H_0) = 2\times P(H_1)$. The hypothesis $H_0$ will be chosen if $P(H_0\mid x) > P(H_1\mid x)$. In the long run, what proportion of the time will $H_0$ be chosen if $H_0$ is true $2/3$ of the time?
I have a hint that somehow I need to represent the ratio of posterior probabilities in terms of $\overline{X_n}$ and then apply Chebyshev's Inequality. This hint confused me even more. Any ideas?
Since $P(H_0\mid x) + P(H_1\mid x) = 1$, choosing $H_0$ when $P(H_0\mid x) > P(H_1\mid x)$ is the same as choosing it when $P(H_0\mid x) > 1/2$. By Bayes' rule, $$ P(H_0\mid x) = \frac{f(x\mid H_0)P(H_0)}{f(x\mid H_0)P(H_0)+ f(x\mid H_1)P(H_1)} = \frac{2e^{-\frac{x^2}{2\sigma^2}}}{2e^{-\frac{x^2}{2\sigma^2}}+ e^{-\frac{(x-1)^2}{2\sigma^2}}} = \frac{2}{2+e^{\frac{2x-1}{2\sigma^2}}},$$ where the last equality comes from dividing numerator and denominator by $e^{-\frac{x^2}{2\sigma^2}}$ and using $x^2-(x-1)^2 = 2x-1$. Setting $$P(H_0\mid x) > 1/2$$ and solving for $x$ gives the decision boundary $$x < \frac{1}{2} +\sigma^2\log(2).$$
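A quick numerical sanity check of the boundary ($\sigma = 1.5$ is an arbitrary illustrative value): at $x = \frac{1}{2}+\sigma^2\log 2$ the posterior should be exactly $1/2$, with $H_0$ favored just below it and $H_1$ just above it.

```python
import math

def posterior_h0(x, sigma):
    """P(H0 | x) = 2 / (2 + exp((2x - 1) / (2 sigma^2)))."""
    return 2.0 / (2.0 + math.exp((2.0 * x - 1.0) / (2.0 * sigma ** 2)))

sigma = 1.5                                  # arbitrary choice for illustration
boundary = 0.5 + sigma ** 2 * math.log(2)

print(posterior_h0(boundary, sigma))                 # ≈ 0.5 at the boundary
print(posterior_h0(boundary - 0.01, sigma) > 0.5)    # True: choose H0 below it
print(posterior_h0(boundary + 0.01, sigma) < 0.5)    # True: choose H1 above it
```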
So in the long run, your probability of deciding on $H_0$ is $$ P\left(X\le \frac{1}{2} + \sigma^2\log(2)\right) = P\left(X\le \frac{1}{2} + \sigma^2\log(2)\mid H_0\right)P(H_0) + P\left(X\le \frac{1}{2} + \sigma^2\log(2)\mid H_1\right)P(H_1)\\= \Phi\left(\sigma\log(2) + \frac{1}{2\sigma}\right)\frac{2}{3} + \Phi\left(\sigma\log(2) -\frac{1}{2\sigma}\right)\frac{1}{3}.$$
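The mixture formula is easy to confirm by simulation: draw the true hypothesis with $P(H_0)=2/3$, sample $X$ from the corresponding normal, apply the threshold, and compare the empirical proportion to the closed form (a sketch with $\sigma = 1$; the seed and trial count are arbitrary).

```python
import math
import random

random.seed(0)
sigma = 1.0
boundary = 0.5 + sigma ** 2 * math.log(2)

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Closed-form answer from the mixture above.
exact = (phi(sigma * math.log(2) + 0.5 / sigma) * 2 / 3
         + phi(sigma * math.log(2) - 0.5 / sigma) * 1 / 3)

# Simulate: H0 holds 2/3 of the time; choose H0 whenever X falls below the boundary.
n_trials = 200_000
chose_h0 = sum(
    random.gauss(0.0 if random.random() < 2 / 3 else 1.0, sigma) < boundary
    for _ in range(n_trials)
)

print(exact, chose_h0 / n_trials)  # the two agree to roughly two decimal places
```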
This is how I would do the problem 'as written'. Your hint confuses me as well. If each trial consisted of $n$ i.i.d. measurements rather than one, we could replace every $\sigma$ above with $\sigma/\sqrt{n}$, the standard deviation of $\overline{X}_n$. Chebyshev seems odd here: it's not clear we are looking for a tail bound, and since the distribution is exactly normal we could compute the relevant probabilities exactly (or get a much sharper bound) if we were.
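To see where the $n$-sample reading of the hint leads: substituting $\sigma/\sqrt{n}$ for $\sigma$ turns the answer into $\Phi\!\left(\frac{\sigma}{\sqrt n}\log 2 + \frac{\sqrt n}{2\sigma}\right)\frac{2}{3} + \Phi\!\left(\frac{\sigma}{\sqrt n}\log 2 - \frac{\sqrt n}{2\sigma}\right)\frac{1}{3}$, which tends to $2/3$ as $n\to\infty$, i.e. the rule eventually picks the true hypothesis every time. That consistency is the kind of conclusion Chebyshev would give without normality. A short sketch, assuming the $\sigma \mapsto \sigma/\sqrt{n}$ substitution is what was intended:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma = 1.0  # arbitrary illustrative value
for n in (1, 4, 16, 64):
    s = sigma / math.sqrt(n)  # standard deviation of the sample mean
    p = (phi(s * math.log(2) + 0.5 / s) * 2 / 3
         + phi(s * math.log(2) - 0.5 / s) * 1 / 3)
    print(n, round(p, 4))  # approaches 2/3 as n grows
```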