Hypothesis test in Bayesian statistics


Let $X\sim N(\theta,1)$ and suppose we have 5 independent observations $x=(4.9,5.6,5.1,4.6,3.6)$. The prior probability that $\theta=4.01$ is $0.5$; the remaining values of $\theta$ are given the density $g(\theta)$.

a) Assuming $g(\theta)$ is the density of $N(4.01,1)$, test the hypothesis $$H_0:\theta=4.01 \quad \text{vs} \quad H_1:\theta\neq 4.01.$$

From what I have learned, to perform a hypothesis test I need to find $$a_0=P(\theta\in\Theta_0\mid x),$$ where $a_0+a_1=1$, and reject $H_0$ if $$a_0<a_1.$$ I can do this in the cases where the null hypothesis is not a point, but in this case I have a few doubts.

My course notes contain the theorem below.

Theorem: For any prior of the form $$\pi(\theta)=\pi_0\quad\text{if}\quad \theta=\theta_0,$$ $$\pi(\theta)=\pi_1\, g(\theta)\quad\text{if}\quad \theta\neq \theta_0,$$ with $\pi_1=1-\pi_0$ and $$\int_{\theta\neq \theta_0}g(\theta)\,d\theta=1,$$ the posterior probability of $H_0$ satisfies $$a_0=P(\theta=\theta_0\mid x)\geq \left[1+\frac{1-\pi_0}{\pi_0}\,\frac{r(x)}{f(x\mid\theta_0)}\right]^{-1},$$ where $$r(x)=\sup_{\theta\neq\theta_0}f(x\mid\theta),$$ and usually $$r(x)=f(x\mid\hat{\theta})$$ with $\hat{\theta}$ the maximum likelihood estimate.
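To make the bound concrete, here is a short Python sketch of how I would evaluate it with the data above, assuming $r(x)=f(x\mid\overline{x})$ (the likelihood at the MLE) and $\pi_0=0.5$; the function name `loglik` is just my own. Please correct me if the computation misreads the theorem.

```python
import math

# Data and setup from the problem statement
x = [4.9, 5.6, 5.1, 4.6, 3.6]
theta0 = 4.01
pi0 = 0.5            # prior mass on H0: theta = theta0
n = len(x)
xbar = sum(x) / n    # MLE of theta for i.i.d. N(theta, 1), here 4.76

def loglik(theta):
    # log f(x | theta) for i.i.d. N(theta, 1) observations
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (xi - theta) ** 2
               for xi in x)

# r(x) = sup_{theta != theta0} f(x | theta) = f(x | MLE),
# so log of r(x) / f(x | theta0) is the log-likelihood difference
log_ratio = loglik(xbar) - loglik(theta0)

# Lower bound on a0 = P(theta = theta0 | x) from the theorem
lower_bound = 1.0 / (1.0 + (1 - pi0) / pi0 * math.exp(log_ratio))
print(round(lower_bound, 4))
```

With these numbers the log-ratio reduces to $\frac{n}{2}(\overline{x}-\theta_0)^2$, so the bound only depends on the data through $\overline{x}$.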

In this case $\hat{\theta}=\overline{x}$, but the density $f(x\mid\overline{x})$ doesn't make sense to me. In one example that I looked at they take $f(\overline{x}\mid\hat{\theta})$ instead, but I don't understand the logic.
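Writing out my current guess at why the two versions would agree (please correct me if this is wrong): since $\overline{X}$ is sufficient for $\theta$ and $\overline{X}\mid\theta\sim N(\theta,1/n)$, the factor of the likelihood that does not involve $\theta$ cancels in the ratio, so
$$\frac{f(x\mid\hat{\theta})}{f(x\mid\theta_0)}=\frac{f(\overline{x}\mid\hat{\theta})}{f(\overline{x}\mid\theta_0)}=\exp\!\left(\frac{n}{2}(\overline{x}-\theta_0)^2\right),$$
and therefore the bound on $a_0$ is the same whether one plugs the full sample or just $\overline{x}$ into the theorem.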

Do I need to use the distribution of the maximum likelihood estimator, assuming that $\theta=\hat{\theta}$?

If someone could give me a detailed explanation of how this works, I would really appreciate it.