Testing hypotheses about parameters using posterior distributions


My question is the following: I have a parameter $\theta$ and I want to evaluate the hypothesis that $\theta > c$, where $c \in (0, 1)$, using Bayesian methods.

The parameter has a known prior $\pi (\theta)$, which follows a Beta distribution $\mathcal{B} (\alpha_s , \beta_s)$. Furthermore, some data $X$ is gathered with density $f(X \mid \theta)$, which also follows a Beta distribution $\mathcal{B} (\alpha_f , \beta_f)$. The posterior distribution of $\theta$ can then be found using Bayes' rule for continuous priors and likelihoods.
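For concreteness, the Bayes' rule step can be carried out numerically on a grid when the prior–likelihood pair is not conjugate. Here is a minimal sketch with hypothetical numbers, using a binomial likelihood (7 successes in 10 trials) in place of $f(X \mid \theta)$ so the grid result can be checked against the conjugate Beta posterior:

```python
import numpy as np
from scipy.stats import beta, binom

# Grid approximation of the posterior: pi(theta | X) ∝ pi(theta) * f(X | theta).
# All numbers are hypothetical; the binomial likelihood stands in for
# f(X | theta) so the answer can be checked against the conjugate
# Beta(alpha_s + 7, beta_s + 3) result.
alpha_s, beta_s = 2.0, 2.0                    # prior Beta(alpha_s, beta_s)
k, n = 7, 10                                  # toy data: 7 successes in 10 trials

theta = np.linspace(1e-6, 1 - 1e-6, 10_001)   # grid over (0, 1)
dtheta = theta[1] - theta[0]

prior = beta.pdf(theta, alpha_s, beta_s)      # pi(theta)
likelihood = binom.pmf(k, n, theta)           # f(X | theta)
unnorm = prior * likelihood                   # numerator of Bayes' rule
posterior = unnorm / (unnorm.sum() * dtheta)  # normalize numerically
```

The normalized `posterior` array approximates $\pi(\theta \mid X)$; its grid mean is close to the analytic posterior mean $9/14$ of the conjugate Beta(9, 5).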

I am relatively new to Bayesian methods and come from a frequentist background, so my reasoning tells me that the probability that $\theta > c$ under the posterior $\pi (\theta \mid X)$ is given by the posterior's survival function evaluated at $c$, i.e. $S(c) = 1 - F(c)$, where $F$ is the posterior CDF. This feels like computing a $p$-value for $c$ under the posterior distribution, and I know most Bayesian statisticians dislike $p$-values. I've looked into Bayes factors, but I have only seen them brought up in the context of model selection.
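Computationally, the quantity I have in mind, $P(\theta > c \mid X)$, is easy to obtain once the posterior is pinned down. A minimal sketch, assuming (hypothetically) that the posterior worked out to a Beta(5, 3) and taking $c = 0.5$:

```python
from scipy.stats import beta

# Hypothetical posterior parameters and threshold.
a_post, b_post = 5, 3
c = 0.5

# Posterior probability P(theta > c | X) = survival function of the
# posterior evaluated at c (i.e. 1 - posterior CDF at c).
p_gt_c = beta.sf(c, a_post, b_post)
print(f"P(theta > {c} | X) = {p_gt_c:.4f}")
```

For these numbers the result is $99/128 \approx 0.773$, which would be read directly as the posterior probability that the hypothesis holds.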

What would be a more Bayesian way to evaluate this hypothesis, or is my reasoning acceptable?