I've been searching through many books on Bayesian inference, but I still can't find anything easy to read that explains the basics of Bayesian hypothesis testing.
I mean, I know about prior and posterior distributions. But then I've met terms like acceptance level, loss function, Bayes factor, least favorable answer, posterior probability, (posterior) quantiles, credible interval, highest posterior density (interval), ROPE..., and I'm afraid I'm confused about them.
I just don't see which of these (and how) I can use when testing the same hypotheses as in the classical approach (about a mean with known/unknown variance, a ratio of variances, a correlation coefficient, probabilities, and two-sample tests).
I'm also not sure which of the things I'm used to from frequentist testing carry over to Bayesian testing: p-values, the significance level $\alpha$, power and the power function, Type I and Type II errors...
Can you give me an example of what to do when I'm given, e.g., $Y_1,...,Y_n|\mu,\sigma^2 \sim N (\mu,\sigma^2)$ with a known $\sigma^2$ and I am to test:
$H_0:\mu=\mu_0,\; H_1:\mu\not=\mu_0;$
and
$H_0:\mu\geq\mu_0,\; H_1:\mu<\mu_0.$
I think it would be easier for me if I could see the exact steps of the testing process, as in the classical approach: the test statistic and its distribution, choosing a significance level, the critical region, the observed value of the test statistic, and the decision: reject or fail to reject the null hypothesis in favor of the alternative.
I'd be very thankful for any advice :)
Remember that Bayesian inference is done through the posterior distribution, so whenever you're testing a hypothesis you have a full probability distribution to work with.
For example, if you're testing $H_0: \mu > k$, you simply compute the probability $P(\mu > k \mid y)$ directly from your posterior.
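For the asker's concrete setup (normal data, known $\sigma^2$), a minimal sketch of this looks as follows, assuming a conjugate normal prior $\mu \sim N(m_0, s_0^2)$; the function name, the data, and the prior hyperparameters are all hypothetical choices for illustration:

```python
import math

def posterior_prob_mu_ge(data, mu0, sigma2, m0, s0_2):
    """P(mu >= mu0 | data) for N(mu, sigma2) data with known sigma2,
    under a conjugate N(m0, s0_2) prior on mu (illustrative helper)."""
    n = len(data)
    ybar = sum(data) / n
    # Standard conjugate update: the posterior for mu is again normal.
    post_var = 1.0 / (1.0 / s0_2 + n / sigma2)
    post_mean = post_var * (m0 / s0_2 + n * ybar / sigma2)
    # P(mu >= mu0) = 1 - Phi(z) under the normal posterior.
    z = (mu0 - post_mean) / math.sqrt(post_var)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Toy data; with mu0 = 5 the one-sided hypothesis H0: mu >= 5
# gets the posterior probability below.
data = [5.1, 4.8, 5.3, 5.0, 4.9]
p = posterior_prob_mu_ge(data, mu0=5.0, sigma2=0.25, m0=5.0, s0_2=1.0)
```

The resulting `p` is the posterior probability of $H_0: \mu \geq \mu_0$; one common (loss-based) decision rule is to reject $H_0$ when this probability falls below some threshold you choose in advance.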
Fun fact: you could even perform hypothesis testing a priori, using only the prior.
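To make that last remark concrete, the same question can be asked of the prior alone, before seeing any data; here again the normal prior $N(m_0, s_0^2)$ and the helper name are illustrative assumptions:

```python
import math

def prior_prob_mu_ge(mu0, m0, s0_2):
    """P(mu >= mu0) under the prior mu ~ N(m0, s0_2) alone (illustrative)."""
    z = (mu0 - m0) / math.sqrt(s0_2)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A prior centered exactly at mu0 assigns probability 1/2 to H0: mu >= mu0.
p_prior = prior_prob_mu_ge(mu0=5.0, m0=5.0, s0_2=1.0)
```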