Suppose I have a coin that I want to test for bias. My problem is that there seems to be a philosophical difficulty in defining "bias" in the first place. Let me illustrate with an example.
Firstly, I use a Bayesian approach, and start off with a beta(1,1) distribution.
Suppose that my coin is, in fact, biased, coming up heads 55% of the time, contrary to my assumption that it is fair. I flip the coin 1000 times and get 550 H and 450 T, so the posterior distribution is beta(551, 451). Given the large number of trials, there is a huge spike in the likelihood at 0.55. But how do I, as a Bayesian, formulate the notion of "biasedness"? I might take "what is the probability, under the posterior beta distribution, that the heads probability lies between 0.4 and 0.6?" as my notion of what biasedness means. It will turn out that this probability is close to 1. The problem is that this is an arbitrary definition. How would a Bayesian better frame that question?
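To make the interval probability concrete, here is a sketch of that computation in Python with scipy.stats (an assumed choice of tooling; the rest of this question uses R):

```python
from scipy.stats import beta

# Posterior after 550 heads and 450 tails with a Beta(1, 1) prior
a, b = 551, 451

# Posterior probability that the heads probability lies in (0.4, 0.6)
p_in_interval = beta.cdf(0.6, a, b) - beta.cdf(0.4, a, b)
print(p_in_interval)  # close to 1
```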
From a frequentist point of view, OTOH, I can simply do a binomial test. Using R, I type:
binom.test(550, 1000, 0.5, alternative="two.sided")
and out pops the answer: p-value = 0.001731. I would therefore conclude that the coin is biased.
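For readers without R, the same exact binomial test can be run in Python via `scipy.stats.binomtest` (a sketch, assuming SciPy is available):

```python
from scipy.stats import binomtest

# Exact two-sided binomial test: 550 heads in 1000 flips against p = 0.5
result = binomtest(550, n=1000, p=0.5, alternative="two-sided")
print(result.pvalue)  # ~0.0017, matching R's binom.test
```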
First notice that with the given prior, $\mathop{Beta}(1,1)$, the probability of the coin being exactly fair is zero (because the distribution is continuous). The same is true of our posterior. So if we ask the question "What's the probability that the coin is fair?" we will just get the answer $0$. From this point there are two things we could do:
The first is to admit that no coin can ever be precisely fair, and so we change our question to "Is the coin fair enough for our purposes?" or "How biased is the coin likely to be?". To answer these questions we can pick an interval around $0.5$ and ask how likely the true frequency of the coin is to lie in that interval. For example, if we wanted our coin to be fair to one part in one hundred, we could ask for the probability that its bias lies in the range $0.5\pm 0.01$. For the $\mathop{Beta}(551,451)$ posterior this comes out to $0.0056\dots$, so we conclude that the coin isn't very likely to be that fair. Alternatively we can choose how sure we want to be and then find intervals for that. Again using the $\mathop{Beta}(551,451)$ distribution, we find that there's a $95\%$ chance that the bias lies outside the range $0.5\pm 0.024\dots$, so the coin is likely to be biased to at least that extent.
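Both numbers in the preceding paragraph can be reproduced numerically; here is a sketch in Python with scipy.stats (an assumed choice of tooling):

```python
from scipy.stats import beta

a, b = 551, 451  # the Beta(551, 451) posterior from the question

# Probability that the bias lies within 0.5 +/- 0.01
p_tight = beta.cdf(0.51, a, b) - beta.cdf(0.49, a, b)
print(p_tight)  # ~0.0056

# Find c such that P(|theta - 0.5| > c) = 0.95, by a simple scan
# (in practice one might use a root finder instead)
c = 0.0
while beta.cdf(0.5 - c, a, b) + beta.sf(0.5 + c, a, b) > 0.95:
    c += 0.0001
print(c)  # ~0.024
```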
The second alternative is to decide that there really is some probability of the coin being exactly fair. In that case you should put that into your prior. For example, you could take a prior that is a 50/50 weighted average of the $\mathop{Beta}(1,1)$ distribution and a point mass at $0.5$ (i.e. your density function is $\tfrac 12+\tfrac 12\delta(\theta-0.5)$). This would represent a $50\%$ chance of the coin being fair and a $50\%$ chance of it having some bias. Now the likelihood of seeing the result $(550H,450T)$ when the coin is fair is $$\binom{1000}{550}0.5^{550}0.5^{450}=0.000169\dots$$ while the likelihood of seeing the result $(550H,450T)$ when the coin is biased is $$\int_0^1\binom{1000}{550}\theta^{550}(1-\theta)^{450} \mathrm d\theta=\frac{1}{1001}.$$ Note that $1/1001:0.000169\approx 0.86:0.14$, so by Bayes' theorem our posterior will be a weighted mixture in the ratio $0.86:0.14$. The "biased" part of the distribution also updates as before, so our posterior is a weighted combination of $86\%$ a $\mathop{Beta}(551,451)$ distribution and $14\%$ a point mass at $0.5$. We can then say that the coin has a $14\%$ chance of being fair and an $86\%$ chance of being biased.
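The mixture update above can be checked numerically; a sketch in Python, using scipy for the binomial pmf (an assumed choice of tooling):

```python
from scipy.stats import binom

n, k = 1000, 550

# Likelihood of the data if the coin is exactly fair
lik_fair = binom.pmf(k, n, 0.5)  # ~0.000169

# Marginal likelihood under the Beta(1, 1) "biased" component:
# the integral of C(n, k) * theta^k * (1-theta)^(n-k) over [0, 1]
# collapses to 1 / (n + 1)
lik_biased = 1 / (n + 1)

# Posterior weights for the 50/50 prior mixture, by Bayes' theorem
post_biased = lik_biased / (lik_biased + lik_fair)
post_fair = 1 - post_biased
print(post_biased, post_fair)  # ~0.86 and ~0.14
```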