Let's say we have a series of coin flips where there is some unknown bias $p$.
How fast does our estimate of the bias converge to $p$?
An approach: concentration of measure
A typical estimate for $p$ after $n$ samples is the empirical frequency $$\pi_n = \frac{\#\text{heads}}{n} = \frac{\#\text{heads}}{\#\text{heads} + \#\text{tails}}.$$ We can then consider the difference $$D_n = |\pi_n - p|$$ which we expect to go to $0$ as the number of samples increases. Since $D_n \geq 0$, the Markov inequality gives $$P(D_n \geq \epsilon) \leq \frac{1}{\epsilon} \mathbb{E}[D_n].$$
Note that $\mathbb{E}[D_n] \neq |\mathbb{E}[\pi_n] - p|$: absolute value does not commute with expectation (by Jensen's inequality, $\mathbb{E}[D_n] \geq |\mathbb{E}[\pi_n] - p| = 0$, with equality only in degenerate cases). A more productive step is to apply Markov to $D_n^2$, i.e. Chebyshev's inequality: $$P(D_n \geq \epsilon) \leq \frac{\mathrm{Var}(\pi_n)}{\epsilon^2} = \frac{p(1-p)}{n\epsilon^2},$$ so the deviation shrinks at rate $1/\sqrt{n}$. Hoeffding's inequality sharpens this to $P(D_n \geq \epsilon) \leq 2e^{-2n\epsilon^2}$.
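Hoeffding's inequality, $P(D_n \geq \epsilon) \leq 2e^{-2n\epsilon^2}$, gives a quantitative rate for the tail probability. A quick simulation sketch (the helper name and all parameter values below are my own illustration, not from the question) compares the empirical tail frequency against that bound:

```python
import random
import math

def simulate_deviation(p, n, trials=2000, seed=0):
    """Return |pi_n - p| over many simulated runs of n flips with bias p."""
    rng = random.Random(seed)
    devs = []
    for _ in range(trials):
        heads = sum(rng.random() < p for _ in range(n))
        devs.append(abs(heads / n - p))
    return devs

# Illustrative values (my choice): p = 0.3, deviation threshold eps = 0.05.
p, eps = 0.3, 0.05
for n in [100, 400, 1600]:
    devs = simulate_deviation(p, n)
    emp = sum(d >= eps for d in devs) / len(devs)
    bound = min(1.0, 2 * math.exp(-2 * n * eps ** 2))  # Hoeffding upper bound
    print(f"n={n:5d}  P(D_n >= eps) ~ {emp:.3f}  Hoeffding bound: {bound:.3f}")
```

The empirical tail probability should sit below the bound for every $n$ and drop rapidly as $n$ grows, consistent with the $1/\sqrt{n}$ scale of typical deviations.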
Another approach: Bayesian
In this case the posterior is a Beta$(a,b)$ where $a = a_0 + \#\text{heads}$, $b = b_0 + \#\text{tails}$, and Beta$(a_0,b_0)$ is the prior on $p$.
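The conjugate update is simple enough to sketch directly (a minimal Python illustration; the function name and the simulated values are my own choices, not from the question):

```python
import random

def beta_posterior(flips, a0=1.0, b0=1.0):
    """Conjugate update: a Beta(a0, b0) prior becomes Beta(a0 + #heads, b0 + #tails)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return a0 + heads, b0 + tails

# Illustrative values (my choice): true bias 0.7, uniform Beta(1,1) prior, 500 flips.
rng = random.Random(1)
p_true = 0.7
flips = [rng.random() < p_true for _ in range(500)]
a, b = beta_posterior(flips)
mean = a / (a + b)                          # posterior mean
var = a * b / ((a + b) ** 2 * (a + b + 1))  # posterior variance
print(f"posterior Beta({a}, {b}): mean={mean:.3f}, var={var:.2e}")
```

The posterior mean lands near the true bias and the posterior variance is of order $p(1-p)/n$, which already hints at the concentration question below.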
I think it would be true that Beta$(a,b) \rightarrow \delta_p$ as $n \to \infty$, where $\delta_p$ is the point mass at the true bias $p$ (i.e. posterior consistency). I guess this convergence could be measured with some distance on distributions. KL divergence is awkward here, since $\delta_p$ is not absolutely continuous with respect to the Beta posterior, but the Wasserstein distance is well defined. How would one prove this distance goes to $0$, and at what rate?
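For the Wasserstein case, one concrete observation: against a point mass the squared $2$-Wasserstein distance reduces to a second moment, $W_2(\mu, \delta_p)^2 = \mathbb{E}_{X \sim \mu}[(X - p)^2]$, which for the Beta posterior equals its variance plus the squared bias of its mean. A small sketch (helper name, seed, and parameter values are mine) tracks this quantity as $n$ grows:

```python
import random

def posterior_moments(p, n, a0=1.0, b0=1.0, seed=2):
    """Simulate n flips with bias p; return the Beta posterior's mean and variance."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(n))
    a, b = a0 + heads, b0 + n - heads
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

p = 0.4  # illustrative true bias (my choice)
for n in [100, 1000, 10000]:
    mean, var = posterior_moments(p, n)
    # Against a point mass, W2(Beta(a,b), delta_p)^2 = E[(X - p)^2]
    # under the posterior, which is exactly var + (mean - p)^2.
    w2_sq = var + (mean - p) ** 2
    print(f"n={n:6d}  W2^2={w2_sq:.2e}  n*W2^2={n * w2_sq:.3f}")
```

The product $n \cdot W_2^2$ stays roughly bounded while $W_2^2$ itself shrinks, suggesting a $W_2 \sim c/\sqrt{n}$ rate, the same $1/\sqrt{n}$ scale as the frequentist bound (this is the Bernstein–von Mises phenomenon, though the simulation is only a heuristic, not a proof).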