Rate of convergence of Bayesian posterior


Suppose a data generating process (DGP) is parameterized by some unknown parameter $\theta_0$, say $P_{\theta_0}$, and we want to estimate the value of $\theta_0$ using Bayesian methods. Let $\pi(\theta)$ be the prior over $\Theta$, the space of possible values of $\theta$.

I understand that, for essentially any prior that puts positive mass on (or near) $\theta_0$, if the observed data are iid, then the Bayesian posterior will eventually concentrate on $\theta_0$ as the number of observations $n$ tends to infinity. (Correct me if I'm wrong.)

My question: Are there any results that predict differential rates of convergence of the posterior distribution based on different priors?

For example, suppose $\Theta=\{\theta_0,\theta_1\}$ and the DGP is $P_{\theta_0}$. Consider two priors on $\Theta$: $$\pi_1(\theta_0)=0.2,\qquad \pi_1(\theta_1)=0.8$$ and $$\pi_2(\theta_0)=\pi_2(\theta_1)=0.5.$$ Are there any results that say the posterior based on $\pi_2$ converges faster than that based on $\pi_1$? (Or the other way around?)
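To make the two-point example concrete, here is a minimal simulation sketch. It assumes (purely for illustration, since the question leaves the DGP abstract) that $\theta$ indexes a Bernoulli success probability, with $P_{\theta_0}=\mathrm{Bernoulli}(0.3)$ and $P_{\theta_1}=\mathrm{Bernoulli}(0.7)$, and tracks the posterior mass on $\theta_0$ under both priors as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concrete instance of the two-point example:
# theta indexes a Bernoulli success probability (an assumption
# made only so there is something to simulate).
p0, p1 = 0.3, 0.7                    # P_{theta_0}, P_{theta_1}
priors = {"pi_1": 0.2, "pi_2": 0.5}  # prior mass on theta_0

n = 200
x = rng.binomial(1, p0, size=n)      # iid draws from the true DGP

# Running log-likelihood of the first k observations under each theta
ll0 = np.cumsum(x * np.log(p0) + (1 - x) * np.log(1 - p0))
ll1 = np.cumsum(x * np.log(p1) + (1 - x) * np.log(1 - p1))

posterior = {}
for name, w0 in priors.items():
    # Bayes: posterior log-odds = prior log-odds + log-likelihood ratio
    log_odds = np.log(w0 / (1 - w0)) + (ll0 - ll1)
    posterior[name] = 1.0 / (1.0 + np.exp(-log_odds))

for m in (10, 50, 200):
    print(m, {k: round(float(v[m - 1]), 4) for k, v in posterior.items()})
```

In this parameterization the prior enters the posterior log-odds only as the additive constant $\log\frac{\pi(\theta_0)}{\pi(\theta_1)}$, so a run like this shows how the two posterior paths relate as the likelihood ratio accumulates.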

Intuitively I would expect the posterior based on $\pi_2$ to converge faster, as it places more prior mass on $\theta_0$. But in my experience, intuition is hardly reliable when it comes to probability theory.