Following the proof in van der Vaart, Chapter 10: suppose that $P_{\theta}$ is a family of distributions indexed by a parameter $\theta\in \mathbb{R}^p$, and let $x^n = (x_1, \ldots, x_n)$ be a sample from $P_{\theta_0}$. The classical Bernstein–von Mises theorem says that \begin{equation} \label{eq: tv} ||P_{\sqrt{n}(\theta-\hat{\theta}_{MLE}) \mid x^n} - N(0, I_{\theta_0}^{-1}) ||_{TV} \to 0 \end{equation} in probability under $P_{\theta_0}$. Since taking expectations is continuous with respect to the total variation distance (under suitable uniform integrability of the posterior), we then have \begin{equation} \sqrt{n}(\hat{\theta}_{Bayes}-\hat{\theta}_{MLE}) \to 0 \end{equation} in probability under $P_{\theta_0}$, where $\hat{\theta}_{Bayes}$ is the posterior-mean Bayes estimator. Then by Slutsky's theorem, we conclude that the Bayes estimator $\hat{\theta}_{Bayes}$ has the same asymptotic distribution as the maximum likelihood estimator $\hat{\theta}_{MLE}$, i.e. \begin{equation} \sqrt{n}(\hat{\theta}_{Bayes}-\theta_0) \to N(0, I_{\theta_0}^{-1}) \end{equation} in distribution under $P_{\theta_0}$.
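As an illustrative sanity check (not part of the proof), here is a minimal simulation in a Beta–Bernoulli model, where both estimators have closed forms; the Beta(1, 1) prior and $\theta_0 = 0.3$ are arbitrary choices:

```python
import numpy as np

# Bernoulli(theta0) data with a Beta(1, 1) prior (both choices arbitrary).
rng = np.random.default_rng(0)
theta0 = 0.3

for n in (100, 10_000, 1_000_000):
    s = rng.binomial(1, theta0, size=n).sum()
    mle = s / n                # maximum likelihood estimator
    bayes = (s + 1) / (n + 2)  # posterior mean under Beta(1, 1)
    # sqrt(n) * (bayes - mle) should shrink toward 0 as n grows
    print(n, np.sqrt(n) * abs(bayes - mle))
```

The printed gap decays at rate $n^{-1/2}$, as the Slutsky argument predicts.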
Now I'm interested instead in the parameter $\eta = f(\theta)$, with $f$ a differentiable function. I want to prove that the asymptotic distribution of the Bayes estimator $\hat{\eta}_{Bayes}$ is the same as that of $\hat{\eta}_{MLE}$, that is, \begin{equation} N(0, \nabla f(\theta_0)^T I_{\theta_0}^{-1}\nabla f(\theta_0)). \end{equation}
Following the same logic as above, I need something like \begin{equation} ||P_{\sqrt{n}(\eta-\hat{\eta}_{MLE}) \mid x^n} - D||_{TV} \to 0 \end{equation} with $D$ a distribution with zero mean. If there were a delta method for convergence of measures in total variation, this would follow immediately from the first equation on this page. But I can't find such a result in the literature.
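For what it's worth, a quick simulation in the same Beta–Bernoulli setup with $f(\theta) = \theta^2$ (an arbitrary smooth choice, picked because the posterior mean of $\theta^2$ under a Beta posterior has a closed form) is consistent with the conjecture that $\sqrt{n}(\hat{\eta}_{Bayes}-\hat{\eta}_{MLE}) \to 0$:

```python
import numpy as np

# Bernoulli(theta0) data, Beta(1, 1) prior, eta = f(theta) = theta**2.
rng = np.random.default_rng(1)
theta0 = 0.3

for n in (100, 10_000, 1_000_000):
    s = rng.binomial(1, theta0, size=n).sum()
    # Posterior is Beta(s + 1, n - s + 1); E[theta^2 | x^n] has a closed form.
    a, b = s + 1, n - s + 1
    eta_bayes = a * (a + 1) / ((a + b) * (a + b + 1))
    eta_mle = (s / n) ** 2  # plug-in MLE of theta^2
    print(n, np.sqrt(n) * abs(eta_bayes - eta_mle))
```

The gap again vanishes at rate $n^{-1/2}$, but of course this is only a single conjugate example, not a proof.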
Does anyone know about this?