I am reading Clarke and Barron (1990), "Information-theoretic asymptotics of Bayes methods".
In this paper, the setting is estimation of a parameter $\theta$ given observations $X^n$. There is a regularity condition (page 456, Condition 3):
The posterior distribution of $\theta$ given $X^n$ asymptotically concentrates on neighborhoods of $\theta_0$ except for $X^n$ in a set of probability of order $o(1/\log n)$; i.e., for every open set $N$ containing $\theta_0$ and every $\delta > 0$, $$P^n[W(N^c|X^n) > \delta] = o(1/\log n),$$ where $W(\cdot|X^n)$ is the posterior distribution of $\theta$ given $X^n$.
My scenario is "observation by observation": instead of each $X_i$, I observe $Y_i$ drawn through a stochastic kernel $p(y|x)$. So, assuming $(\theta, X)$ satisfies the above concentration condition, can we still say $(\theta, Y)$ satisfies it?
Edit: Suppose $p(Y|X)$ is nontrivial; concretely, assume the Fisher information $I_Y(\theta)$ that $Y$ carries about $\theta$ is strictly positive.
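Not a proof, but here is a toy sketch suggesting the answer should be "yes" in nice cases, under assumptions I am adding for illustration (Gaussian $X_i \sim N(\theta, \sigma_X^2)$, additive Gaussian kernel $Y_i = X_i + N(0, \sigma_{\text{noise}}^2)$, conjugate normal prior on $\theta$). Marginally $Y_i \sim N(\theta, \sigma_X^2 + \sigma_{\text{noise}}^2)$, so the posterior given $Y^n$ is available in closed form, and its mass outside a fixed neighborhood $N$ of $\theta_0$, i.e. $W(N^c|Y^n)$, can be computed directly:

```python
import math
import random

random.seed(0)

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

theta0 = 1.0        # true parameter (theta_0 in the question)
sigma_x = 1.0       # X_i ~ N(theta, sigma_x^2)
sigma_noise = 2.0   # kernel p(y|x): Y_i = X_i + N(0, sigma_noise^2)
sigma_y2 = sigma_x**2 + sigma_noise**2  # marginally Y_i ~ N(theta, sigma_y2)

tau2 = 10.0         # prior: theta ~ N(0, tau2)
eps = 0.2           # neighborhood N = (theta0 - eps, theta0 + eps)

masses = []
for n in [10, 100, 1000, 10000]:
    # simulate Y^n observation by observation through the kernel
    y = [random.gauss(theta0, sigma_x) + random.gauss(0.0, sigma_noise)
         for _ in range(n)]
    # conjugate normal posterior for theta given Y^n
    post_var = 1.0 / (1.0 / tau2 + n / sigma_y2)
    post_mean = post_var * (sum(y) / sigma_y2)
    post_sd = math.sqrt(post_var)
    # posterior mass outside the neighborhood, i.e. W(N^c | Y^n)
    mass_out = (norm_cdf((theta0 - eps - post_mean) / post_sd)
                + 1.0 - norm_cdf((theta0 + eps - post_mean) / post_sd))
    masses.append(mass_out)
    print(f"n={n:6d}  W(N^c|Y^n) = {mass_out:.3e}")
```

In this conjugate case $W(N^c|Y^n)$ shrinks rapidly with $n$, since the noise only inflates the observation variance from $\sigma_X^2$ to $\sigma_X^2 + \sigma_{\text{noise}}^2$ (so $I_Y(\theta) > 0$, matching the edit's assumption). Of course this does not address the $o(1/\log n)$ rate on the exceptional set in general, which is the actual question.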