Can we infer convergence in total variation distance from a Poincaré inequality?


Let $(E,\mathcal E,\mu)$ be a probability space, let $\lambda>0$ and, for each $t\ge0$, let $\kappa_t$ be a Markov kernel on $(E,\mathcal E)$ satisfying$^1$ $$\operatorname{Var}_\mu\left[\kappa_tf\right]\le\operatorname{Var}_\mu\left[f\right]e^{-2\lambda t}\;\;\;\text{for all }f\in L^2(\mu).\tag1$$

Assume $\mu$ is invariant with respect to $\kappa_t$, i.e.$^2$ $\mu\kappa_t=\mu$, for all $t\ge0$. From $(1)$ we can conclude $$\left\|(\kappa_t-\mu)f\right\|_{L^2(\mu)}\xrightarrow{t\to\infty}0\;\;\;\text{for all }f\in L^2(\mu).\tag2$$ Can we infer any other mode of convergence of $\kappa_t$ to $\mu$ as $t\to\infty$?
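(For completeness, here is the step from $(1)$ to $(2)$: invariance gives $\mu(\kappa_tf)=\mu f$, so the variance of $\kappa_tf$ is exactly the squared $L^2(\mu)$-distance in question, and in fact $(2)$ holds with an exponential rate:
$$\left\|(\kappa_t-\mu)f\right\|_{L^2(\mu)}^2=\operatorname{Var}_\mu\left[\kappa_tf\right]\le\operatorname{Var}_\mu\left[f\right]e^{-2\lambda t}\xrightarrow{t\to\infty}0.)$$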

For example, I would like to show convergence in total variation distance, i.e. if $\nu$ is a probability measure on $(E,\mathcal E)$, then $$|\nu\kappa_t-\mu|\xrightarrow{t\to\infty}0\tag3$$ (where $|\operatorname Q|$ denotes the total variation norm of a finite signed measure $\operatorname Q$ on $(E,\mathcal E)$). I'm willing to assume that $\mu$ is reversible with respect to $\kappa_t$, i.e. $$\int_A\mu({\rm d}x)\kappa_t(x,B)=\int_B\mu({\rm d}x)\kappa_t(x,A)\;\;\;\text{for all }A,B\in\mathcal E\tag4,$$ for all $t\ge0$. Moreover, if necessary, assume that $\mu$ and $\nu$ admit densities with respect to a common reference measure $\rho$ on $(E,\mathcal E)$ (written $\rho$ rather than $\lambda$, to avoid a clash with the rate constant in $(1)$).
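(To indicate why these extra assumptions might help, here is a sketch of a standard route, valid under the stronger hypothesis $\nu\ll\mu$ with $h:=\frac{{\rm d}\nu}{{\rm d}\mu}\in L^2(\mu)$. Reversibility $(4)$ yields $\frac{{\rm d}(\nu\kappa_t)}{{\rm d}\mu}=\kappa_th$, and since $\mu(\kappa_th)=\mu h=1$, the Cauchy–Schwarz inequality together with $(1)$ gives
$$|\nu\kappa_t-\mu|=\int\left|\kappa_th-1\right|{\rm d}\mu\le\left\|\kappa_th-1\right\|_{L^2(\mu)}=\operatorname{Var}_\mu\left[\kappa_th\right]^{1/2}\le e^{-\lambda t}\left\|h-1\right\|_{L^2(\mu)},$$
up to a convention-dependent factor $\frac12$ in the total variation norm. The question is whether anything survives without the assumption $h\in L^2(\mu)$.)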


$^1$ As usual, $\kappa_tf:=\int\kappa_t(\;\cdot\;,{\rm d}y)f(y)$ and, analogously, $\mu f:=\int f\:{\rm d}\mu$.

$^2$ $\mu\kappa_t$ denotes the composition of $\mu$ and $\kappa_t$, i.e. $(\mu\kappa_t)(A):=\int\mu({\rm d}x)\kappa_t(x,A)$ for $A\in\mathcal E$.