I would appreciate some proof verification of the following statement concerning convergence in total variation for normal distributions, given convergence of the first and second moments.
I suppose it is true, but a quick Google search and a search on this site did not turn it up, which made me doubt it.
Question:
We are dealing with the total variation norm, defined as $$||P-Q||_{TV} = \underset{B}{\sup} |P(B) - Q(B)|.$$
Suppose we have sequences of random variables $\mu_n$ and $\sigma_n^2$ such that $$\mu_n \overset{d}{\rightarrow} M$$
and $$\sigma_n^2 \overset{\mathbb{P}}{\rightarrow} c ,$$
where $c$ is some constant and $M$ is a random variable. We want to show that $$||\mathcal{N}(\mu_n,\sigma_n^2) - \mathcal{N}(M,c)||_{TV} \overset{\mathbb{P}}{\rightarrow} 0$$
Proof:
We can use Pinsker's inequality, which states that $$ ||P-Q||_{TV} \leq \sqrt{\tfrac{1}{2} D_{KL}(P||Q)}.$$
The Kullback-Leibler Divergence for the two normal distributions in question is given by
$$ D_{KL}\big(\mathcal{N}(\mu_n,\sigma_n^2)\,\big\|\,\mathcal{N}(M,c)\big) = \frac{1}{2} \left( \frac{\sigma_n^2}{c} - 1 + \frac{(\mu_n - M)^2}{c} + \ln \frac{c}{\sigma_n^2} \right),$$
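For completeness, this closed form follows from a direct computation (with $P = \mathcal{N}(\mu_1,\sigma_1^2)$, $Q = \mathcal{N}(\mu_2,\sigma_2^2)$, densities $p, q$, and $X \sim P$):
$$ D_{KL}(P||Q) = \mathbb{E}_P\!\left[ \ln\frac{p(X)}{q(X)} \right] = \mathbb{E}_P\!\left[ \ln\frac{\sigma_2}{\sigma_1} - \frac{(X-\mu_1)^2}{2\sigma_1^2} + \frac{(X-\mu_2)^2}{2\sigma_2^2} \right] = \ln\frac{\sigma_2}{\sigma_1} - \frac{1}{2} + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2}, $$
using $\mathbb{E}_P[(X-\mu_1)^2] = \sigma_1^2$ and $\mathbb{E}_P[(X-\mu_2)^2] = \sigma_1^2 + (\mu_1-\mu_2)^2$. Substituting $\mu_1 = \mu_n$, $\sigma_1^2 = \sigma_n^2$, $\mu_2 = M$, $\sigma_2^2 = c$ and rearranging gives the expression above.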
which converges in probability to $0$ by our assumptions (using Slutsky's lemma, the continuous mapping theorem, and the fact that weak convergence to a constant implies convergence in probability to that constant). Thus the proof is complete.
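As a purely numerical sanity check (not part of the proof), one can illustrate the closed-form KL divergence and Pinsker's bound for a deterministic sequence $\mu_n \to m$, $\sigma_n^2 \to c$; the TV distance is approximated by numerically integrating $\tfrac{1}{2}\int |p - q|$. The parameter values below are arbitrary choices for illustration:

```python
import math

def normal_pdf(x, mu, var):
    # Density of N(mu, var)
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def kl_normal(mu1, var1, mu2, var2):
    # Closed-form KL( N(mu1, var1) || N(mu2, var2) ), matching the formula above
    return 0.5 * (var1 / var2 - 1 + (mu1 - mu2) ** 2 / var2 + math.log(var2 / var1))

def tv_normal(mu1, var1, mu2, var2, lo=-15.0, hi=15.0, steps=50_000):
    # TV distance as (1/2) * integral of |p - q|, via a midpoint Riemann sum
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        total += abs(normal_pdf(x, mu1, var1) - normal_pdf(x, mu2, var2)) * dx
    return 0.5 * total

m, c = 0.0, 1.0  # limiting parameters, taken deterministic here
for n in (1, 10, 100):
    mu_n, var_n = m + 1.0 / n, c + 1.0 / n
    kl = kl_normal(mu_n, var_n, m, c)
    tv = tv_normal(mu_n, var_n, m, c)
    # Pinsker: TV <= sqrt(KL / 2); both sides shrink as (mu_n, var_n) -> (m, c)
    print(n, tv, math.sqrt(kl / 2), tv <= math.sqrt(kl / 2))
```

As $n$ grows, both the TV estimate and the Pinsker bound $\sqrt{D_{KL}/2}$ shrink toward $0$, consistent with the argument above.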
Additional question: Could one also replace the convergence of the variances to a constant by convergence to some random variable? In that case Slutsky's theorem would no longer be available to conclude that the last expression converges to $0$.