Total variation distance between two probabilities


I am reading Diaconis, P. (2009), "The Markov chain Monte Carlo revolution."

Section 2.3 (Convergence) defines the total variation distance between two probabilities (for fixed $x$):

$$\lVert K^n_x-\pi \rVert_{TV}=\frac{1}{2}\sum_y \lvert K^n(x,y)-\pi(y)\rvert = \max_{A\subseteq \mathcal{X}} \;\lvert K^n(x,A)-\pi(A)\rvert$$

I can't wrap my head around the last equality. Why should it be true (for countable state spaces)? I've done some numerical tests and it does indeed seem to hold :)

How can it be proved?
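For reference, here is a minimal numerical check in Python/NumPy along the lines of the tests mentioned above. The 3-state chain, the choice of $n$, and the starting state $x$ are arbitrary illustrative choices, not taken from the paper; the max on the right-hand side is computed by brute force over all $2^{|\mathcal{X}|}$ subsets, which is only feasible for tiny state spaces.

```python
import itertools
import numpy as np

# An arbitrary small 3-state transition matrix (rows sum to 1).
K = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Stationary distribution pi: left eigenvector of K for eigenvalue 1.
vals, vecs = np.linalg.eig(K.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

n, x = 5, 0                               # arbitrary power and start state
Kn_x = np.linalg.matrix_power(K, n)[x]    # row x of K^n, i.e. K^n(x, .)

# Left-hand side: half the L1 distance between K^n(x, .) and pi.
tv_half_l1 = 0.5 * np.abs(Kn_x - pi).sum()

# Right-hand side: maximum of |K^n(x, A) - pi(A)| over all subsets A.
states = range(len(pi))
tv_max = max(
    abs(Kn_x[list(A)].sum() - pi[list(A)].sum())
    for r in range(len(pi) + 1)
    for A in itertools.combinations(states, r)
)

print(tv_half_l1, tv_max)  # the two quantities coincide
```

The maximizing set $A$ turns out to be $\{y : K^n(x,y) > \pi(y)\}$, which is the key observation behind the proof of the equality.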
