I want to prove the following theorem:
Let $(T_n)_{n\in\mathbb{N}}$ be a sequence of hypothesis tests $T_n:\Omega^n\to \{0,1\}$ between two probability distributions $p$ and $q$ over a finite set $\Omega$, such that the error of the first kind vanishes asymptotically:
$$p^{\otimes n}(T_n(x_1, \ldots,x_n)=0)\underset{n\to\infty}{\longrightarrow}0$$
Then:
$$\underset{n\to\infty}{\liminf} -\frac{1}{n} \log(E(T_n))\le D(p\,||\,q)$$
where:
- $E(T_n)=q^{\otimes n}(T_n(x_1,\ldots,x_n)=1)$ is the error of second kind.
- $D(p\,||\,q)=\sum_{x\in \Omega} p(x) \log(p(x)/q(x))$ is the Kullback-Leibler divergence between $p$ and $q$.
Moreover, the inequality is tight.
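To convince myself the statement (and its tightness) is plausible, I also ran a quick numerical sanity check. Everything in it is my own illustration, not part of the theorem: I take $p=\mathrm{Bern}(0.7)$, $q=\mathrm{Bern}(0.3)$, and the Neyman–Pearson threshold test whose error of the first kind is kept below an arbitrary level $\alpha=0.1$; the quantity $-\frac1n\log E(T_n)$ does seem to creep up towards $D(p\,||\,q)$:

```python
import math

def kl_bernoulli(a, b):
    """D(Bern(a) || Bern(b)) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def log_binom_pmf(n, k, p):
    """log of the Binomial(n, p) pmf at k."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def second_kind_exponent(n, a, b, alpha=0.1):
    """-(1/n) log E(T_n) for a Neyman-Pearson test between Bern(a) and
    Bern(b), a > b, with error of first kind at most alpha.

    The test outputs 1 ("decide p") when the number of ones K >= k*,
    where k* is the largest threshold with p^n(K < k*) <= alpha.
    """
    cdf, kstar = 0.0, 0
    for k in range(n + 1):
        cdf += math.exp(log_binom_pmf(n, k, a))
        if cdf <= alpha:
            kstar = k + 1
        else:
            break
    # Error of second kind, in log-space (it underflows for large n).
    log_beta = logsumexp([log_binom_pmf(n, k, b) for k in range(kstar, n + 1)])
    return -log_beta / n

a, b = 0.7, 0.3
print("D(p||q) =", kl_bernoulli(a, b))
for n in (100, 1000, 5000):
    print(n, second_kind_exponent(n, a, b))
```

(The convergence is slow, consistent with an $O(1/\sqrt{n})$ correction to the exponent.)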
I found related resources online, but they seem far more sophisticated than what I need. I am looking for an (almost) self-contained proof. What I tried so far:
- Identify the tightness case: I guess a good candidate would be the maximum-likelihood test $T_n(x_1,\ldots,x_n)=1$ if $p^{\otimes n}(x_1,\ldots,x_n)>q^{\otimes n}(x_1,\ldots,x_n)$ and $0$ otherwise. But I am already unable to prove that it achieves equality in the theorem.
- Forget about the $\liminf$ and try to prove the (stronger) per-$n$ statement $\log E(T_n)\ge -D(p^{\otimes n}\,||\,q^{\otimes n})$.
- Try to apply the Neyman–Pearson lemma: I get some inequality, but not the one I want.
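For the second attempt, at least the right-hand side is explicit: the Kullback-Leibler divergence is additive over independent products, since the log of a product of likelihood ratios splits into a sum and the coordinates other than $x_j$ marginalize out:

$$\begin{aligned}
D(p^{\otimes n}\,||\,q^{\otimes n})
&= \sum_{x_1,\ldots,x_n} \prod_{i=1}^n p(x_i)\, \sum_{j=1}^n \log\frac{p(x_j)}{q(x_j)} \\
&= \sum_{j=1}^n \sum_{x_j\in\Omega} p(x_j)\, \log\frac{p(x_j)}{q(x_j)}
= n\, D(p\,||\,q),
\end{aligned}$$

so that statement would indeed give $-\frac1n\log E(T_n)\le D(p\,||\,q)$ for every $n$.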
I would be grateful for any hint!