I am learning about mirror descent, which (roughly speaking) says that we can maximize an objective $L$ by iterating $$ x^k=\mathop{\arg\max}_{x} \{L(x)-\eta^{-1}D(x,x^{k-1})\}. $$ Here $x$ ranges over probability distributions and $D$ is the KL divergence.
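To convince myself what the update looks like, I also tried a small numerical sketch. (This is my own toy example: I take a hypothetical objective $L(x)=-\mathrm{KL}(x\,\|\,q)$ for a fixed target distribution $q$, and I use the standard step that linearizes $L$ around $x^{k-1}$; with $D$ the KL divergence that proximal step has the closed-form multiplicative-weights update $x^k_i \propto x^{k-1}_i \exp(\eta\,\nabla L(x^{k-1})_i)$.)

```python
import numpy as np

# Hypothetical concave objective on the simplex: L(x) = -KL(x || q),
# which is maximized at x = q.
q = np.array([0.5, 0.3, 0.2])

def grad_L(x):
    # Gradient of L(x) = -sum_i x_i * log(x_i / q_i):
    # dL/dx_i = -(log(x_i / q_i) + 1)
    return -(np.log(x / q) + 1.0)

def mirror_step(x_prev, eta):
    # Mirror descent step with D = KL: maximize the linearization of L
    # minus eta^{-1} * KL(x, x_prev). Closed form (multiplicative weights):
    #   x^k_i  proportional to  x^{k-1}_i * exp(eta * grad_L(x^{k-1})_i)
    x = x_prev * np.exp(eta * grad_L(x_prev))
    return x / x.sum()          # renormalize onto the simplex

x = np.ones(3) / 3              # uniform starting distribution
for _ in range(200):
    x = mirror_step(x, eta=0.5)
print(np.round(x, 3))           # approaches the maximizer q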
What I don't see is why the following claim holds: $$ L(x^k)-\eta^{-1}D(x^k,x^{k-1})\ge L(x^*)-\eta^{-1}D(x^*,x^{k-1}){\color{red}{+\eta^{-1} D(x^*,x^k)}}. $$
It obviously holds if we drop the red term, but I don't see why the stronger version with the red term is a direct consequence of mirror descent.
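To spell out the part I do understand: since $x^k$ is by definition the maximizer of $x\mapsto L(x)-\eta^{-1}D(x,x^{k-1})$, evaluating that objective at any other feasible point, in particular at $x^*$, can only give a smaller or equal value, i.e. $$ L(x^k)-\eta^{-1}D(x^k,x^{k-1})\ge L(x^*)-\eta^{-1}D(x^*,x^{k-1}). $$ So my question is really only about where the extra red term comes from.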
Any help would be appreciated. Thank you in advance.