On page 43 of Nocedal & Wright's Numerical Optimization, the authors state the following Theorem 3.4 without proof:
Suppose that $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is twice continuously differentiable and that the iterates generated by the steepest descent method with exact line searches converge to a point $x^{\ast}$ at which the Hessian matrix $\nabla^2f(x^{\ast})$ is positive definite. Let $r$ be any scalar satisfying $$r\in\left(\frac{\lambda_n-\lambda_1}{\lambda_n+\lambda_1},1\right),$$ where $\lambda_1\leq \lambda_2 \leq \cdots \leq \lambda_n$ are the eigenvalues of $\nabla^2f(x^{\ast})$. Then for all $k$ sufficiently large, we have $$f(x_{k+1})-f(x^{\ast}) \leq r^2\left(f(x_{k})-f(x^{\ast})\right).$$
Question: I am looking for a formal proof of this theorem.
I understand the results (and the derivations leading to them) presented just before in the book, which describe the convergence rate of steepest descent with exact line search in the quadratic (or strongly convex) case: $$\|x_{k+1}-x^{\ast}\|_Q^2\leq \left(\frac{\lambda_n-\lambda_1}{\lambda_n+\lambda_1}\right)^2\|x_{k}-x^{\ast}\|_Q^2.$$ Looking into this myself, I figured out that the trick is to use $\nabla^2f(x^{\ast})$, the Hessian of the objective at the solution point, as if it were the $Q$ matrix of a quadratic problem, but I am not able to write a formal proof.
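For what it's worth, here is a quick numerical sanity check of the quadratic-case bound I quoted above, using NumPy. It runs steepest descent with the exact line-search step length $\alpha_k = \nabla f_k^T\nabla f_k / (\nabla f_k^T Q \nabla f_k)$ on $f(x)=\tfrac12 x^TQx$ (so $x^{\ast}=0$), and checks the per-step contraction in the $Q$-norm. The random matrix construction and tolerances are just my choices for the sketch:

```python
import numpy as np

# Sanity check: for f(x) = (1/2) x^T Q x with Q positive definite,
# steepest descent with exact line search satisfies
#   ||x_{k+1}||_Q^2 <= ((l_n - l_1)/(l_n + l_1))^2 * ||x_k||_Q^2,
# where l_1, l_n are the extreme eigenvalues of Q (here x* = 0).

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)            # symmetric positive definite
eig = np.linalg.eigvalsh(Q)
l1, ln = eig[0], eig[-1]
rate = ((ln - l1) / (ln + l1)) ** 2    # squared contraction factor

def q_norm_sq(x):
    return float(x @ Q @ x)            # ||x||_Q^2 = x^T Q x

x = rng.standard_normal(n)
for _ in range(50):
    g = Q @ x                          # gradient of f at x
    alpha = (g @ g) / (g @ Q @ g)      # exact line-search step length
    x_new = x - alpha * g
    # bound holds at every iteration (small relative slack for round-off)
    assert q_norm_sq(x_new) <= rate * q_norm_sq(x) * (1 + 1e-9)
    x = x_new
```

This is of course not a proof, but it illustrates that the $Q$-norm bound is exactly the quantity one would try to transfer to the general case via $\nabla^2 f(x^{\ast})$.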
Thanks for the help!
(This question has been crossposted to MathOverflow.)