Inequality from numerical optimization: remainder dominated by the first-order term


There's the following lemma, of which I don't understand one specific step:

If $x^*$ is a local solution of (12.1) then we have $$ \nabla f(x^*)^T d \geq 0 \text{ for all } d \in T_{\Omega}(x^*) $$

Here $T_\Omega(x^*)$ is the tangent cone at $x^*$.

The proof proceeds by contradiction: assume that $\nabla f(x^*)^T d < 0$ for some $d \in T_\Omega(x^*)$. By the definition of the tangent cone, let $\left\{z_k\right\} \subset \Omega$ and $\left\{t_k\right\}$ be sequences with $z_k \to x^*$ and $t_k \downarrow 0$ satisfying $$ \lim_{k\to\infty} \frac{z_k - x^*}{t_k} = d. $$ By Taylor's theorem we can write $$ f(z_k) = f(x^*) + (z_k - x^*)^T \nabla f(x^*) + o(\left\lVert z_k - x^* \right\rVert) = f(x^*) + t_k d^T \nabla f(x^*) + o(t_k). $$

Here's the bit I don't get, or at least cannot work out:

Since $d^T \nabla f(x^*) < 0$, the remainder term is eventually dominated by the first-order term, that is, $$ f(z_k) < f(x^*) + \frac{1}{2} t_k d^T \nabla f(x^*) $$ for all $k$ sufficiently large. (Note the factor $t_k$ in front of $d^T \nabla f(x^*)$: without it the inequality would not be consistent with the expansion above.)

I am not really sure how that inequality is derived.

Best Answer

The remainder $o(t_k)$ can be written as $t_k \varepsilon(t_k)$, where $\varepsilon$ is some function converging to $0$ as $t_k \to 0$.

So in the right-hand side you can factor out $t_k$ from the last two terms and obtain $t_k \left( d^\top \nabla f(x^*) + \varepsilon(t_k) \right)$. On the one hand $d^\top \nabla f(x^*) < 0$, and on the other hand $\varepsilon(t_k) \to 0$, so for $k$ large enough $\varepsilon(t_k)$ is small enough that $d^\top \nabla f(x^*) + \varepsilon(t_k) < 0$. Then $f(z_k) < f(x^*)$, contradicting local optimality of $x^*$, which concludes the proof.
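The behavior of $\varepsilon(t_k)$ can be checked numerically. The following sketch uses a hypothetical 1-D example (not from the source): $f(x) = x^2$, $x^* = 1$, $d = -1$, so $d \cdot f'(x^*) = -2 < 0$; it computes $\varepsilon(t_k) = \big(f(z_k) - f(x^*) - t_k\, d\, f'(x^*)\big)/t_k$ and watches the bracket $d\, f'(x^*) + \varepsilon(t_k)$ stay negative as $\varepsilon(t_k) \to 0$.

```python
# Hypothetical example (not from the source): f(x) = x^2, x* = 1.0,
# descent direction d = -1.0, so d * f'(x*) = -2 < 0.
def f(x):
    return x * x

x_star, grad, d = 1.0, 2.0, -1.0   # f'(1) = 2

for k in range(1, 8):
    t_k = 2.0 ** (-k)
    z_k = x_star + t_k * d
    # eps(t_k) = o(t_k) / t_k, which should tend to 0
    eps = (f(z_k) - f(x_star) - t_k * d * grad) / t_k
    print(k, eps, d * grad + eps < 0)
```

For this $f$ one gets $\varepsilon(t_k) = t_k$ exactly, so the bracket is $-2 + t_k < 0$ for every $k \geq 1$.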

To get exactly the inequality you want, note that for $k$ large enough you can ensure $\varepsilon(t_k) < -\frac{1}{2} d^\top \nabla f(x^*)$, since the latter is a strictly positive number. Then $t_k \left( d^\top \nabla f(x^*) + \varepsilon(t_k) \right) < \frac{1}{2} t_k\, d^\top \nabla f(x^*)$, which is the stated inequality.
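The final inequality can also be verified directly. A minimal sketch, again with the hypothetical example $f(x) = x^2$, $x^* = 1$, $d = -1$ (not from the source): with $z_k = x^* + t_k d$, check that $f(z_k) < f(x^*) + \frac{1}{2} t_k\, d\, f'(x^*)$ once $t_k$ is small.

```python
# Hypothetical example: f(x) = x^2, x* = 1.0, d = -1.0, d * f'(x*) = -2 < 0.
def f(x):
    return x * x

x_star, grad, d = 1.0, 2.0, -1.0   # f'(1) = 2

for k in range(1, 10):
    t_k = 2.0 ** (-k)
    z_k = x_star + t_k * d
    lhs = f(z_k)
    rhs = f(x_star) + 0.5 * t_k * d * grad   # f(x*) + (1/2) t_k d^T grad f(x*)
    print(k, lhs < rhs)   # holds once t_k is small enough
```

Here $f(z_k) = (1 - t_k)^2 = 1 - 2 t_k + t_k^2$ while the bound is $1 - t_k$, so the inequality holds exactly when $t_k < 1$, i.e. for every $k \geq 1$ in this run.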