I am working through Theorem 3.24 in *Nonlinear Optimization* by Andrzej Ruszczynski, which includes a step I do not understand:
Let $x \in \mathbb{R}^n$, let $(x_k)$ be a sequence in $\mathbb{R}^n$ with $x_k \to x$ as $k \to \infty$, and let $f: \mathbb{R}^n \to \mathbb{R}$ be a differentiable function.
Then, for a certain sequence $(\alpha_k)$: $$ f(x_k) - f(x) = \nabla f(x) \cdot (x_k - x) + \alpha_k, $$ where $\frac{\alpha_k}{\lVert x_k - x \rVert} \to 0$ as $k \to \infty$.
It is not clear to me why this is true, even though I can intuitively see why it might be. Is it perhaps some multivariate version of the Taylor expansion involving the directional derivative?
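For reference, the definition of differentiability I am comparing against is the standard one (I assume this is also the one used in the book, but I may be misreading it): $f$ is differentiable at $x$ if there exists a vector $\nabla f(x) \in \mathbb{R}^n$ such that $$ \lim_{h \to 0} \frac{f(x + h) - f(x) - \nabla f(x) \cdot h}{\lVert h \rVert} = 0. $$ Setting $h_k = x_k - x$ (so that $h_k \to 0$) and $\alpha_k = f(x_k) - f(x) - \nabla f(x) \cdot h_k$ looks like it recovers exactly the claimed identity, but I am not certain whether passing from the limit over all $h \to 0$ to the particular sequence $(h_k)$ is the whole argument, or whether something more is needed.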