In a paper I was reading, it was stated that for a minimization problem the value of the objective function may cease to decrease, yet the method is not guaranteed to converge to a stationary point. The objective being minimized was a tensor approximation, and the method used was alternating least squares (ALS). How is that possible?
This depends on the optimization algorithm. If only the gradient of the cost function is taken into account, you can get stuck at a point where $\nabla f = 0$ but which is not a minimum, such as a saddle or inflection point. Consider the situation
$$ f(x) = x^3 $$
Clearly $df(0)/dx = 0$, but $x = 0$ is not a stable point of $f$: it is an inflection point, and $f$ takes arbitrarily smaller values for $x < 0$.
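To make this concrete, here is a small sketch (not from the paper, just plain gradient descent on $f(x) = x^3$) showing how the iterates stall near $x = 0$, where the gradient vanishes, even though that point is not a minimum:

```python
# Gradient descent on f(x) = x^3. Starting to the right of x = 0,
# the iterates creep toward the stationary point and progress stalls,
# even though f is unbounded below for x < 0.

def f(x):
    return x ** 3

def grad(x):
    return 3 * x ** 2

x = 0.5      # initial point, to the right of the stationary point
lr = 0.1     # step size
for _ in range(10_000):
    x -= lr * grad(x)

print(x)               # very close to 0: the updates have nearly stopped
print(grad(x))         # gradient nearly zero, so each step is tiny
print(f(-1.0) < f(x))  # True: far smaller objective values exist
```

The objective values stop decreasing in any noticeable way because the gradient, and hence the step, shrinks to zero near the stationary point, yet the limit is not a minimizer.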