Recently I've been curious about the following question. Suppose that $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is a nonlinear, twice-differentiable convex function that we seek to minimize by Newton's method. That is, starting from an initial guess $x^{(0)}\in\mathbb{R}^{n}$, we generate a sequence $\lbrace x^{(k)}\rbrace_{k=1}^{\infty}$ by iterating, for $k=0,1,2,\dots$,
\begin{align} H^{(k)}\hat{x}^{(k)} &= -g^{(k)} \\ x^{(k+1)} &= x^{(k)}+\hat{x}^{(k)} \end{align}
where $H^{(k)}$ and $g^{(k)}$ are, respectively, the Hessian and the gradient of $f$ evaluated at $x^{(k)}$.
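For concreteness, here is a minimal NumPy sketch of the iteration above (the function name `newton`, the stopping tolerance, and the example objective $f(x)=\sum_i(e^{x_i}-x_i)$ are my own illustrative choices, not part of the question):

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-12, max_iter=50):
    """Pure (undamped) Newton iteration: solve H dx = -g, then x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        dx = np.linalg.solve(hess(x), -g)  # H^{(k)} \hat{x}^{(k)} = -g^{(k)}
        x = x + dx                          # x^{(k+1)} = x^{(k)} + \hat{x}^{(k)}
    return x

# Example: f(x) = sum_i (exp(x_i) - x_i), smooth and convex, minimized at x = 0.
x_star = newton(lambda x: np.exp(x) - 1.0,     # gradient
                lambda x: np.diag(np.exp(x)),  # Hessian
                np.array([1.0, -0.5, 2.0]))
```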
My question is: in a computational setting, where we are subject to the effects of finite-precision arithmetic, can we say anything about the limit of how small we can make $\|g^{(k)}\|_{2}$, in terms of the condition number of $H^{(k)}$?
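To make the question concrete, here is one way I have probed this empirically (a sketch only, not an answer: the test function $f(x)=\sum_j(\cosh(a_j^\top x)-1)$ and the construction of $A$ with prescribed conditioning via an SVD are my own illustrative choices). The script records the smallest gradient norm the iteration ever attains for several Hessian condition numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

def smallest_gradient_norm(kappa, iters=100):
    # Build A with cond(A) = sqrt(kappa), so the Hessian A^T diag(cosh(Ax)) A
    # of f(x) = sum_j (cosh(a_j^T x) - 1) has condition number ~ kappa at the
    # minimizer x* = 0.  (Illustrative construction via a prescribed SVD.)
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.logspace(-0.5 * np.log10(kappa), 0, n)
    A = U @ np.diag(s) @ V.T
    x = rng.standard_normal(n)
    best = np.inf
    for _ in range(iters):
        z = A @ x
        g = A.T @ np.sinh(z)                 # gradient g^{(k)}
        H = A.T @ (np.cosh(z)[:, None] * A)  # Hessian H^{(k)}
        best = min(best, np.linalg.norm(g))
        x = x + np.linalg.solve(H, -g)       # Newton step
    return best

for kappa in [1e2, 1e6, 1e10]:
    print(f"cond(H) ~ {kappa:.0e}: smallest ||g||_2 reached = "
          f"{smallest_gradient_norm(kappa):.3e}")
```

Running this and comparing the attained floor on $\|g^{(k)}\|_2$ across the three condition numbers is what prompted the question; I would like a theoretical explanation of what floor to expect.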