Let's say we have a function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ for which an optimum $\mathbf{x_*}$ is to be found using the Newton method.
Newton's method: $\mathbf{x}_{t+1} = \mathbf{x}_t - \mathbf{H}_f(\mathbf{x}_t)^{-1}\nabla f(\mathbf{x}_t)$.
Do we have to calculate the inverse of the Hessian matrix, $\mathbf{H}_f(\mathbf{x}_t)^{-1}$, only once and solve directly for the optimum, or does it have to be recalculated in each iteration?
What about the linear system $\mathbf{H}_f(\mathbf{x}_0)\,(\mathbf{x}_* - \mathbf{x}_0) = -\nabla f(\mathbf{x}_0)$? Can it be solved only once to obtain the optimum directly, or does it also have to be solved in each iteration?
If $\mathbf{H}_f(\mathbf{x}_{t+1}) \neq \mathbf{H}_f(\mathbf{x}_t)$, then you must recalculate the Hessian. If the difference is very small, you might get away with reusing it, although this is hazardous: you would be using curvature information about the function at one point to inform your search at a different point.
In your last paragraph, it sounds as if you are taking the Hessian to be constant everywhere. That holds only when $f$ is quadratic, and in that case you can indeed reuse it: a single linear solve gives the optimum, because one Newton step lands exactly on $\mathbf{x}_*$.
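To illustrate the general (non-quadratic) case, here is a minimal sketch of the full Newton iteration in NumPy, using the Rosenbrock function as an assumed example; its Hessian genuinely changes from point to point, so both the gradient and the Hessian are recomputed at every iterate, and the step is obtained by solving a linear system rather than forming the inverse explicitly:

```python
import numpy as np

# Rosenbrock function f(x, y) = (1 - x)^2 + 100 (y - x^2)^2,
# chosen here as an assumed non-quadratic example.

def grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

def hess(p):
    x, y = p
    return np.array([[2 - 400 * (y - 3 * x**2), -400 * x],
                     [-400 * x, 200.0]])

x = np.array([-1.2, 1.0])          # standard starting point
for _ in range(100):
    # Solve H_f(x_t) s = -grad f(x_t); note the minus sign in the update.
    s = np.linalg.solve(hess(x), -grad(x))
    x = x + s
    if np.linalg.norm(s) < 1e-12:  # step is negligible: converged
        break

print(x)  # converges to the minimizer, approximately [1. 1.]
```

If the Hessian were frozen at $\mathbf{x}_0$ here (i.e. `hess(x)` replaced by a fixed matrix), the iteration would at best converge slowly, since the curvature at the start point is a poor model of the curvature near the minimizer.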