I have been looking at some code for Newton's method. My sources (Nocedal and Wright) indicate that Newton's method works very well in many cases, but that the method can often fail because the Hessian can lose positive semidefiniteness.
I am trying to understand graphically what happens when a Hessian loses positive semidefiniteness in an optimization problem. I understand the definition of positive semidefiniteness (non-negative eigenvalues), but I am not sure what the mechanism is by which the Hessian loses this property, or what the corresponding effect on the optimization trajectory is.
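For concreteness, here is a minimal 1-D sketch of what I suspect goes wrong (my own toy example, not from Nocedal and Wright). The function f(x) = x⁴ − 2x² has minima at x = ±1 and a local maximum at x = 0, and its second derivative f''(x) = 12x² − 4 is negative for |x| < 1/√3. Starting inside that region, the unsafeguarded Newton step x ← x − f'(x)/f''(x) divides by a negative "Hessian", so the step reverses the descent direction:

```python
def f(x):
    return x**4 - 2*x**2       # minima at x = +/-1, local max at x = 0

def grad(x):
    return 4*x**3 - 4*x        # f'(x)

def hess(x):
    return 12*x**2 - 4         # f''(x); negative for |x| < 1/sqrt(3)

x = 0.3                        # start where the "Hessian" is negative
for _ in range(8):
    # Pure Newton step with no positive-definiteness safeguard:
    # solving f''(x) * p = -f'(x) with f'' < 0 gives an ascent step,
    # because the quadratic model is concave and its stationary
    # point is a maximum, not a minimum.
    x = x - grad(x) / hess(x)

print(x)  # converges to 0, the local MAXIMUM, not to +/-1
```

In other words, Newton's method finds a stationary point of the local quadratic model; when an eigenvalue of the Hessian goes negative, that stationary point is a saddle or maximum of the model, so the iterate is attracted to the wrong kind of critical point. Is this the mechanism, and does it generalize to the multivariate case?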