Newton algorithm not finding minimum of $100(y-x^2)^2 + (1-x^2)$


Edit: Solved. I had simply forgotten to update the inverse of the Hessian at each iteration. I have updated the code below in case someone needs it.


I am implementing some optimization algorithms (steepest descent, Newton's method, CGM, etc.). They work fine for every function I have tried except the one in the title:

$$100(y-x^2)^2 + (1-x^2)$$

However, Wolfram Alpha says this function has no global minimum, whereas in the solution sheet I have, the minimum exists and the methods are supposed to find it (starting at $(-1.2, 1)$, Newton's method is supposed to find it in 5 iterations).
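For what it's worth, Wolfram Alpha is right about the function exactly as written in the title: along the parabola $y = x^2$ the first term vanishes and the expression reduces to $1 - x^2$, which goes to $-\infty$. A quick numerical check (the function name `f_title` is just mine for illustration):

```python
import numpy as np

def f_title(x, y):
    # The function exactly as written in the title: 100*(y - x^2)^2 + (1 - x^2)
    return 100 * (y - x**2)**2 + (1 - x**2)

# Along y = x^2 the first term vanishes, leaving 1 - x^2, unbounded below.
for x in [1.0, 10.0, 100.0]:
    print(f_title(x, x**2))  # 0.0, -99.0, -9999.0
```

So the discrepancy with the solution sheet suggests the intended function is the classic Rosenbrock function $100(y-x^2)^2 + (1-x)^2$, which does have a minimum at $(1,1)$.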

Can you shed some light on this? Thank you!

Edit: My Newton's method implementation (which works for other functions) is this:

import numpy as np

epsilon = 1e-8  # optimality tolerance
itmax = 100     # maximum number of iterations

def newton_method(x, f_x, gradient_f_x, hessian_f_x):
    try:
        inverse = np.linalg.inv(hessian_f_x(x))
    except np.linalg.LinAlgError:
        print('Hessian is non-invertible')
        return None
    iters = 0
    f_values = []
    gradient_f = gradient_f_x(x)
    f = f_x(x)
    f_values.append(f)
    # condition to achieve optimality: scaled gradient norm below tolerance
    while (np.sqrt(gradient_f.dot(gradient_f)) / (1 + abs(f))) >= epsilon and iters <= itmax:
        iters += 1
        p = -np.dot(inverse, gradient_f)         # search direction
        x = x + p                                # update of solution point
        gradient_f = gradient_f_x(x)
        inverse = np.linalg.inv(hessian_f_x(x))  # this was missing
        f = f_x(x)
        f_values.append(f)
    return (x, iters, f_values)
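To see the fixed iteration in action, here is a self-contained sketch, assuming the solution sheet's function is the Rosenbrock function $100(y-x^2)^2 + (1-x)^2$ with minimum at $(1,1)$. It hand-codes the gradient and Hessian and, rather than forming the inverse, solves the Newton system $H p = -g$ directly (`np.linalg.solve`), which is the more stable way to get the same step:

```python
import numpy as np

def f(v):
    x, y = v
    # Rosenbrock function (assumed intended form): 100*(y - x^2)^2 + (1 - x)^2
    return 100 * (y - x**2)**2 + (1 - x)**2

def grad(v):
    x, y = v
    return np.array([-400 * x * (y - x**2) - 2 * (1 - x),
                     200 * (y - x**2)])

def hess(v):
    x, y = v
    return np.array([[1200 * x**2 - 400 * y + 2, -400 * x],
                     [-400 * x,                   200     ]])

# Pure Newton iteration from the textbook starting point (-1.2, 1).
x = np.array([-1.2, 1.0])
for i in range(50):
    g = grad(x)
    if np.sqrt(g.dot(g)) < 1e-8:
        break
    x = x + np.linalg.solve(hess(x), -g)  # Newton step: solve H p = -g

print(x)  # should end up at (approximately) (1, 1)
```

With the Hessian re-evaluated at every iterate, convergence from $(-1.2, 1)$ takes only a handful of iterations; with a frozen inverse, as in the original code, the iterates stall, which matches the bug described in the edit above.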