Problem with the gradient descent method for a simple convex optimization


For practice, I am trying to solve the following simple convex optimization problem by applying the gradient descent method to the Lagrangian of the objective function:

Minimize $x^2$

s.t. $x\leq2$

I formulate the Lagrangian as follows:

$\mathcal{L}(x,\lambda)=x^2+\lambda(x-2)$, with $\lambda\geq0$
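For reference, the partial derivatives of this Lagrangian, which are the quantities used in the updates below, are

$\dfrac{\partial\mathcal{L}}{\partial x}=2x+\lambda, \qquad \dfrac{\partial\mathcal{L}}{\partial \lambda}=x-2.$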

I start the gradient method on the Lagrangian with $x=2, \lambda=0.1$, and in every iteration of the gradient descent method I update $x$ and $\lambda$ as follows:

$x=x-t(2x+\lambda)$

$\lambda=\lambda-t(x-2)$; if $\lambda$ becomes negative, I project it back to $\lambda=0$

where $t$ is the step size. I have tried using exact line search and also backtracking line search in every iteration to find $t$ and used it in the equations above. But the iterates do not converge to the solution at all. I am wondering where I am going wrong.
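To make the behavior reproducible, here is a minimal Python sketch of the iteration exactly as written above. A fixed step size $t=0.1$ is an assumption here (the line-search details are not shown), but it illustrates what the updates do:

```python
def run(iterations=500, t=0.1):
    """Primal-dual iteration as described in the question:
      x      <- x - t * (2x + lambda)          (gradient step in x)
      lambda <- lambda - t * (x - 2),          then projected to lambda >= 0
    starting from x = 2, lambda = 0.1, with a fixed step size t (an assumption)."""
    x, lam = 2.0, 0.1
    for _ in range(iterations):
        x_new = x - t * (2 * x + lam)          # descent step on x
        lam = max(0.0, lam - t * (x - 2))      # the question's lambda update, projected
        x = x_new
    return x, lam

x, lam = run()
print(x, lam)
```

Running this, $x$ drifts far below $0$ while $\lambda$ keeps growing, so the iterates move away from the minimizer rather than converging, matching the failure described above.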