Convergence of backtracking and gradient descent.


I am thinking a bit about the following exercise:

Let $f(x) = x_1^2 + x_2^2$ with dom $f = \{ (x_1,x_2) : x_1 > 1 \}$. The optimal value of this problem is $p^* = 1$, but it is never attained, since $(1,0) \notin$ dom $f$.

Let $S = \{ (x_1,x_2) : f(x_1,x_2) \leq 8 \}$ be the sublevel set of the starting point $x^0 = (2,2)$ (note $f(x^0) = 8$), which is not closed, and run gradient descent with backtracking line search from $x^0$.

The question is whether gradient descent will converge to the optimal value $p^*$.

Attempt at a solution:

Let $S_\epsilon = S \cap \{(x_1,x_2) : x_1 \geq 1+\epsilon \}$, which is a closed convex set containing $x^0$ (for $\epsilon \leq 1$). Optimizing over $S_\epsilon$ with gradient descent and backtracking line search gives the optimal solution $(1+\epsilon, 0)$ with value $p^*_\epsilon = (1+\epsilon)^2$. Since $\epsilon > 0$ is arbitrary, $p^*_\epsilon$ can be made arbitrarily close to $p^*$, so the optimization converges.
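As a numerical sanity check, here is a sketch of the iteration itself (not part of the exercise; the backtracking parameters $\alpha = 0.3$, $\beta = 0.5$, the iteration count, and the convention of treating points outside dom $f$ as $f = +\infty$ are my own assumptions):

```python
import numpy as np

def f(x):
    # f(x) = x1^2 + x2^2 on the open domain {x : x1 > 1};
    # points outside the domain are treated as +inf, so the
    # backtracking test below automatically rejects them.
    return x[0]**2 + x[1]**2 if x[0] > 1 else np.inf

def grad_f(x):
    return np.array([2 * x[0], 2 * x[1]])

def backtracking_gd(x0, alpha=0.3, beta=0.5, iters=50):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        t = 1.0
        # Shrink t until the Armijo (sufficient decrease) condition
        # holds; infeasible trial points fail it since f = +inf there.
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

x_final = backtracking_gd([2.0, 2.0])
print(x_final, f(x_final))
```

Printing the final iterate and its objective value makes it easy to compare against $p^* = 1$ and judge where the iterates actually end up.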

Is this correct or have I missed something? :)