What objective do gradient descent and steepest descent maximize when solving linear systems with infinite solutions?


When we use various local heuristic optimization methods to solve a linear system $Ax=b$ (e.g. gradient descent, steepest descent in the $L_1$ norm, conjugate gradients), we are implicitly trying to minimize $||Ax-b||^2$ (or possibly some other norm of the residual).

When the system has infinitely many solutions, each of these methods $k$ may arrive at a different satisfying solution $x_k^*$, where $Ax_k^*=b$ for all $k$.
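To make this premise concrete, here is a minimal numerical sketch (assuming NumPy; the system, starting point, and step size are my own arbitrary choices). Steepest descent in the $L_1$ norm picks the direction minimizing $g^T d$ over $||d||_1 \le 1$, which is a signed coordinate direction, so it reduces to greedy coordinate descent. Started from the same $x_0=0$ on a one-equation system, it and plain gradient descent settle on different solutions of $Ax=b$:

```python
import numpy as np

# Underdetermined system: one equation, two unknowns -> a line of solutions.
A = np.array([[1.0, 2.0]])
b = np.array([1.0])
x0 = np.zeros(2)

def grad(x):
    return A.T @ (A @ x - b)   # gradient of f(x) = 0.5 * ||Ax - b||^2

# Plain gradient descent (steepest descent in the l2 norm), fixed step.
x = x0.copy()
for _ in range(200):
    x = x - 0.1 * grad(x)
x_gd = x

# Steepest descent in the l1 norm: move along the single coordinate with the
# largest |gradient| entry (greedy coordinate descent), with exact line search.
x = x0.copy()
for _ in range(200):
    g = grad(x)
    i = np.argmax(np.abs(g))
    x[i] -= g[i] / (A[:, i] @ A[:, i])   # exact minimizer along coordinate i
x_cd = x

print(x_gd)   # ~ [0.2, 0.4] -- the minimum-l2-norm solution
print(x_cd)   # ~ [0.0, 0.5] -- a different solution of Ax = b
```

Both limit points satisfy $Ax=b$ exactly, yet they differ, which is the phenomenon the question is about.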

Presumably, each of these solutions $x_k^*$ maximizes a different objective function, subject to the constraint $Ax=b$. In other words, $x_k^*$ is the unique solution to

$$ \begin{array}{ll} \text{maximize} & \Omega_k \\ \text{subject to} & Ax=b . \end{array} $$

My question is: for each method $k$ (I'm particularly interested in steepest descent with the $L_1$ norm), how do we determine which objective $\Omega_k$ is being maximized? Is there any theory here?

Update: I agree that the maximized objective can depend not just on the method $k$ but also on the initial point $x_0$, so what I'm looking for is perhaps how to determine $\Omega_{k,x_0}$.

Update 2: Since the solution set $\{x : Ax=b\}$ is convex, can we say that:

Gradient Descent: $\Omega_{k,x_0} = ||x-x_0||_2$

Steepest Descent: $\Omega_{k,x_0} = ||x-x_0||_1$

(and that we are minimizing rather than maximizing)?
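For gradient descent, the Update 2 guess can be checked numerically. The reasoning (my own, not from the original post): every GD step moves along $A^T(Ax-b)$, which lies in the row space of $A$, so the iterates never leave the affine set $x_0 + \mathrm{row}(A)$; the unique solution of $Ax=b$ in that set is the Euclidean projection of $x_0$ onto the solution set, i.e. the minimizer of $||x-x_0||_2$. A small sketch (assuming NumPy; the matrix, $b$, and $x_0$ are arbitrary choices):

```python
import numpy as np

# Underdetermined system: 2 equations, 4 unknowns (full row rank).
A = np.array([[1.0, 0.0, 2.0, -1.0],
              [0.0, 1.0, 1.0,  3.0]])
b = np.array([1.0, 2.0])
x0 = np.array([1.0, -1.0, 0.5, 2.0])

# Gradient descent on f(x) = 0.5 * ||Ax - b||^2, started at x0.
x = x0.copy()
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe fixed step size
for _ in range(100_000):
    x = x - step * A.T @ (A @ x - b)

# Closed-form Euclidean projection of x0 onto {x : Ax = b}:
# the minimizer of ||x - x0||_2 subject to Ax = b.
x_proj = x0 + np.linalg.pinv(A) @ (b - A @ x0)

print(np.allclose(A @ x, b))   # GD solved the system
print(np.allclose(x, x_proj))  # and landed on the projection of x0
```

This supports $\Omega_{k,x_0} = ||x-x_0||_2$ (minimized, not maximized) for gradient descent. I would not expect the analogous $||x-x_0||_1$ claim for $\ell_1$ steepest descent to hold in general, since its limit point depends on the order in which coordinates are selected, not only on $x_0$.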