I have a linear programming problem of the form $\min\, c^T x$ s.t. $Ax = b$, $x \geq 0$, which I want to solve using projected gradient descent.
I derived the operators $P = I - A^T (A A^T)^{-1} A$ and $Q = A^T (A A^T)^{-1} b$, so that $Pz + Q$ projects a point $z$ onto the affine set $\{x : Ax = b\}$.
Given an initial guess $x_0$ and step size $\alpha$, my algorithm is as follows:
for $i = 0, 1, 2, \ldots$:
$\quad x_{i+1} = P(x_i - \alpha c) + Q$
$\quad x_{i+1} = \max(x_{i+1}, 0)$ (componentwise)
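For concreteness, here is a minimal NumPy sketch of this iteration. The data $A$, $b$, $c$, the step size, and the iteration count are made up purely for illustration:

```python
import numpy as np

# Made-up example LP: minimize c @ x s.t. x1 + x2 + x3 = 1, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

# Projection onto the affine set {x : Ax = b} is z -> P @ z + Q.
AAT_inv = np.linalg.inv(A @ A.T)
P = np.eye(A.shape[1]) - A.T @ AAT_inv @ A
Q = A.T @ AAT_inv @ b

x = np.full(3, 1.0 / 3.0)          # feasible initial guess
alpha = 0.1                        # step size (arbitrary choice)
for _ in range(1000):
    x = P @ (x - alpha * c) + Q    # gradient step, projected onto Ax = b
    x = np.maximum(x, 0.0)         # clip to the nonnegative orthant

print(x, c @ x)
```

Note the clipping step can move the iterate off the affine set again, so the two projections are only applied alternately, never jointly.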
In most cases the algorithm converges to the optimal value, but other times it gets stuck at a suboptimal point, as if in a local minimum.
What is the justification for this? Thanks in advance!