I'm implementing the algorithm in the following paper:
"Compressive sensing for wideband cognitive radios"
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04218361
However, I've run into a problem with my solver for the linear program.
I need to solve a linear program in which I minimise the $L_1$ norm of a vector subject to the constraint that the vector, when multiplied by a known matrix, equals a known set of measurements, i.e. $$ \min_x \lVert x\rVert_{1} \quad \text{s.t.} \quad Ax = y. $$
The difficulty I'm facing is that $A$ is not necessarily a square matrix, so solvers such as l1-magic, which uses a primal-dual algorithm, won't work.
Can anyone suggest an algorithm/solver that will solve this type of convex optimisation problem?
Compressive sensing is all about non-square matrices: the whole point is that we work in the undersampled regime, where we have fewer measurements than the ambient dimension.
I thought l1-magic works just fine. Did you try it? If you want a generic convex optimization toolbox, you might also consider CVX: http://cvxr.com/cvx/.
Also related to compressed sensing: depending on what you want, there is a jungle of solvers here (definitely far more than you need): https://sites.google.com/site/igorcarron2/cs#reconstruction
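For completeness: the problem in the question is itself an ordinary linear program, so any generic LP solver handles a non-square $A$ directly. Below is a minimal sketch (not from the thread) using SciPy's `linprog` and the standard split $x = u - v$ with $u, v \ge 0$; the demo matrix and sparse vector are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y by splitting x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)             # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])      # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:n], res.x[n:]
    return u - v

# invented demo: 10 measurements of a 30-dimensional, 3-sparse vector
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))
x0 = np.zeros(30)
x0[[2, 11, 25]] = [1.5, -2.0, 0.7]
x = basis_pursuit(A, A @ x0)
```

Since $x_0$ is feasible, the recovered $x$ satisfies $Ax = Ax_0$ with $\lVert x\rVert_1 \le \lVert x_0\rVert_1$.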
If you use CVX: this minimizes the $\ell^1$ norm subject to $Az=b$, and $z$ will hold the minimizer.