I want to find the solution $x$ with the most zero components to:
$Ax=b$ for $A\in \mathbb{R}^{k \times n}$, $b \in \mathbb{R}^k$ ($k < n$), where $x \in \mathbb{R}^n$ has no additional constraints.
It's known that this problem (minimizing $\|x\|_0$ subject to $Ax=b$) is NP-hard. So I want to find
1) a heuristic, or
2) another NP-hard problem to reduce this problem to (in order to apply the heuristics known for the second problem and then translate the solution back to the original one).
I've been thinking about some intuitive or expected properties of sparse solutions, but I haven't proved any of them yet, so I'd rather edit them in once I'm sure. So far I've identified the subproblem structure: choosing a subset $Z \subseteq \{1,\dots,n\}$ of indices whose variables will be fixed to $0$, and then solving for the remaining components of $x$. Note also that, when looking for a sparse solution, fixing a component of $x$ to zero is equivalent to replacing its associated column of $A$ with zeros (or simply deleting it). I've been thinking this could be an initial step toward finding an appropriate heuristic or reduction (for example, by grouping the subproblems by the cardinality of the chosen subset).
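To illustrate the decomposition above: once a support is fixed, what remains is just a least-squares solve on the corresponding columns of $A$, so on tiny instances a brute-force search over supports of increasing size finds the sparsest solution exactly. A minimal sketch in Python/NumPy (exponential in $n$, for illustration only; the instance at the bottom is made up):

```python
import itertools
import numpy as np

def sparsest_solution_bruteforce(A, b, tol=1e-9):
    """Search supports of increasing size for the sparsest solution of Ax = b.

    Fixing x_i = 0 is equivalent to deleting column i of A, so for each
    candidate support S we only solve the reduced system A[:, S] x_S = b
    by least squares and check the residual.  Exponential in n.
    """
    k, n = A.shape
    for size in range(n + 1):
        for S in itertools.combinations(range(n), size):
            S = list(S)
            if size == 0:
                xS, residual = np.zeros(0), np.linalg.norm(b)
            else:
                xS, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
                residual = np.linalg.norm(A[:, S] @ xS - b)
            if residual <= tol:
                x = np.zeros(n)
                x[S] = xS          # embed the reduced solution, zeros elsewhere
                return x
    return None                    # unreachable when b is in the range of A

# Tiny demo instance: b is a multiple of one column, so a 1-sparse solution exists.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = 2.0 * A[:, 4]
x = sparsest_solution_bruteforce(A, b)
```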
I would appreciate any help.
(This would have been a comment, if I had enough reputation, as this is not my field of expertise.)
This goes in a different direction than your initial thoughts, but have you looked at $L^1$-regularized least squares (also called the "lasso")? It adds an artificial constraint $\| x \|_{1} \leq \beta$ to the least-squares problem. Since the unit ball of the $L^1$ norm is a rhombus (a cross-polytope in higher dimensions), the solution tends to land on a corner of that ball, and corner points are sparse. http://en.wikipedia.org/wiki/Least_squares#Lasso_method
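For what it's worth, here is a small numerical illustration of the sparsity-promoting effect, using the equivalent penalized form $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$ solved by plain iterative soft-thresholding (ISTA). The problem sizes, the planted sparse solution, and $\lambda$ are made-up demo values, not recommendations:

```python
import numpy as np

def lasso_ista(A, b, lam, iters=5000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding.

    The soft-threshold step sets small components exactly to zero, which is
    what makes the L1 penalty produce genuinely sparse iterates.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)        # gradient step on the quadratic part
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink toward 0
    return x

# Underdetermined system (20 equations, 50 unknowns) with a planted 3-sparse solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
b = A @ x_true

x_hat = lasso_ista(A, b, lam=0.05)
nnz = np.count_nonzero(np.abs(x_hat) > 1e-6)   # far fewer than 50 nonzeros
```

Compare with plain least squares (`np.linalg.lstsq`), whose minimum-norm solution to the same system is generically dense in all 50 components.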