I have the equation $y=Kx$ where, for example, $x=\begin{pmatrix} x_1 \\ \vdots \\ x_{50} \end{pmatrix}$, where $x_i=0$ or $1$
$K \in M _{1000,50}(\mathbb R)$ is a given constant matrix, and $$y=\begin{pmatrix} y_1 \\ \vdots \\ y_{1000} \end{pmatrix}$$
My question is: if I were given a value for $y$, how could I find an approximation for the value of $x$?
Choose a measure of closeness, then minimize that measure subject to $x$ being a zero-one (binary) vector.
Common measures of closeness are:

- 2-norm of $y-Kx$, which is (equivalent to) least squares
- 1-norm of $y-Kx$, which is the sum of the $n$ absolute differences of the $y_i$ and $(Kx)_i$
- Infinity norm of $y-Kx$, which is the maximum of the $n$ absolute differences of the $y_i$ and $(Kx)_i$

For instance, this can be done using YALMIP under MATLAB, assuming y is an n by 1 vector of data and p equals one of 2, 1, 'inf'.
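The YALMIP code and the solver call are not reproduced here. As an illustrative stand-in, here is a small NumPy sketch that does the same binary minimization by brute force on a toy-sized instance (the sizes, data, and seed are my own assumptions, not the answer's data; exhaustive search is only feasible because m is tiny):

```python
import numpy as np
from itertools import product

# Hypothetical small instance standing in for the 1000-by-50 problem.
# Brute force enumerates all 2^m binary vectors, so m must stay small.
rng = np.random.default_rng(0)
n, m = 30, 8
K = rng.normal(size=(n, m))
y = K @ rng.integers(0, 2, size=m) + 0.1 * rng.normal(size=n)

def best_x(p):
    """Return the binary x minimizing ||y - K x||_p by exhaustive search."""
    best, best_val = None, np.inf
    for bits in product([0, 1], repeat=m):
        x = np.array(bits)
        val = np.linalg.norm(y - K @ x, ord=p)
        if val < best_val:
            best, best_val = x, val
    return best, best_val

for p in (2, 1, np.inf):
    x_opt, val = best_x(p)
    print(p, x_opt, round(val, 4))
```

For the real 1000-by-50 problem, a mixed-integer solver (as YALMIP dispatches to) is needed, since enumerating $2^{50}$ candidates is out of reach.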
Here is an example with one randomly generated set of data solved with each of p = 2, 1, 'inf'.
Using these instances of K and y, and solving for optimal x, gives optimal x transpose for p = 2, 1, 'inf' respectively as
In this example, the 2- and 1-norm solutions came out the same, but the inf-norm solution is different.

- Using the optimal 2- and 1-norm solution, the inf norm is 3.9828; using the optimal inf-norm solution, the inf norm is 3.4243, which is better.
- Using the optimal inf-norm solution, the 2 norm is 9.9829; using the optimal 2- or 1-norm solution, the 2 norm is 7.9897, which is better.
- Using the optimal inf-norm solution, the 1 norm is 41.8925; using the optimal 2- or 1-norm solution, the 1 norm is 28.5188, which is better.

So as you can see, the choice of norm can affect the optimal solution.
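This cross-comparison can be reproduced qualitatively on any small instance: each norm's minimizer is at least as good as the other's when judged under its own norm, and often strictly better. A self-contained NumPy sketch (toy data of my own choosing, not the answer's instance):

```python
import numpy as np
from itertools import product

# Toy instance; the answer's actual K and y are not reproduced here.
rng = np.random.default_rng(1)
n, m = 30, 8
K = rng.normal(size=(n, m))
y = rng.normal(size=n)

def argmin_norm(p):
    """Binary x minimizing ||y - K x||_p, by exhaustive search over 2^m vectors."""
    return min((np.array(bits) for bits in product([0, 1], repeat=m)),
               key=lambda x: np.linalg.norm(y - K @ x, ord=p))

x2 = argmin_norm(2)
xinf = argmin_norm(np.inf)

# Evaluate each minimizer under both norms.
print("2-norm of residual:  x2 =", round(np.linalg.norm(y - K @ x2, 2), 4),
      " xinf =", round(np.linalg.norm(y - K @ xinf, 2), 4))
print("inf-norm of residual: x2 =", round(np.linalg.norm(y - K @ x2, np.inf), 4),
      " xinf =", round(np.linalg.norm(y - K @ xinf, np.inf), 4))
```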