Solve a linear system with the element-wise constraint $|a|=1$


I have the following equation

$$a=a_0+Mx,$$ where $a$, $a_0$, and $x$ are $N\times 1$ complex vectors, and $M$ is a (non-invertible) $N\times N$ complex matrix.

For a given $a_0$ and $M$, I want to find a vector $x$ such that the absolute value of each element of $a$ is equal to 1.

Is there a nice algebraic way to find such a solution? Or is brute force the only way?

This comes from a wave propagation problem, where some output $b$ is related to some input $a$ via a linear relation $b=ma$, with $m$ a wide matrix (it reduces the dimension). I want to find an input $a$ that gives me some target output $b_0$. The solutions are $a=m^{+}b_0+(I-m^{+}m)x$ for any $x$, where $m^{+}$ is the pseudoinverse of $m$. However, in my application I have no control over the amplitude of the input, only over its phase. Therefore I want a solution $a$ whose elements all have absolute value 1. I stated the problem above in this simpler form, with $a_0=m^{+}b_0$ and $M=I-m^{+}m$.
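The pseudoinverse construction above can be sketched numerically. This is a minimal example (the $2\times 4$ matrix $m$ and the target $b_0$ are made-up placeholders, not from any real propagation problem):

```python
import numpy as np

# Hypothetical small instance: m is a "wide" 2x4 complex matrix, b0 a target output.
rng = np.random.default_rng(0)
m = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
b0 = rng.standard_normal(2) + 1j * rng.standard_normal(2)

m_pinv = np.linalg.pinv(m)        # Moore-Penrose pseudoinverse m^+
a0 = m_pinv @ b0                  # particular solution
M = np.eye(4) - m_pinv @ m        # orthogonal projector onto the null space of m

# Any a = a0 + M x satisfies m a = b0 (up to floating-point error):
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
a = a0 + M @ x
print(np.allclose(m @ a, b0))     # True
```

Note that $M$ here is an orthogonal projector ($M^2 = M$), which is what makes the parametrization of the solution set exhaustive.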

Best answer:

This can be expressed as an instance of quadratically constrained quadratic programming, so one approach would be to try to solve it with an off-the-shelf QCQP solver.

In particular, let $x=t+iu$ and $a=b+ic$, where $t,u,b,c$ are $N \times 1$ real vectors. Then you can express $b,c$ as a linear function of $t,u$. Now your equation amounts to the requirement that $b_j^2 + c_j^2 = 1$ for all $j$. These are quadratic constraints on the unknowns $t,u$. Therefore, you can use an off-the-shelf QCQP solver to search for a feasible solution to these constraints. (For instance, define an objective function $\Phi$ that is always zero, and ask the QCQP solver to minimize $\Phi$ subject to these constraints.)
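This feasibility formulation can be sketched with a general-purpose nonlinear solver (not a dedicated QCQP solver, but the same idea: constant objective, quadratic equality constraints). The instance below is hypothetical and is built so that a unit-modulus solution is known to exist:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance, built so a feasible point exists:
# pick a unit-modulus target a_star and define a0, M from b0 = m @ a_star.
rng = np.random.default_rng(1)
N = 4
m = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
a_star = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # |a_star_j| = 1 for all j
mp = np.linalg.pinv(m)
a0 = mp @ (m @ a_star)
M = np.eye(N) - mp @ m

def residual(z):
    """Constraint residuals b_j^2 + c_j^2 - 1, with z = [t; u] and x = t + i u."""
    a = a0 + M @ (z[:N] + 1j * z[N:])
    return np.abs(a) ** 2 - 1.0

# Feasibility problem: constant objective, quadratic equality constraints.
res = minimize(lambda z: 0.0, np.zeros(2 * N), method="SLSQP",
               constraints=[{"type": "eq", "fun": residual}])
print(np.max(np.abs(residual(res.x))))  # small if a unit-modulus solution was found
```

SLSQP is a local method, so success depends on the starting point; a QCQP solver with global guarantees would be the more faithful (and more expensive) realization of the approach described above.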

Of course, QCQP is NP-hard, so solvers have worst-case running time that is superpolynomial (presumably exponential). There is probably hope of success only if $N$ is not too large.
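For larger $N$, a heuristic without guarantees may still work in practice. One natural choice, in the spirit of alternating-projection methods from phase retrieval, is to alternate between projecting onto the affine solution set $\{a_0+Mx\}$ and normalizing each entry to unit modulus. A sketch, on a made-up instance constructed so that a solution exists:

```python
import numpy as np

# Alternating-projection heuristic (no convergence guarantee).
rng = np.random.default_rng(2)
N = 4
m = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
a_star = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # known unit-modulus solution
mp = np.linalg.pinv(m)
a0 = mp @ (m @ a_star)
P = np.eye(N) - mp @ m            # orthogonal projector onto null(m)

a = a0 + P @ (rng.standard_normal(N) + 1j * rng.standard_normal(N))
for _ in range(500):
    a = a / np.abs(a)             # project onto the torus {|a_j| = 1}
    a = a0 + P @ (a - a0)         # project back onto the affine set {a0 + M x}
print(np.max(np.abs(np.abs(a) - 1)))  # near 0 if the heuristic converged
```

Since both sets are nonconvex in combination (the torus is nonconvex), this can stall at a fixed point that satisfies neither constraint exactly; restarting from several random initializations is the usual workaround.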


I believe your problem is NP-hard, even when all of $a_0,M,x$ are real-valued. Therefore, you shouldn't expect a solution to the general problem that is efficient (e.g., running time polynomial in $N$).

The proof is by reduction from Maximum Independent Set. Suppose we want to test whether a graph $G$ has an independent set of size $k$. Introduce 0-or-1 variables $x_v$, one for each vertex $v$ of $G$. We impose the constraint that $x_v$ is either 0 or 1 by adding the constraint $|2x_v-1|=1$. For each edge $(u,v)$ of $G$, we impose the constraint $x_u+x_v\le 1$, which can be expressed as $|2x_u+2x_v-1|=1$. Finally, we impose the constraint $\sum_v x_v = k$ by adding $|C \sum_v x_v - C k + 1|=1$, for some constant $C$ chosen larger than the number of vertices of $G$.

Each of these $|\cdots|=1$ constraints can be expressed in your setting with a single row of $a_0,M$. This gives $N+N_e+1$ constraints in $N$ variables, where $N$ is now the number of vertices and $N_e$ the number of edges, i.e., an $(N+N_e+1) \times N$ matrix. To make the matrix square, pad $x$ with $N_e+1$ additional dummy variables that are ignored, and fill the corresponding columns of $M$ with zeros. In this way, we obtain a system of equations $a=a_0+Mx$, $|a|=1$ that has a solution iff $G$ has an independent set of size $k$.
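The reduction is easy to check on a tiny example. Below it is instantiated for the 3-vertex path graph with edges $(0,1),(1,2)$ and $k=2$; the indicator vector of the independent set $\{0,2\}$ satisfies every $|\cdot|=1$ constraint, while a non-independent set violates an edge row:

```python
import numpy as np

# Reduction for the path graph 0-1-2, testing for an independent set of size k = 2.
nv, edges, k = 3, [(0, 1), (1, 2)], 2
C = nv + 1                          # constant larger than the number of vertices
rows_M, rows_a0 = [], []

for v in range(nv):                 # |2 x_v - 1| = 1  forces x_v in {0, 1}
    row = np.zeros(nv); row[v] = 2
    rows_M.append(row); rows_a0.append(-1.0)
for (u, v) in edges:                # |2 x_u + 2 x_v - 1| = 1  forces x_u + x_v <= 1
    row = np.zeros(nv); row[u] = 2; row[v] = 2
    rows_M.append(row); rows_a0.append(-1.0)
row = np.full(nv, float(C))         # |C sum(x) - C k + 1| = 1  forces sum(x) = k
rows_M.append(row); rows_a0.append(1.0 - C * k)

M = np.vstack(rows_M)
a0 = np.array(rows_a0)

x = np.array([1.0, 0.0, 1.0])       # indicator of the independent set {0, 2}
print(np.abs(a0 + M @ x))           # [1. 1. 1. 1. 1. 1.]
```

(The dummy-variable padding that makes $M$ square is omitted here, since zero columns do not affect the check.)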

Of course, the maximum independent set problem is NP-hard, so it follows that your problem is NP-hard, too.

Related: https://cs.stackexchange.com/q/82398/755, https://cs.stackexchange.com/a/47110/755, https://cs.stackexchange.com/q/19378/755