Minimizing $\ell^\infty$ norm of complex vector


I have an $n$-dimensional complex vector space, and I want to minimize the $\ell^\infty$ norm of a point that is constrained to an $m$-dimensional affine subspace. That is,

Given $\mathbf{z} \in \mathbb{C}^n$ and $\mathbf{M}\in\mathbb{C}^{n\times m}$, find $\underset{\mathbf x\in\mathbb{C}^m}{\arg\min} \|\mathbf{z}+\mathbf{M}\mathbf{x}\|_\infty$.

The columns of $\mathbf{M}$ are orthonormal with $\mathbf{M}^*\mathbf{M}=\mathbf{I}$, and $\mathbf{M}^*\mathbf{z} = \mathbf{0}$. In my application, the columns of $\mathbf{M}$ are taken from an $n\times n$ Fourier matrix in such a way that $\mathbf{M}\mathbf{M}^*$ is real symmetric.
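For concreteness, here is one way such an $\mathbf{M}$ can arise (a sketch; the specific column indices are my own illustrative choice, not necessarily the ones in the application): selecting unitary-DFT columns in conjugate pairs $\{k, n-k\}$ gives orthonormal columns and makes $\mathbf{M}\mathbf{M}^*$ real symmetric.

```python
import numpy as np

n, m = 16, 4
F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # unitary n x n DFT matrix
cols = [1, n - 1, 3, n - 3]               # conjugate pairs {k, n-k} (illustrative choice)
M = F[:, cols]

assert np.allclose(M.conj().T @ M, np.eye(m))   # orthonormal columns
P = M @ M.conj().T
assert np.allclose(P.imag, 0)                   # M M* is real ...
assert np.allclose(P, P.T)                      # ... and symmetric

# build a z with M* z = 0 by projecting a random vector off range(M)
rng = np.random.default_rng(0)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z -= M @ (M.conj().T @ z)
assert np.allclose(M.conj().T @ z, 0)
```

The pairing works because the rank-one terms $\mathbf{f}_k\mathbf{f}_k^*$ and $\mathbf{f}_{n-k}\mathbf{f}_{n-k}^*$ sum to a real cosine kernel.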

One approach is to require $\left|(\mathbf{z}+\mathbf{M}\mathbf{x})_i\right| \le \lambda$ and minimize the upper bound $\lambda$. In the case where all of the variables are real, this gives a linear program. In the problem as posed, we can square both sides and rename the upper-bound variable to get $\min\{\Lambda\} \text{ s.t. }\forall i, \left|(\mathbf{z}+\mathbf{M}\mathbf{x})_i\right|^2 \le \lambda^2 \triangleq \Lambda$, which is a quadratically constrained program with a linear objective. Taking the Lagrange dual leads (if I am not mistaken) to
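In the all-real case the epigraph formulation really is a plain LP and can be handed to an off-the-shelf solver directly; a minimal sketch with SciPy's `linprog` (random real data standing in for the application's $\mathbf{M}$ and $\mathbf{z}$):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 8, 3
M, _ = np.linalg.qr(rng.standard_normal((n, m)))   # orthonormal columns
z = rng.standard_normal(n)
z -= M @ (M.T @ z)                                 # enforce M^T z = 0

# variables v = [x, lam]; minimize lam subject to -lam <= (z + M x)_i <= lam,
# i.e.  M x - lam 1 <= -z  and  -M x - lam 1 <= z
c = np.r_[np.zeros(m), 1.0]
A_ub = np.block([[M, -np.ones((n, 1))],
                 [-M, -np.ones((n, 1))]])
b_ub = np.r_[-z, z]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)])
x_opt, lam_opt = res.x[:m], res.x[m]

assert res.success
assert lam_opt <= np.max(np.abs(z)) + 1e-9   # x = 0 is feasible with lam = ||z||_inf
assert np.max(np.abs(z + M @ x_opt)) <= lam_opt + 1e-7
```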

$\max_u \inf_{\Lambda,\mathbf{x}} \left( \Lambda + \sum_{j=1}^n u_j (\left|(\mathbf{z} + \mathbf{M} \mathbf{x})_j\right|^2 - \Lambda) \right)$ s.t. $u_i \ge 0$,

which evaluates to

$\max_u \sum_{j=1}^n u_j \left|(\mathbf{z} - \mathbf{M} \mathbf{M}^* \mathbf{U}^+ \mathbf{M} \mathbf{M}^* \mathbf{U} \mathbf{z})_j\right|^2$ s.t. $u_i \ge 0, \sum_{i=1}^n u_i = 1$, where $U_{ij}=\delta_{ij} u_i$.

The pseudo-inverse came from solving $-\mathbf{M}^* \mathbf{U} \mathbf{z} = \mathbf{M}^* \mathbf{U} \mathbf{M} \mathbf{x}$ for $\mathbf{x}$ (this equation arises when taking the infimum in the Lagrange dual function). I think the idea is that $n-m$ of the $u_i$ will be zero, so that the linear system becomes consistent. That makes sense from a complementary-slackness perspective.
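That inner solve is just weighted least squares, so it can be sanity-checked numerically. A sketch with strictly positive $u$ (where $\mathbf{M}^*\mathbf{U}\mathbf{M}$ is invertible and the pseudo-inverse reduces to an ordinary inverse), verifying the stationarity condition $\mathbf{M}^*\mathbf{U}(\mathbf{z}+\mathbf{M}\mathbf{x}^\star)=\mathbf{0}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
M, _ = np.linalg.qr(A)                       # orthonormal columns
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z -= M @ (M.conj().T @ z)                    # M* z = 0

u = rng.random(n) + 0.1                      # strictly positive multipliers
U = np.diag(u)

# x* = -(M* U M)^+ (M* U z), from setting the gradient of the inner problem to zero
x_star = -np.linalg.pinv(M.conj().T @ U @ M) @ (M.conj().T @ U @ z)

# stationarity of the weighted least-squares objective (z + M x)* U (z + M x)
assert np.allclose(M.conj().T @ U @ (z + M @ x_star), 0)
```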

I've tried plowing through the remaining maximization. However, I didn't gain any insight, and I don't see how to deal with the non-differentiability of the pseudo-inverse around $u_i=0$. Is this the point where I should just throw gradient ascent at the dual problem and hope for the best? Or is there a better way? Thanks for any pointers.

1 Answer

Thanks for showing interest; too bad nobody has a nice answer! My stop-gap solution was to do gradient descent on the smooth function $\sum_i |(\mathbf{z}+\mathbf{Mx})_i|^p$ for $p=20$. The runtime of this method is not what I would prefer, but it works.
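A sketch of that smoothing idea (my own re-implementation; I use L-BFGS on the stacked real and imaginary parts rather than plain gradient descent, which tends to converge faster on this surrogate):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, m, p = 16, 4, 20
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
M, _ = np.linalg.qr(A)                       # orthonormal columns
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z -= M @ (M.conj().T @ z)                    # M* z = 0

def fg(v):
    """Smoothed objective sum_i |(z + M x)_i|^p and its gradient, x packed as [Re, Im]."""
    x = v[:m] + 1j * v[m:]
    r = z + M @ x
    a = np.abs(r)
    gc = p * (M.conj().T @ (a ** (p - 2) * r))   # complex (Wirtinger-style) gradient
    return np.sum(a ** p), np.r_[gc.real, gc.imag]

res = minimize(fg, np.zeros(2 * m), jac=True, method='L-BFGS-B')
x_smooth = res.x[:m] + 1j * res.x[m:]

assert res.fun <= np.sum(np.abs(z) ** p) + 1e-9   # no worse than x = 0
```

As $p$ grows the surrogate approaches the true $\ell^\infty$ objective, at the cost of worsening conditioning.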

I spoke briefly with Prof. Demanet at MIT, and got a pointer towards proximal iteration techniques for non-smooth optimization. I haven't figured out how to compute the proximal mapping of the constrained complex $\ell^\infty$ norm, though the unconstrained version (i.e. without the affine constraint, where the minimizer can even be exactly zero) has a very elegant mapping.
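For reference, that unconstrained proximal map can be written via the Moreau decomposition, $\operatorname{prox}_{t\|\cdot\|_\infty}(v) = v - t\,\Pi_{B_1}(v/t)$, where $\Pi_{B_1}$ projects onto the $\ell^1$ ball; in the complex case the projection shrinks magnitudes while preserving phases. A sketch:

```python
import numpy as np

def proj_l1_ball(v, radius=1.0):
    """Project complex v onto {w : sum_i |w_i| <= radius}, preserving phases."""
    a = np.abs(v)
    if a.sum() <= radius:
        return v.copy()
    # soft-threshold the magnitudes (Duchi et al.-style l1-ball projection)
    s = np.sort(a)[::-1]
    cums = np.cumsum(s)
    j = np.arange(1, len(a) + 1)
    k = np.nonzero(s - (cums - radius) / j > 0)[0][-1]
    tau = (cums[k] - radius) / (k + 1)
    shrunk = np.maximum(a - tau, 0.0)
    return np.where(a > 0, shrunk * v / np.where(a > 0, a, 1.0), 0.0)

def prox_linf(v, t):
    """prox of t * ||.||_inf via Moreau decomposition: v - t * proj_{B_1}(v / t)."""
    return v - t * proj_l1_ball(v / t)
```

For example, `prox_linf(np.array([3.0, 1.0]), 1.0)` returns `[2., 1.]`: only the largest entry is shrunk. When $\|v\|_1 \le t$, the prox is exactly zero, matching the "solution is zero" regime.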

Another possible solution is second-order cone programming, as suggested in example 2.2.b. of F. Alizadeh, D. Goldfarb, "Cone Programming". I am hoping to learn how well this works when the library finishes compiling.
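In the meantime, a poor man's version of the SOCP: each modulus constraint $|(\mathbf{z}+\mathbf{Mx})_i|\le\lambda$ is a second-order cone constraint on the real and imaginary parts, and it can be sandwiched by $K$ half-planes $\operatorname{Re}\!\left(e^{-i\theta_k}(\mathbf{z}+\mathbf{Mx})_i\right)\le\lambda$, turning the whole problem into an LP that is accurate to a factor $1/\cos(\pi/K)$. A sketch with SciPy (random data standing in for the application's $\mathbf{M}$ and $\mathbf{z}$):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, K = 8, 3, 32                     # K = number of half-planes per modulus constraint
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
M, _ = np.linalg.qr(A)                 # orthonormal columns
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z -= M @ (M.conj().T @ z)              # M* z = 0

# variables v = [Re(x), Im(x), lam]; for each angle t_k, with w = e^{-i t_k}:
#   Re(w (z + M x)) <= lam
#   i.e.  Re(w M) Re(x) - Im(w M) Im(x) - lam <= -Re(w z)
ws = np.exp(-1j * 2 * np.pi * np.arange(K) / K)
A_ub = np.vstack([np.hstack([np.real(w * M), -np.imag(w * M), -np.ones((n, 1))])
                  for w in ws])
b_ub = np.concatenate([-np.real(w * z) for w in ws])
c = np.r_[np.zeros(2 * m), 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (2 * m) + [(0, None)])

x = res.x[:m] + 1j * res.x[m:2 * m]
lam = res.x[-1]
true_inf = np.max(np.abs(z + M @ x))
assert res.success
assert lam <= true_inf + 1e-7                      # polyhedral norm underestimates ...
assert true_inf <= lam / np.cos(np.pi / K) + 1e-7  # ... but by at most 1/cos(pi/K)
```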

[Edit -- did not spot the comment by dineshdileep until just now. Thank you!]