$L^p$-norm minimization under linear constraints: Does the optimum depend on $p$?


Consider the following norm minimization program:

\begin{align} &\min_{x \in \mathbb{R}^d} &&\lVert x - x_0 \rVert_p^p \tag{1}\\ &\text{subject to } &&Ax - b \ge 0 \end{align} Here, $\lVert \cdot \rVert_p$ is the $L^p$-norm, with $p \ge 1$. Is the following statement true?

If $x^*$ is a solution to (1) for some $p \ge 1$, then $x^*$ is also a solution for any $q \ge 1$.

I guess this boils down to showing that the minimal distance between a hyperplane $X$ and the point $x_0$ is always attained at the same point $x^* \in X$, regardless of which $L^p$-norm is used. When I make sketches for the case $d=2$, the statement "looks right," but I could not come up with a proof. Maybe the Hölder inequality can be used in some way?

Best answer:

No, the statement is not true, and you can sketch the following simple counterexample in $\mathbb{R}^2$. Let $x_0=(1,1)^T$, $$A= \begin{pmatrix} 1 & 0 \\ \frac{2}{5}&\frac{1}{5} \end{pmatrix}, $$ and take, say, $b=(0,1)^T$, so that only the second constraint binds: $\frac{2}{5}x_1+\frac{1}{5}x_2 \ge 1$, i.e. $2x_1 + x_2 \ge 5$. Then it is easy to see that for $p=1$ the minimum is attained at $(2,1)^T$, whereas for $p=2$ it is attained at $(\frac{9}{5},\frac{7}{5})^T$. The reason is that when we measure distance, the unit ball of the $\ell^2$-norm is a disk, while that of the $\ell^1$-norm is a diamond (a rotated square), so the closest feasible point to $x_0$ depends on the norm.
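One can check this counterexample numerically. The sketch below (my own, not part of the answer) brute-forces the two minimizers on a fine grid, assuming the binding constraint $2x_1 + x_2 \ge 5$; the second row of the $A$ above with right-hand side $1$, since $b$ is not stated explicitly in the answer.

```python
import numpy as np

# x0 = (1, 1); feasible set assumed to be {x : 2*x1 + x2 >= 5}
# (second row of A scaled by 5, with assumed right-hand side b2 = 1).
x0 = np.array([1.0, 1.0])

# Dense grid over a box containing both candidate minimizers.
xs = np.linspace(0.0, 3.0, 601)
X1, X2 = np.meshgrid(xs, xs)
feasible = 2 * X1 + X2 >= 5 - 1e-12  # small tolerance for float rounding

def argmin_lp(p):
    """Approximate minimizer of ||x - x0||_p^p over the feasible grid points."""
    cost = np.abs(X1 - x0[0]) ** p + np.abs(X2 - x0[1]) ** p
    cost = np.where(feasible, cost, np.inf)  # exclude infeasible points
    i = np.unravel_index(np.argmin(cost), cost.shape)
    return np.array([X1[i], X2[i]])

print(argmin_lp(1))  # ≈ (2, 1)
print(argmin_lp(2))  # ≈ (9/5, 7/5) = (1.8, 1.4)
```

Both claimed minimizers lie exactly on the grid, so the grid search recovers them: the $\ell^1$ and $\ell^2$ solutions genuinely differ.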