When solving a convex problem, why don't we just find the unconstrained optimum of the cost function and then project it onto the feasible set?


I know this is wrong, because if it worked people would not have developed so many algorithms.

But why?

Can you give some examples illustrating why this does not guarantee optimality?

2 Answers

BEST ANSWER

Take $f(x) = \|x\|_\infty$ on $\mathbb{R}^2$ and the feasible set $F= \{ \lambda\, 2 e_1 + (1-\lambda) e_2 \}_\lambda$, the line through $(2,0)$ and $(0,1)$. Note that $f(\lambda\, 2 e_1 + (1-\lambda) e_2) = \max(2|\lambda|, |1-\lambda|)$ has a unique minimum at $\lambda= {1 \over 3}$, corresponding to the point ${1 \over 3} (2,2)$ with value ${2 \over 3}$. The unconstrained minimizer of $f$ is $0$, and the Euclidean projection of $0$ onto $F$ gives the point ${1 \over 5} (2, 4)$, with $f\big({1 \over 5}(2,4)\big) = {4 \over 5} > {2 \over 3}$.

The issue is that the Euclidean distance doesn't necessarily match the cost function.
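A quick numerical check of this example (a sketch using only NumPy; the grid search over $\lambda$ is just an illustrative brute-force verification):

```python
import numpy as np

# f(x) = ||x||_inf on R^2; the feasible set is the line through
# (2, 0) and (0, 1), parametrized as x(lam) = lam*(2, 0) + (1 - lam)*(0, 1).
def f(x):
    return np.max(np.abs(x))

def point(lam):
    return np.array([2 * lam, 1 - lam])

# Constrained minimizer from the answer: lam = 1/3 -> (2/3, 2/3).
x_opt = point(1 / 3)

# Euclidean projection of the unconstrained minimizer 0 onto the line
# x1 + 2*x2 = 2: closed form c * n / ||n||^2 with n = (1, 2), c = 2.
n = np.array([1.0, 2.0])
x_proj = 2 * n / (n @ n)  # (2/5, 4/5)

# Brute-force check that lam = 1/3 really minimizes f on the line.
lams = np.linspace(-1.0, 2.0, 20001)
vals = np.array([f(point(lam)) for lam in lams])
assert abs(lams[vals.argmin()] - 1 / 3) < 1e-3

print(f(x_opt), f(x_proj))  # 2/3 vs 4/5: the projected point is suboptimal
```

The gap $4/5 - 2/3$ is small here, but by scaling the example it can be made arbitrarily large.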

Another answer:

Consider the cost function $f(x_1,x_2)= x_1^2 + \alpha\cdot x_2^2$ with $\alpha>0$, which has its minimum at $x_1=x_2=0$.

Consider the problem constrained to lie on the line $a\cdot x_1 + b\cdot x_2 + c=0$.

The projection of the unconstrained solution onto this line is the closest point on the line to the origin in the Euclidean sense, and thus minimizes $x_1^2+x_2^2$. This point is fixed once $a,b,c$ are chosen. However, as we change $\alpha$ the constrained optimum of $f$ shifts along the line, so the projected point can only be optimal for the special case $\alpha=1$.
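This can be made concrete with the closed-form constrained minimizer obtained from Lagrange multipliers (a sketch; the particular line $x_1 + x_2 = 1$ and the value $\alpha = 4$ are illustrative choices, not from the original answer):

```python
import numpy as np

# Minimize x1^2 + alpha*x2^2 subject to a*x1 + b*x2 + c = 0.
# Lagrange conditions 2*x1 = mu*a, 2*alpha*x2 = mu*b plus the constraint give:
#   x1 = -c*alpha*a / (alpha*a^2 + b^2),   x2 = -c*b / (alpha*a^2 + b^2)
def constrained_min(alpha, a, b, c):
    denom = alpha * a**2 + b**2
    return np.array([-c * alpha * a / denom, -c * b / denom])

def f(x, alpha):
    return x[0]**2 + alpha * x[1]**2

a, b, c = 1.0, 1.0, -1.0  # the line x1 + x2 = 1 (illustrative choice)

# The Euclidean projection of the origin is exactly the alpha = 1 solution...
proj = constrained_min(1.0, a, b, c)  # (1/2, 1/2)

# ...but for alpha = 4 the true constrained optimum sits elsewhere.
x4 = constrained_min(4.0, a, b, c)    # (4/5, 1/5)

print(f(proj, 4.0), f(x4, 4.0))  # 1.25 vs 0.8: projecting is suboptimal
```

Note that `constrained_min(1.0, ...)` returns $-c(a,b)/(a^2+b^2)$, which is precisely the Euclidean projection of the origin onto the line; for any other $\alpha$ the optimum moves, while the projection stays put.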