Minimizing Quadratic Form with Norm and Positive Orthant Constraints


Let $ M $ be a positive semidefinite matrix.

I want to solve $$ \arg \min_{x} {x}^{T} M x \quad \mathrm{s.t.} \quad \left\| x \right\| = 1, \ x \succeq 0 $$ where $ x \succeq 0 $ means each coordinate of $x$ is nonnegative.

Is there a standard approach for attacking this problem?
Without the inequality constraint, the usual approach is to expand $ x $ in the eigenvector basis of $ M $ and to notice that the solution must be an eigenvector associated with the smallest eigenvalue.
Without the equality constraint, obviously the solution is $ x = \boldsymbol{0} $.
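
For concreteness, here is a minimal NumPy sketch of the equality-only case; the matrix below is just a placeholder example of my own:

```python
# Without x >= 0, the minimizer of x^T M x over the unit sphere is an
# eigenvector associated with the smallest eigenvalue of M.
import numpy as np

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # placeholder symmetric PSD matrix

eig_vals, eig_vecs = np.linalg.eigh(M)  # eigh returns eigenvalues in ascending order
x_unconstrained = eig_vecs[:, 0]        # unit-norm eigenvector of the smallest eigenvalue
print(x_unconstrained, x_unconstrained @ M @ x_unconstrained)  # objective equals eig_vals[0]
```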

Best Answer

I would use Projected Gradient Descent for this case.
Though the problem isn't Convex, it works nicely in practice.

The algorithm is as follows (a short code sketch appears after the list):

  1. Calculate the Gradient at the current point.
  2. Update the solution $ x = x - 2 t M x $ where $ 2 M x $ is the Gradient of the Objective Function and $ t $ is the step size.
  3. Project the output of the previous step onto $ {\mathbb{R}}_{+}^{n} $ (the nonnegative orthant) by $ {x}_{i} = \max \left\{ {x}_{i}, 0 \right\} $.
  4. Project the output of the previous step onto the Unit Sphere by $ {x}_{i} = \frac{ {x}_{i} }{ \left\| x \right\|_{2} } $.
  5. Go back to (1) (or check a stopping condition; the KKT conditions can be checked even though the problem isn't Convex).
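
A minimal NumPy sketch of the steps above (this is only an illustration, not the code from the repository linked below; the step size, iteration count, and example matrix are arbitrary choices of mine):

```python
import numpy as np

def projected_gradient_descent(M, x0, step_size=0.1, num_iterations=500):
    """Minimize x^T M x over the unit sphere intersected with the nonnegative orthant."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(num_iterations):
        x = x - step_size * 2.0 * M @ x  # gradient step: the gradient is 2 M x
        x = np.maximum(x, 0.0)           # project onto the nonnegative orthant
        norm_x = np.linalg.norm(x)
        if norm_x == 0.0:                # guard: all coordinates were clipped to zero
            break
        x = x / norm_x                   # project back onto the unit sphere
    return x

# Example usage with a placeholder PSD matrix
M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
x_hat = projected_gradient_descent(M, x0=np.ones(2))
print(x_hat, x_hat @ M @ x_hat)
```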

Here is a simple 2D example I created, which worked pretty well:

[Figure: iterations of the Projected Gradient Descent method on the 2D example]

The code is available at my StackExchange Mathematics Q2699867 GitHub Repository.

Remark 001
I'd even consider starting with the solution of the Convex problem obtained by replacing the equality constraint $ \left\| x \right\|_{2} = 1 $ with $ \left\| x \right\|_{1} = 1 $ (this basically constrains $ x $ to the Unit Simplex). You can either use it as a starting point for the above algorithm or as an approximate solution by itself.
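
A sketch of that relaxed problem, assuming CVXPY is available (the matrix is again just a placeholder example):

```python
import cvxpy as cp
import numpy as np

M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])  # placeholder symmetric PSD matrix
n = M.shape[0]

x = cp.Variable(n)
objective = cp.Minimize(cp.quad_form(x, M))  # x^T M x, convex since M is PSD
constraints = [cp.sum(x) == 1, x >= 0]       # ||x||_1 = 1 with x >= 0, i.e. the unit simplex
cp.Problem(objective, constraints).solve()

x_simplex = x.value
x_init = x_simplex / np.linalg.norm(x_simplex)  # renormalize to use as a starting point above
```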

Remark 002
Another approach might be something like what I did in Solution for $ \arg \min_{ {x}^{T} x = 1} { x}^{T} A x - {c}^{T} x $.
Yet after each iteration of updating $ \lambda $ you should also project the output $ x $ onto $ \mathbb{R}_{+}^{n} $.