Maximizing linear objective over spectraplex


I have the following instance of a semidefinite program (SDP):

$$ d + f \to \max $$

$$ a + b + c = 1 $$

$$ \begin{pmatrix} a & d & e\\ d & b & f\\ e & f & c \end{pmatrix} \succeq 0 $$

Applying Sylvester's criterion for positive semidefiniteness (all principal minors must be nonnegative) to this symmetric matrix, I get the following system:

$$ \begin{cases} a, b, c \geq 0 \\ ab - d^2 \geq 0\\ bc - f^2 \geq 0\\ ac - e^2 \geq 0\\ abc - af^2 - be^2-cd^2 + 2def \geq 0 \end{cases} $$

From the inequalities $ab - d^2 \geq 0$ and $bc - f^2 \geq 0$ we obtain the bound

$$ d \leq \sqrt{ab}, \qquad f \leq \sqrt{bc} \implies d + f \leq \sqrt{b}(\sqrt{a} + \sqrt{c}) $$

How can I maximize the right-hand side of this inequality, or should I take a different approach to solving the SDP?

I solved this problem in Mathematica and obtained the following optimum:

[Mathematica output image not shown]
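For reference, the right-hand side of the bound above can also be maximized directly. Below is a SymPy sketch; the symmetric ansatz $a = c$ (so $b = 1 - 2a$) is an assumption suggested by the symmetry of the expression in $a$ and $c$, not a proof of optimality:

```python
from sympy import symbols, sqrt, solve, simplify, Rational

# Symmetric ansatz a = c (an assumption), so b = 1 - 2a and the
# right-hand side of the bound becomes h(a) = 2*sqrt(a)*sqrt(1 - 2a).
a = symbols('a', positive=True)
h = 2 * sqrt(a) * sqrt(1 - 2*a)

crit = solve(h.diff(a), a)                   # stationary points of h
print(crit)                                  # [1/4]
print(simplify(h.subs(a, Rational(1, 4))))   # sqrt(2)/2
```

So under this ansatz the maximum is $\sqrt{2}/2$, attained at $a = c = 1/4$, $b = 1/2$, which matches the eigenvalue answer below.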

There are 2 answers below.

Best answer:

Another way to solve the problem is to start, again, from the formulation

$$\begin{array}{ll} \underset{X}{\text{maximize}} & \langle A, X \rangle\\ \text{subject to} & \mbox{tr} (X) = 1\\ & {X} \succeq 0\end{array}$$

where

$${A} := \frac12 \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix}$$ is the matrix used in Rodrigo de Azevedo's answer.

The difference is that now, we consider the Lagrange multiplier $\mu$ and the dual problem

$$\min_{\mu\in\mathbb{R}}\ \max_{{X}\succeq 0}\ \left\{\langle {A},{X}\rangle+\mu\left(\langle I,{X}\rangle-1\right)\right\}.$$

We can see that strong duality holds in this case due to Slater's condition (there is a strictly feasible point, e.g. ${X} = \tfrac13 I$, which satisfies $\mathrm{tr}(X) = 1$ and ${X} \succ 0$) and the fact that the primal is bounded.

The dual is then given by

$$-\mu^*:=\min_{\mu\in\mathbb{R}}-\mu\quad \mathrm{s.t.}\quad {A}+\mu I\preceq0$$

which says exactly that $-\mu^*$ is the maximum eigenvalue of ${A}$: the constraint ${A}+\mu I\preceq 0$ is equivalent to $-\mu \geq \lambda_{\max}({A})$, so the minimum of $-\mu$ is $\lambda_{\max}({A})$. By strong duality, the optimal value of the primal is also the maximum eigenvalue of ${A}$.
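As a quick check (a sketch, not part of the original answer), the maximum eigenvalue of $A$ can be computed with SymPy:

```python
from sympy import Matrix, Rational, sqrt

A = Rational(1, 2) * Matrix([[0, 1, 0],
                             [1, 0, 1],
                             [0, 1, 0]])

# eigenvals() returns a dict {eigenvalue: multiplicity};
# the keys here are -sqrt(2)/2, 0, sqrt(2)/2
lam_max = max(A.eigenvals())
print(lam_max)   # sqrt(2)/2
```

So the optimal value of the SDP is $\lambda_{\max}(A) = \sqrt{2}/2$.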

Second answer:

The problem can be written in the standard form

$$\begin{array}{ll} \underset{{\bf X}}{\text{maximize}} & \langle {\bf A}, {\bf X} \rangle\\ \text{subject to} & \mbox{tr} ({\bf X}) = 1\\ & {\bf X} \succeq {\bf O}_3 \end{array}$$

where

$$ {\bf A} := \frac12 \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix}$$

Let us look for symmetric positive semidefinite rank-$1$ solutions. To this end, let ${\bf X} =: {\bf y} {\bf y}^\top$, which turns the problem into

$$\begin{array}{ll} \underset{{\bf y} \in \Bbb R^3}{\text{maximize}} & {\bf y}^\top {\bf A} \, {\bf y} \\ \text{subject to} & {\bf y}^\top {\bf y} = 1 \end{array}$$

Thus, by the Rayleigh-quotient characterization of eigenvalues, $\bf y$ is a unit eigenvector of $\bf A$ corresponding to its largest eigenvalue.


SymPy code

from sympy import *

A = Rational(1,2) * Matrix([[0,1,0],
                            [1,0,1],
                            [0,1,0]])

# compute (normalized) eigendecomposition 
Q, D = A.diagonalize(normalize=True)

# compute rank-1 matrix from the eigenvector
# corresponding to the largest eigenvalue
X = Q[:,2] * Q[:,2].T

print("X =", X        )
print("  =", X.evalf())

outputs

X = Matrix([[      1/4, sqrt(2)/4,       1/4], 
            [sqrt(2)/4,       1/2, sqrt(2)/4], 
            [      1/4, sqrt(2)/4,       1/4]])
  = Matrix([[0.250000000000000, 0.353553390593274, 0.250000000000000], 
            [0.353553390593274, 0.500000000000000, 0.353553390593274], 
            [0.250000000000000, 0.353553390593274, 0.250000000000000]])

which is the matrix obtained by the OP.


Addendum

Why look for rank-$1$ solutions? Let us look for symmetric positive semidefinite solutions of arbitrary rank, i.e., ${\bf X} =: {\bf Y} {\bf Y}^\top$. Hence,

$$\begin{array}{ll} \underset{{\bf Y} \in \Bbb R^{3 \times 3}}{\text{maximize}} & \mbox{tr} \left( {\bf Y}^\top {\bf A} \, {\bf Y} \right)\\ \text{subject to} & \mbox{tr} \left( {\bf Y}^\top {\bf Y} \right) = 1 \end{array}$$

Since, by construction, matrix $\bf A$ is symmetric, its eigenvalues are real. Let ${\bf y}_i$ denote the $i$-th column of matrix $\bf Y$. Note that

$$ \begin{aligned} \mbox{tr} \left( {\bf Y}^\top {\bf A} \, {\bf Y} \right) &= {\bf y}_1^\top {\bf A} \, {\bf y}_1 + {\bf y}_2^\top {\bf A} \, {\bf y}_2 + {\bf y}_3^\top {\bf A} \, {\bf y}_3 \\ &\leq \lambda_{\max} ({\bf A}) \underbrace{\left( \| {\bf y}_1 \|_2^2 + \| {\bf y}_2 \|_2^2 + \| {\bf y}_3 \|_2^2 \right)}_{=1} = \lambda_{\max} ({\bf A}) \end{aligned} $$

Since matrix $\bf A$ has $3$ mutually orthogonal $1$-dimensional eigenspaces, the eigenspace corresponding to eigenvalue $\lambda_{\max} ({\bf A})$ is $1$-dimensional. Thus, all columns of a maximizing matrix $\bf Y$ lie in that $1$-dimensional eigenspace, i.e.,

$$ {\bf Y}_{\max} = {\bf u} {\bf w}^\top $$

where $\bf u$ is a unit eigenvector in the eigenspace corresponding to eigenvalue $\lambda_{\max} ({\bf A})$, and $\bf w$ is a weight vector. From the constraint $\mbox{tr} \left( {\bf Y}^\top {\bf Y} \right) = 1$, we obtain $\| {\bf w} \|_2 = 1$. Therefore, we can restrict our attention to rank-$1$ solutions ${\bf X} = {\bf Y} {\bf Y}^\top = {\bf u} {\bf u}^\top$, which corresponds to choosing a signed standard unit vector as the weight vector, i.e., ${\bf w} \in \left\{ \pm {\bf e}_1, \pm {\bf e}_2, \pm {\bf e}_3 \right\}$.
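The trace bound above can be sanity-checked numerically (a sketch using random test matrices, not part of the original answer): for any $\bf Y$ normalized so that $\mathrm{tr}({\bf Y}^\top {\bf Y}) = 1$, the objective should never exceed $\lambda_{\max}({\bf A}) = \sqrt{2}/2$.

```python
import random
from sympy import Matrix, Rational, sqrt

A = Rational(1, 2) * Matrix([[0, 1, 0],
                             [1, 0, 1],
                             [0, 1, 0]])
lam_max = sqrt(2) / 2

random.seed(0)
ok = True
for _ in range(20):
    # random 3x3 matrix with rational entries in [-0.9, 0.9]
    Y = Matrix(3, 3, lambda i, j: Rational(random.randint(-9, 9), 10))
    n2 = (Y.T * Y).trace()
    if n2 == 0:
        continue
    Y = Y / sqrt(n2)                 # enforce tr(Y^T Y) = 1
    obj = (Y.T * A * Y).trace()
    ok = ok and float(obj) <= float(lam_max) + 1e-12
print(ok)   # True
```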