Reducing an SDP to an SOCP


Consider a linear estimation setting where we have measurements of the following form.

$$ {\bf y} = {\bf H} {\bf x} + {\bf v} $$

where ${\bf y}, {\bf v} \in \mathbb{R}^m$, ${\bf x} \in \mathbb{R}^n$ and ${\bf H} \in \mathbb{R}^{m \times n}$. An overdetermined system ($m>n$) is assumed, and ${\bf v}$ is taken to be zero-mean white noise with unit variance. Minimizing the trace of the estimation error covariance while selecting $k$ of the $m$ measurements (with the Boolean selection variables relaxed to the interval $[0,1]$) leads to the problem
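As a minimal NumPy sketch of the model (sizes and the unit-variance noise are illustrative assumptions, not part of the question): with all measurements used, the least-squares estimate has error covariance $({\bf H}^\top {\bf H})^{-1}$, whose trace is the quantity being optimized below.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5  # overdetermined: m > n (illustrative sizes)

H = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
v = rng.standard_normal(m)      # assumed zero-mean, unit-variance white noise
y = H @ x_true + v

# Least-squares estimate of x from y = Hx + v
x_hat = np.linalg.lstsq(H, y, rcond=None)[0]

# Error covariance of the estimate is (H^T H)^{-1} under unit-variance noise;
# its trace is the scalar performance metric used in the selection problem.
err_cov = np.linalg.inv(H.T @ H)
print(np.trace(err_cov))
```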

\begin{equation} \begin{aligned} \underset{\mu_1, \mu_2, \dots, \mu_m}{\mbox{minimize}} & \hspace{0.5em} \operatorname{Tr} \Bigg(\Bigg(\sum_{i=1}^m \mu_i {\bf h}_i {\bf h}_i^\top\Bigg)^{-1}\Bigg)\\ \mbox{subject to} & \hspace{0.5em} \sum_{i=1}^m \mu_i = k, \\ & \hspace{0.5em} 0 \leq \mu_i \leq 1, \quad i = 1,\dots,m, \end{aligned} \end{equation}
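The objective above can be evaluated for any feasible weight vector via $\sum_i \mu_i {\bf h}_i {\bf h}_i^\top = {\bf H}^\top \operatorname{diag}(\boldsymbol{\mu}) {\bf H}$. A small sketch (the helper name and problem sizes are my own, for illustration), using the uniform point $\mu_i = k/m$, which is always feasible:

```python
import numpy as np

def trace_inverse_information(H, mu):
    """Objective Tr((sum_i mu_i h_i h_i^T)^{-1}) = Tr((H^T diag(mu) H)^{-1})."""
    A = H.T @ (mu[:, None] * H)
    return np.trace(np.linalg.inv(A))

rng = np.random.default_rng(1)
m, n, k = 20, 5, 8
H = rng.standard_normal((m, n))

# Uniform weights: sum to k and each lies in [0, 1], so the point is feasible.
mu_uniform = np.full(m, k / m)
print(trace_inverse_information(H, mu_uniform))
```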

where ${\bf h}_i \in \mathbb{R}^n$ and ${\bf h}_i^\top$ is the $i$-th row of $\bf H$. A variant of this problem is discussed in more detail in Chapter 7 of Boyd & Vandenberghe (page 387). The problem can be cast as a semidefinite program (SDP) of the following form.

\begin{equation} \begin{aligned} \underset{\mu_1, \mu_2, \dots, \mu_m,\\ u_1,u_2,\dots,u_n}{\mbox{minimize}} & \hspace{0.5em} \sum\limits_{j=1}^nu_j\\ \mbox{subject to} & \hspace{0.5em} \begin{bmatrix} \sum\limits_{i=1}^m \mu_i {\bf{h}}_i{\bf{h}}_i^\top & {\bf{e}}_j \\ {\bf{e}}_j^\top & u_j \end{bmatrix} \succeq 0, \quad j = 1,\dots, n, \\ & \hspace{0.5em} \sum\limits_{i=1}^m \mu_i = k, \\ & \hspace{0.5em} 0 \leq \mu_i \leq 1, \quad i = 1,\dots,m, \end{aligned} \end{equation} where ${\bf u} = [u_1,\dots,u_n]^\top \in \mathbb{R}^n$ and ${\bf e}_j$ is the $j$-th unit vector.

Is there a way to reduce this SDP to a second-order cone program (SOCP)? Some hints or starting points would be appreciated.