I'm experimenting with the semidefinite optimisation problem given as Program 5.2 in the paper "Optimizing Linear Counting Queries Under Differential Privacy", which is as follows:
Given: $\mathbf{W} \in \mathbb{R}^{n \times n}$
Minimize: $u_1 + u_2 + \ldots + u_n$
Subject to: for each $i \in [n]$, where $\mathbf{e}_i$ denotes the $n$-dimensional column vector whose $i$th entry is 1 and whose other entries are 0:
$\begin{bmatrix} \mathbf{X} & \mathbf{e}_i \\ \mathbf{e}_i^t & u_i \end{bmatrix} \succeq 0$;
$(\mathbf{W}^t \mathbf{X} \mathbf{W})_{ii} \leq 1$, $i \in [n]$
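To make sure I've read the PSD constraints correctly, here is a small numpy sanity check I put together (the size $n = 3$ and the particular $\mathbf{X}$ are just illustrative choices): by the Schur complement, when $\mathbf{X} \succ 0$ the $(n+1) \times (n+1)$ block is PSD exactly when $u_i \geq \mathbf{e}_i^t \mathbf{X}^{-1} \mathbf{e}_i = (\mathbf{X}^{-1})_{ii}$.

```python
import numpy as np

n = 3                         # illustrative size
X = 2.0 * np.eye(n)           # a sample positive definite X
i = 0
e_i = np.zeros((n, 1))
e_i[i, 0] = 1.0

# Schur complement: the block is PSD  <=>  u_i >= e_i^T X^{-1} e_i
u_i = float(e_i.T @ np.linalg.inv(X) @ e_i)   # smallest feasible u_i (here 0.5)

# Assemble the constraint block [[X, e_i], [e_i^T, u_i]] and check its spectrum
block = np.block([[X, e_i], [e_i.T, np.array([[u_i]])]])
min_eig = np.linalg.eigvalsh(block).min()
print(min_eig >= -1e-9)       # PSD at the boundary value of u_i
```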
I would like to translate this into the standard form used in SDPLIB and the SDPA package, which is:
$\begin{eqnarray} \min && c^t x \\ \text{subject to: } \sum_{i=1}^m x_i F_i - F_0 &=& X \\ X &\succeq& 0 \end{eqnarray}$
where the matrices $X, F_i$ are symmetric and share a common block-diagonal structure. Since the $u_i$ and $\mathbf{X}$ are the decision variables of the program, I assume the first step is setting $x = (u_1, u_2, \ldots, u_n, \operatorname{vec}(\mathbf{X}))$ (taking only the distinct entries of $\mathbf{X}$, since it is symmetric). From there, I can see how the PSD constraints of the program can be combined into a block-diagonal representation, e.g.
$F_0 = -\begin{bmatrix} \begin{matrix} \mathbf{0} & \mathbf{e}_1 \\ \mathbf{e}_1^t & 0 \end{matrix} & \mathbf{0} & \cdots \\ \mathbf{0} & \begin{matrix} \mathbf{0} & \mathbf{e}_2 \\ \mathbf{e}_2^t & 0 \end{matrix} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}$ (with the minus sign because $F_0$ is subtracted in the standard form),
but I can't quite understand how to express $(\mathbf{W}^t \mathbf{X} \mathbf{W})_{ii} \leq 1$ as part of this. Do I have to expand out $\mathbf{W}^t \mathbf{X} \mathbf{W}$ and work out the coefficients of its diagonal, or is there a simpler way? Can I do something with $I_n - \mathbf{W}^t \mathbf{X} \mathbf{W}$, or possibly even $\mathbf{W}^{-1} - \mathbf{W}^t \mathbf{X}$ if I know that $\mathbf{W}$ is full rank? Or am I completely barking up the wrong tree with all of this?
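For what it's worth, here is a numpy check I did of the expand-the-coefficients route (the size $n = 4$ and the random $\mathbf{W}, \mathbf{X}$ are just illustrative choices): writing $\mathbf{w}_i$ for the $i$th column of $\mathbf{W}$, we have $(\mathbf{W}^t \mathbf{X} \mathbf{W})_{ii} = \mathbf{w}_i^t \mathbf{X} \mathbf{w}_i$, which is linear in the distinct entries of the symmetric $\mathbf{X}$, so at least in principle each diagonal constraint reduces to a linear inequality over my decision vector $x$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                              # illustrative size
W = rng.standard_normal((n, n))

# A random symmetric X to test against
A = rng.standard_normal((n, n))
X = A + A.T

i = 1
w_i = W[:, i]                      # i-th column of W

# Direct value of (W^t X W)_{ii}
direct = (W.T @ X @ W)[i, i]

# Expanded as a linear function of the distinct entries X_{jk}, j <= k:
# coefficient of X_{jj} is W_{ji}^2; coefficient of X_{jk} (j < k) is 2*W_{ji}*W_{ki}
expanded = 0.0
for j in range(n):
    for k in range(j, n):
        coeff = w_i[j] * w_i[k] * (1.0 if j == k else 2.0)
        expanded += coeff * X[j, k]

print(np.isclose(direct, expanded))   # the two computations agree
```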