Problem: Let ${Z}_{i\times i}$ denote the $i^{\text{th}}$ leading principal submatrix of $Z$. Solve the following non-convex problem:\begin{array}{ll} \text{minimize} & \mathrm{tr}(XX^T)\\\quad X\in\mathbb{R}^{3\times3}\\ \text{subject to} & \mathrm{det}(I+XX^T)\geq a_1\\&\frac{\mathrm{det}(I+XX^T)}{\{I+XX^T\}_{1\times1}}\geq a_2\\&\frac{\mathrm{det}(I+XX^T)}{\mathrm{det}(\{I+XX^T\}_{2\times2})}\geq a_3\end{array} where $a_1\geq a_2\geq a_3>1$.
My attempt: Let $Y=XX^T$, then
\begin{array}{ll} \text{minimize} & \mathrm{tr}(Y)\\\quad Y\in\mathbb{R}^{3\times3}\\ \text{subject to} & \mathrm{logdet}(I+Y)\geq \mathrm{log}a_1\\&\mathrm{log}\frac{\mathrm{det}(I+Y)}{\{I+Y\}_{1\times1}}\geq \mathrm{log}a_2\\&\mathrm{log}\frac{\mathrm{det}(I+Y)}{\mathrm{det}(\{I+Y\}_{2\times2})}\geq \mathrm{log}a_3\end{array}
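Since $\log$ is strictly increasing, each log-domain constraint is feasible exactly when the original one is. A quick numerical sanity check of this equivalence (numpy; the matrix $Y$ and the values of $a_1,a_2,a_3$ are illustrative only):

```python
import numpy as np

# Sanity check that taking logs preserves feasibility of each
# constraint (log is strictly increasing). Y and the a_k are
# illustrative values only.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Y = A @ A.T                # symmetric PSD, as Y = X X^T would be
a1, a2, a3 = 5.0, 3.0, 2.0

M = np.eye(3) + Y
c1 = np.linalg.det(M)
c2 = c1 / M[0, 0]                      # det(I+Y) / {I+Y}_{1x1}
c3 = c1 / np.linalg.det(M[:2, :2])     # det(I+Y) / det({I+Y}_{2x2})

# Each c_k is positive since I+Y is positive definite, so the logs exist
for c, a in [(c1, a1), (c2, a2), (c3, a3)]:
    assert (c >= a) == (np.log(c) >= np.log(a))
```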
Lagrangian:
$L(Y,\lambda)=\mathrm{tr}(Y)+\lambda_1(\mathrm{log}(a_1)-\mathrm{logdet}(I+Y))+\lambda_2(\mathrm{log}(a_2)-\mathrm{logdet}(I+Y)+\mathrm{log}(\{I+Y\}_{1\times1}))+\lambda_3(\mathrm{log}(a_3)-\mathrm{logdet}(I+Y)+\mathrm{logdet}(\{I+Y\}_{2\times2})).$
$\frac{\partial L(Y,\lambda)}{\partial Y}=I-(\lambda_1+\lambda_2+\lambda_3)(I+Y)^{-1}+\lambda_2\begin{bmatrix}(\{I+Y\}_{1\times1})^{-1} & 0 & 0\\0 & 0 &0 \\0 & 0 &0\end{bmatrix}+\lambda_3\begin{bmatrix} (\{I+Y\}_{2\times2})^{-1}& 0 \\0 & 0 \end{bmatrix}=0$
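This gradient rests on the identity $\partial \log\det(I+Y)/\partial Y = (I+Y)^{-1}$ for symmetric $Y$, which can be checked by finite differences (numpy; $Y$ is an illustrative symmetric matrix with entries treated as independent variables):

```python
import numpy as np

# Finite-difference check of d/dY log det(I + Y) = (I+Y)^{-T};
# Y here is symmetric, so the transpose is immaterial.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
Y = A @ A.T
Z = np.linalg.inv(np.eye(3) + Y)

f = lambda Y: np.linalg.slogdet(np.eye(3) + Y)[1]   # log det(I+Y)
eps = 1e-6
G = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3))
        E[i, j] = eps
        G[i, j] = (f(Y + E) - f(Y)) / eps            # forward difference

assert np.allclose(G, Z, atol=1e-4)   # Z is symmetric, so Z^T = Z
```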
From the last equation we can see that $(I+Y)^{-1}=\begin{bmatrix}* & * & 0\\* & * &0 \\0 & 0 &*\end{bmatrix}$, and thus $Y=\begin{bmatrix}* & * & 0\\* & * &0 \\0 & 0 &*\end{bmatrix}$. Now we set $y_{33}=a_3-1$, solve the problem for $Y\in\mathbb{R}^{2\times 2}$, and find that the optimal $Y$ is diagonal with diagonal entries $a_1-1,a_2-1,a_3-1$. Lastly, $X=Y^{1/2}.$
I think there is a mistake somewhere. Can you please help me find it, or show another way to solve the problem?
Counter-example: \begin{array}{ll} \text{minimize} & \mathrm{tr}(Y)\\\quad Y\in\mathbb{R}^{2\times2}\\ \text{subject to} & \mathrm{det}(I+Y)\geq 144\\&\frac{\mathrm{det}(I+Y)}{\{I+Y\}_{1\times1}}\geq 4\end{array}
If we use my solution, then $Y=\begin{bmatrix} 143& 0 \\0 & 3 \end{bmatrix}$, and $\mathrm{tr}(Y)=146.$
But let $Y=\begin{bmatrix} 37.7059& \sqrt{1343.3} \\\sqrt{1343.3} & 37.7059 \end{bmatrix}$, then $\mathrm{tr}(Y)=75.4118,$ $\mathrm{det}(I+Y)=154.8467$, $\frac{\mathrm{det}(I+Y)}{\{I+Y\}_{1\times1}}=\frac{154.8467}{1+37.7059}=4.0006.$
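The counter-example's numbers are easy to verify directly (numpy):

```python
import numpy as np

# The diagonal candidate from the attempted solution vs. the
# non-diagonal counter-example candidate.
Yd = np.diag([143.0, 3.0])
s = np.sqrt(1343.3)
Yc = np.array([[37.7059, s], [s, 37.7059]])

for name, Y in [("diagonal", Yd), ("counter-example", Yc)]:
    M = np.eye(2) + Y
    print(name, np.trace(Y), np.linalg.det(M), np.linalg.det(M) / M[0, 0])
```

Both candidates satisfy $\det(I+Y)\geq 144$ and $\det(I+Y)/\{I+Y\}_{1\times1}\geq 4$, yet the second has barely half the trace of the first, confirming the diagonal construction is not optimal.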
That's not totally true. Let $Z =(I+Y)^{-1}$, and use the block structure you already derived (so that $(\{I+Y\}_{2\times2})^{-1}$ equals the upper-left $2\times2$ block of $Z$). Then $\frac{\partial L(Y,\lambda)}{\partial Y}=0$ is equivalent to $$\begin{bmatrix} 1 - (\lambda_1+\lambda_2) Z_{1,1} + \lambda_2(\{I+Y\}_{1\times1})^{-1} & -(\lambda_1 + \lambda_2)Z_{1,2} & 0 \\ -(\lambda_1 + \lambda_2)Z_{1,2} & 1 - (\lambda_1+\lambda_2) Z_{2,2} & 0 \\ 0 & 0 & 1 - (\lambda_1+\lambda_2+\lambda_3) Z_{3,3} \end{bmatrix}=0$$ Note that $\lambda_1+\lambda_2>0$ (otherwise the $(2,2)$ entry could not vanish), so the off-diagonal entries force $$Z_{1,2} = 0$$ hence $Z$ is diagonal. In that case $(\{I+Y\}_{1\times1})^{-1} = Z_{1,1}$, and the diagonal entries give $$Z = \begin{bmatrix} \frac{1}{\lambda_1} & 0 & 0\\0 &\frac{1}{\lambda_1+\lambda_2} & 0 \\ 0 &0 & \frac{1}{\lambda_1+\lambda_2+\lambda_3} \end{bmatrix} = (I+Y)^{-1}$$ So $$Y^* = \begin{bmatrix} \lambda_1 - 1 & 0 & 0\\0 &\lambda_1 +\lambda_2 - 1 & 0 \\ 0 &0 & \lambda_1 +\lambda_2+\lambda_3 - 1 \end{bmatrix}$$ Substituting this optimal $Y$ into the Lagrangian, we get the dual function $$g(\lambda) = \inf_{Y} L(Y,\lambda) = L(Y^*, \lambda) $$ After some manipulation, we arrive at $$g(\lambda) = \lambda_1(3 + \log a_1 ) + \lambda_2(2 + \log a_2) + \lambda_3(1 + \log a_3) - 3 -\log \lambda_1^{\lambda_1} -\log (\lambda_1+\lambda_2)^{\lambda_1+\lambda_2}-\log (\lambda_1+\lambda_2+\lambda_3)^{\lambda_1+\lambda_2+\lambda_3}$$ (the additive constant $-3$ comes from $\mathrm{tr}(Y^*)$ and does not affect the maximizer). The optimal Lagrange multipliers are obtained by solving the convex dual problem, i.e. \begin{array}{ll} \text{maximize} & g(\lambda) \\\quad \lambda\\ \text{subject to} & \lambda_1,\lambda_2,\lambda_3 \geq 0 \end{array} Differentiating with respect to $\lambda_1,\lambda_2,\lambda_3$, we get the following first-order conditions, \begin{align} \log(\beta_1) + \log(\beta_2) + \log(\beta_3) &= \log (a_1)\\ \log(\beta_2) + \log(\beta_3) &= \log (a_2)\\ \log(\beta_3) &= \log (a_3) \end{align} where $\beta_k =\sum_{n=1}^k \lambda_n$. 
The solution is easy to obtain by back-substitution, namely \begin{align} \lambda_1^* &= \frac{a_1}{a_2} \\ \lambda_1^* + \lambda_2^* &= \frac{a_2}{a_3} \\ \lambda_1^* + \lambda_2^* + \lambda_3^* &= a_3 \end{align} provided these values respect dual feasibility, i.e. $\lambda_k^* \geq 0$, which requires $\frac{a_1}{a_2} \leq \frac{a_2}{a_3} \leq a_3$. If some $\lambda_k^*$ comes out negative, the corresponding constraint is inactive at the optimum; you must set $\lambda_k = 0$ and re-solve the remaining conditions. When the condition holds, your $Y^*$ should look like this $$Y^* = \begin{bmatrix} \frac{a_1}{a_2} - 1 & 0 & 0\\0 & \frac{a_2}{a_3} - 1 & 0 \\ 0 &0 & a_3 - 1 \end{bmatrix} \neq \begin{bmatrix} a_1 - 1 & 0 & 0\\0 & a_2 - 1 & 0 \\ 0 &0 & a_3 - 1 \end{bmatrix}$$
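For parameter values where the back-substituted multipliers are all nonnegative, it is easy to confirm numerically that this $Y^*$ makes all three constraints tight (numpy; the values $a_1,a_2,a_3 = 10, 6, 3$ are hypothetical):

```python
import numpy as np

# Hypothetical parameters with a1 >= a2 >= a3 > 1, chosen so that
# a1/a2 <= a2/a3 <= a3 and hence all multipliers are nonnegative.
a1, a2, a3 = 10.0, 6.0, 3.0
Y = np.diag([a1 / a2 - 1, a2 / a3 - 1, a3 - 1])
M = np.eye(3) + Y

# All three constraints hold with equality at this Y*
assert np.isclose(np.linalg.det(M), a1)
assert np.isclose(np.linalg.det(M) / M[0, 0], a2)
assert np.isclose(np.linalg.det(M) / np.linalg.det(M[:2, :2]), a3)

# ...and its trace beats the diagonal guess diag(a1-1, a2-1, a3-1)
assert np.trace(Y) < (a1 - 1) + (a2 - 1) + (a3 - 1)
```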
Applying this to your counter-example requires exactly that extra step: with $a_1 = 144$ and $a_2 = 4$, back-substitution would give $\lambda_1^* = \frac{a_1}{a_2} = 36$ and $\lambda_2^* = a_2 - \frac{a_1}{a_2} = -32 < 0$, which violates dual feasibility. So the second constraint must be inactive: set $\lambda_2 = 0$, and the first-order condition becomes $2\log \lambda_1 = \log a_1$, i.e. $\lambda_1^* = \sqrt{a_1} = 12$, hence $$Y^* = \begin{bmatrix} \lambda_1^* - 1 & 0 \\0 & \lambda_1^* - 1 \end{bmatrix}= \begin{bmatrix} 11 & 0 \\0 & 11 \end{bmatrix}$$ Guess what? You attain an even smaller trace, with the first constraint tight and the second slack :), i.e.
\begin{align} \text{tr}(Y) &= 22 \\ \det( I+Y) &= 144 \\ \frac{\mathrm{det}(I+Y)}{\{I+Y\}_{1\times1}}&=\frac{144}{1+11}=12 \geq 4 \end{align} This is complementary slackness at work: a non-zero Lagrange multiplier forces its inequality constraint to be tight (equality), while a constraint that is slack at the optimum forces its multiplier to zero.
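As a sanity check, the $2\times2$ example can be handed to a generic solver. A sketch using scipy's SLSQP method (optimizing over $X$ so that $Y = XX^T$ stays positive semidefinite by construction; the starting point is an arbitrary feasible guess):

```python
import numpy as np
from scipy.optimize import minimize

def Y_of(x):
    """Map a flat 4-vector to Y = X X^T, PSD by construction."""
    X = x.reshape(2, 2)
    return X @ X.T

constraints = [
    {"type": "ineq",
     "fun": lambda x: np.linalg.det(np.eye(2) + Y_of(x)) - 144.0},
    {"type": "ineq",
     "fun": lambda x: np.linalg.det(np.eye(2) + Y_of(x))
                      / (1.0 + Y_of(x)[0, 0]) - 4.0},
]

# Feasible starting point: Y = diag(40, 3.5), trace 43.5
x0 = np.array([np.sqrt(40.0), 0.0, 0.0, np.sqrt(3.5)])
res = minimize(lambda x: np.trace(Y_of(x)), x0,
               method="SLSQP", constraints=constraints)
print(res.fun)          # minimized trace
print(Y_of(res.x))      # the corresponding Y
```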