Converting nonlinear matrix inequality to an LMI


I am fairly new to semidefinite programming (SDP). I have the following semidefinite program in matrix $X$ and vector $p$

$$\begin{array}{ll} \text{minimize} & \|Np\|^2\\ \text{subject to} & M(X,p) \prec 0\\ & X = X^{T} \succ 0\end{array}$$

with

$$M(X,p) := cX + ((A-Bp^{T}N^{T})X)+((A-Bp^{T}N^{T})X)^{T}$$

where $c > 0$ is a scalar. I know that the simplest and most straightforward way to solve this problem is to recast $M(X,p)$ with a suitable change of variables that leads to an LMI (see this book, in particular by imposing $Bp^{T}N^{T}X=t$ with $t$ a new variable).

In this way I obtain

$$cX + AX + XA^{T} - t - t^{T} \prec 0,$$

which is linear in $X$ and $t$.
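The algebra behind this substitution can be checked numerically. The sketch below uses randomly generated placeholder data (the particular $A$, $B$, $N$, $p$, $c$, $X$ are illustrative, not from the question) and verifies that $M(X,p)$ coincides with the linearized expression once $t = Bp^{T}N^{T}X$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Illustrative problem data (placeholders, not from the question)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
N = rng.standard_normal((n, n))
p = rng.standard_normal((n, 1))
c = 0.5

# A symmetric positive definite X
X = rng.standard_normal((n, n))
X = X @ X.T + n * np.eye(n)

# Original nonlinear form: M(X,p) = cX + (A - B p^T N^T) X + ((A - B p^T N^T) X)^T
Acl = A - B @ p.T @ N.T
M_nonlinear = c * X + Acl @ X + (Acl @ X).T

# Linearized form after the substitution t = B p^T N^T X
t = B @ p.T @ N.T @ X
M_linear = c * X + A @ X + X @ A.T - t - t.T

assert np.allclose(M_nonlinear, M_linear)
```

The check confirms that, for fixed $p$, the substitution is an exact rewriting rather than a relaxation; the difficulty lies only in treating $t$ as an independent variable.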

Unfortunately, in my particular case I cannot use this substitution. Is there a way to recast this nonlinear matrix inequality as an LMI while keeping both $p$ and $X$ as decision variables?

Alternatively, if the question above has no answer: is there a different change of variables from the one I proposed (i.e. $Bp^{T}N^{T}X=t$) that recasts this inequality as an LMI? Thank you.