Background
I am working on a stochastic optimal control problem in which $x$ represents the state of a dynamic system. The optimal state trajectory must be found but, due to the stochastic nature of the problem, the state variables are random variables. So, I assume that they can be approximated by a multivariate normal distribution $x \sim \mathcal{N}(\mu, P)$ and use the first two moments of the distribution (mean $\mu$ and covariance $P$) to set up a deterministic optimization problem.
System Dynamics
The state is subject to linear time-varying stochastic dynamics $$ dx = (A(t) x + B(t) u) dt + G dw $$ where $u$ is the control (another optimization variable) and $w$ is a Wiener process with diffusion coefficient $G$. The control is assumed to be composed of a feedforward signal $\nu(t)$ and a state feedback that compensates deviations from the mean. $$ u = \nu + K (x - \mu) $$ Both $\nu$ and $K$ must be optimized.
Rather than considering the stochastic differential equation, one can deal with the ODEs for the mean and the covariance of the state \begin{align} \dot{\mu} &= A \mu + B \nu \\ \dot{P} &= (A + BK) P + P (A + BK)^T + G G^T \end{align} By introducing the change of variables $Z = K P$ and optimizing $Z$ instead of $K$, the second ODE becomes linear in $P$ and $Z$: $$ \dot{P} = A P + P A^T + B Z + Z^T B^T + G G^T $$
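For concreteness, here is a minimal numerical sketch of these moment equations (all matrices are toy values I made up for illustration, not my actual problem data) that forward-integrates the mean and covariance ODEs with explicit Euler and checks that $P$ stays symmetric positive definite:

```python
import numpy as np

# Toy time-invariant data (assumed for illustration only)
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
G = 0.1 * np.eye(2)                       # diffusion coefficient
K = np.array([[-1.0, -1.0]])              # a stabilizing feedback gain
nu = lambda t: np.array([0.0])            # zero feedforward signal

dt, N = 1e-3, 1000
mu = np.array([1.0, 0.0])
P = 0.01 * np.eye(2)
for k in range(N):
    t = k * dt
    Acl = A + B @ K                        # closed-loop matrix drives the covariance
    mu = mu + dt * (A @ mu + B @ nu(t))    # mean dynamics (feedback term cancels)
    P = P + dt * (Acl @ P + P @ Acl.T + G @ G.T)

assert np.allclose(P, P.T)                 # covariance stays symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)   # and positive definite
```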
Finally, the differential constraints are converted into algebraic constraints via a collocation method (e.g., trapezoidal).
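To illustrate the transcription (on a hypothetical scalar ODE, not my actual dynamics), trapezoidal collocation replaces each differential constraint $\dot y = f(y, t)$ by a defect constraint between consecutive nodes:

```python
import numpy as np

def trapezoidal_defect(y0, y1, f0, f1, h):
    """Trapezoidal collocation defect: equals zero at a feasible discrete point.
    y0, y1: state at consecutive nodes; f0, f1: dynamics there; h: step size."""
    return y1 - y0 - 0.5 * h * (f0 + f1)

# Sanity check on a problem with a known solution, y' = -y, y(t) = exp(-t):
h = 0.1
y0, y1 = 1.0, np.exp(-h)
defect = trapezoidal_defect(y0, y1, -y0, -y1, h)
assert abs(defect) < 1e-4   # small local discretization error, O(h^3)
```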
Chance Constraints
The problem features some linear chance constraints that look like $$ \text{Pr}\left\{ c^T x \leq d \right\} \geq p $$ where $x$ is a vector of $n$ (random) variables to optimize, $c$ is a given coefficient vector of length $n$, $d$ is a given scalar, and $p$ represents the desired probability.
Rather than working with $c^T x$ directly, it is convenient to introduce a random variable with a standard normal distribution $y \sim \mathcal{N}(0, 1)$. Since $x$ is assumed Gaussian, any linear combination $c^T x$ is itself Gaussian, so it can be standardized by subtracting its mean and dividing by its standard deviation \begin{equation} y = \frac{c^T x - c^T \mu}{\sqrt{c^T P c}} \end{equation} So, the chance constraint can be expressed as \begin{equation} \text{Pr}\{c^T x \leq d\} = \text{Pr}\left\{y \leq \frac{d - c^T \mu}{\sqrt{c^T P c}}\right\} = \Phi\left(\frac{d - c^T \mu}{\sqrt{c^T P c}}\right) \geq p \end{equation} where $\Phi$ is the cdf of the standard normal distribution. By applying its inverse, i.e., taking the $p$-quantile $z_p = \Phi^{-1}(p)$ of the standard normal distribution, the constraint becomes \begin{equation} \frac{d - c^T \mu}{\sqrt{c^T P c}} \geq z_p \end{equation}
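A quick numerical check of this reformulation (with made-up values of $\mu$, $P$, $c$, $d$): the probability $\Pr\{c^T x \leq d\}$ estimated by sampling matches $\Phi\big((d - c^T\mu)/\sqrt{c^T P c}\big)$, and $z_p$ is just the inverse cdf:

```python
import numpy as np
from statistics import NormalDist  # stdlib standard-normal cdf / inverse cdf

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])                 # assumed mean (toy values)
P = np.array([[1.0, 0.3], [0.3, 2.0]])    # assumed covariance
c = np.array([1.0, -1.0])
d = 1.0

# Closed form: Pr{c^T x <= d} = Phi((d - c^T mu) / sqrt(c^T P c))
std = np.sqrt(c @ P @ c)
prob = NormalDist().cdf((d - c @ mu) / std)

# Monte Carlo estimate from samples of x ~ N(mu, P)
x = rng.multivariate_normal(mu, P, size=200_000)
prob_mc = np.mean(x @ c <= d)
assert abs(prob - prob_mc) < 5e-3

# The quantile form is equivalent to requiring prob >= p:
p = 0.95
z_p = NormalDist().inv_cdf(p)             # approx. 1.645 for p = 0.95
assert ((d - c @ mu) / std >= z_p) == (prob >= p)
```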
Chance Constraint as a (Convex?) Quadratic Constraint
So, I converted the probabilistic constraint into a deterministic one by replacing $x$ by its mean $\mu$ and covariance matrix $P$ and obtained $$ z_p \sqrt{c^T P c} + c^T \mu - d \leq 0 $$
The above constraint is supposed to be convex since $z_p > 0$ (assuming $p > 0.5$) and the covariance matrix $P$ is positive semi-definite (PSD). However, $P$ is an optimization variable and I am having trouble casting the constraint as a convex quadratic constraint or a second-order cone in Gurobi. Gurobi accepts only the following three families of convex quadratic constraints:
- $x^T Q x + q^T x \leq b$ where $Q$ is PSD (Convex quadratic constraint)
- $ x^T x \leq y^2 $ where $y \geq 0$ (Second-order cone constraint)
- $ x^T x \leq y z $ where $y \geq 0$ and $z \geq 0$ (Rotated second-order cone constraint)
I have tried to introduce an auxiliary variable $s \geq 0$ satisfying $$z_p \sqrt{c^T P c} \leq s \leq - c^T \mu + d$$ and then replace the constraint by the following pair of constraints $$ s = -c^T \mu + d $$ $$ z_p^2 \, c^T P c \leq s^2 $$ but the second one, rewritten as $z_p^2 \, c^T P c - s^2 \leq 0$, belongs to family 1: the left-hand side is linear in the entries of $P$, so the only quadratic term is $-s^2$, which means the associated $Q$ has a negative eigenvalue and the constraint is nonconvex.
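A small check with toy numbers confirms that this pair of constraints is indeed nonconvex: writing $v = c^T P c$ (which is linear in $P$), the set $\{(v, s) : z_p^2 v \leq s^2,\ s \geq 0\}$ contains two points whose midpoint is infeasible.

```python
z_p = 1.645                         # quantile for, e.g., p = 0.95

def feasible(v, s):
    """Membership test for the set {(v, s) : z_p^2 * v <= s^2, s >= 0}."""
    return s >= 0 and z_p**2 * v <= s**2

# Two feasible points...
p1 = (0.0, 0.0)                     # 0 <= 0
p2 = (1.0, 2.0)                     # z_p^2 * 1 = 2.706 <= 4
assert feasible(*p1) and feasible(*p2)

# ...whose midpoint violates the constraint, so the set is not convex.
mid = (0.5, 1.0)
assert not feasible(*mid)           # z_p^2 * 0.5 = 1.353 > 1
```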
A standard approach would be to decompose the PSD covariance matrix as $P = R^T R$ and obtain $$ z_p \| R c \| \leq - c^T \mu + d $$ which is a second-order cone constraint. However, I don't know how to obtain an expression for $R$ since the elements of $P$ are optimization variables and, even after introducing the entries of $R$ as additional variables, a constraint such as $P = R^T R$ would be nonconvex.
So, how should I cast the quadratic constraint and still retain a convex formulation if my optimization variables are the vector $\mu$ and the PSD matrix $P$?