Suppose $x\in\mathbb{R}^n$ is a random variable with mean $\mu$ and covariance $\Sigma$. Consider a stochastic convex optimization problem with chance constraints, i.e. one in which there is a small but nonzero probability, $\Delta\leq 0.5$, of violating the constraints.
In all of the cases I've encountered so far, you assume that the constraint space, $\mathcal{X}$, is a polytope, meaning it can be written as
$$ \mathcal{X} \triangleq \bigcap_{j=1}^{M} \ \{x:\alpha_j^\intercal x \leq \beta_j\} $$
Qualitatively, this represents a finite intersection of linear inequality constraints, which is a convex region. In 2D, this is simply a convex polygon with at most $M$ sides. For example, if $M = 3$, the intersection of the three half-planes could form a triangle; if $M = 4$, a quadrilateral; and so on. The reason people assume the constraint space is a convex polytope is because, using Boole's inequality (which gives an upper bound on the probability of a union of events), the chance constraint can be conservatively enforced as
$$ \begin{align} \text{Pr}(x\notin&\,\mathcal{X}) \leq \Delta\\ &\Uparrow\\ \text{Pr}(\alpha_j^\intercal x \leq \beta_j) &\geq 1 - \delta_j, \ \forall j = 1,\dots,M\\ \sum_{j=1}^{M} \delta_j &\leq \Delta, \end{align} $$
where the joint probability of violating the constraints is split up into the individual probabilities of violating each $j$th constraint (note this split is sufficient but not necessary, so it is conservative). This is extremely useful, because the scalar $\alpha_j^\intercal x$ is a random variable with mean $\alpha_j^\intercal \mu$ and variance $\alpha_j^\intercal \Sigma \alpha_j$. Assuming $x$ is Gaussian, this probability can be written in terms of the standard normal CDF ($\Phi$) as
$$ \Phi\Bigg[\frac{\beta_j - \alpha_j^\intercal \mu}{\sqrt{\alpha_j^\intercal \Sigma \alpha_j}} \Bigg] \geq 1 - \delta_j \Rightarrow \alpha_j^\intercal \mu + \Phi^{-1}(1-\delta_j)\,\|\Sigma^{1/2} \alpha_j\| \leq \beta_j, $$ assuming $\Sigma \succ 0$, i.e. the covariance matrix is positive definite (so that $\sqrt{\alpha_j^\intercal \Sigma \alpha_j} = \|\Sigma^{1/2}\alpha_j\| > 0$). Since $\delta_j \leq 0.5$ gives $\Phi^{-1}(1-\delta_j) \geq 0$, the above is a second-order cone constraint, and the resulting optimization problem is an SOCP.
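As a quick numerical sanity check of the tightening above (a sketch with made-up values for $\mu$, $\Sigma$, $\alpha_j$, $\beta_j$, $\delta_j$), one can verify that when the deterministic constraint holds, the sampled chance constraint holds as well:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative problem data (made up for this example)
mu = np.array([0.5, 0.2])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])      # positive definite covariance
alpha = np.array([1.0, 1.0])
beta = 2.0
delta = 0.05                          # per-constraint risk

# Deterministic tightening:
#   alpha^T mu + Phi^{-1}(1 - delta) * ||Sigma^{1/2} alpha|| <= beta
std = np.sqrt(alpha @ Sigma @ alpha)  # equals ||Sigma^{1/2} alpha||
lhs = alpha @ mu + norm.ppf(1 - delta) * std
print("tightened constraint holds:", lhs <= beta)

# Monte Carlo check of Pr(alpha^T x <= beta) under x ~ N(mu, Sigma)
x = rng.multivariate_normal(mu, Sigma, size=200_000)
p = np.mean(x @ alpha <= beta)
print("empirical probability:", p, ">= 1 - delta:", p >= 1 - delta)
```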
However, what if the constraint space is not a polytope (or polygon), but rather a cone, specifically a convex second-order cone? In that case, $\mathcal{X}$ would be defined as
$$ \mathcal{X} = \{x : \|Ax+b\|_2 \leq c^\intercal x + d\}. $$ Is it possible, in any way, to calculate or bound $\text{Pr}(x\notin\mathcal{X})$, as in the case of a polytope? You would presumably have to make some kind of approximation or relaxation, such as Markov's or Chebyshev's inequality, to get rid of the probability and turn it into an expectation. However, I can't seem to figure out a solution. For my purposes, the cone is centered at the origin, so $b = d = 0$, if that makes it simpler to work with. This type of constraint is more natural in a physical setting, especially in controls, where you want to steer distributions from some initial $x\sim\mathcal{N}(\mu_0,\Sigma_0)$ to the origin, for example.
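While a closed form for $\text{Pr}(x\notin\mathcal{X})$ seems out of reach, the probability can at least be estimated by sampling. A minimal sketch, with made-up $A$ and $c$ and taking $b = d = 0$ as in my setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up problem data; b = d = 0 as in the question
A = np.eye(2)
c = np.array([0.0, 2.0])             # cone opens along the second axis
mu0 = np.array([0.0, 1.0])
Sigma0 = 0.05 * np.eye(2)

# Monte Carlo estimate of Pr(x not in X), X = {x : ||Ax|| <= c^T x}
x = rng.multivariate_normal(mu0, Sigma0, size=500_000)
violated = np.linalg.norm(x @ A.T, axis=1) > x @ c
print("estimated Pr(x not in X):", violated.mean())
```

This only evaluates a fixed distribution, of course; it does not give a tractable constraint for optimization, which is what the question is really after.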
I haven't found any other literature on this subject, so if anyone has any insights, it would be appreciated!
Enforcing the chance constraint $\text{Pr}(\left\lVert Ax + b \right\rVert \leq c^{\text{T}} x + d) \geq 1 - \Delta$ exactly is known to be computationally intractable in general; see this paper for reference. Section 2.2 of the above paper also provides a tractable approximation of this chance constraint for your setting.
Edit: Section 6.1 of this paper shows that SOCP-based chance constraints with normal random variables may be nonconvex.
Edit 2: Another approach to constructing a convex approximation of the chance constraint is scenario approximation, which can yield feasibility guarantees. In this approach, you sample the random variable many times and replace the chance constraint by the set of sampled constraints, which are enforced deterministically. By tuning the number of samples, you can obtain solutions that satisfy the original chance constraint with high probability.
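To make the scenario idea concrete, here is a minimal sketch (all data made up, no solver involved): the decision is a shift $u$ applied to $x = u + w$ with $w \sim \mathcal{N}(0,\Sigma_0)$, and the chance constraint is replaced by the sampled constraints $\|A(u + w_i)\| \leq c^\intercal (u + w_i)$ for $i = 1,\dots,N$. The sketch only checks a candidate $u$ against the scenarios and estimates its out-of-sample violation probability; in practice $u$ would be chosen by solving the scenario program with a convex solver.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data (not from the post); b = d = 0
A = np.eye(2)
c = np.array([0.0, 2.0])
Sigma0 = 0.02 * np.eye(2)
N = 1_000                                    # number of scenarios

w = rng.multivariate_normal(np.zeros(2), Sigma0, size=N)

def scenario_feasible(u):
    """Check u against every sampled scenario (deterministic constraints)."""
    x = u + w
    return bool(np.all(np.linalg.norm(x @ A.T, axis=1) <= x @ c))

u = np.array([0.0, 1.0])                     # candidate decision
print("feasible for all scenarios:", scenario_feasible(u))

# Out-of-sample estimate of the violation probability of this u
w_test = rng.multivariate_normal(np.zeros(2), Sigma0, size=200_000)
x = u + w_test
viol = np.mean(np.linalg.norm(x @ A.T, axis=1) > x @ c)
print("estimated violation probability:", viol)
```

The number of scenarios $N$ needed for a probabilistic feasibility guarantee depends on $\Delta$, the confidence level, and the dimension of the decision variable; see the scenario-approach literature for the exact sample-size bounds.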