I would like to know the equation of an iso-density surface of a Gaussian mixture distribution.
Is such an iso-density surface a union of ellipsoids?
Let's say that this Gaussian mixture is in dimension $d$ and is composed of $G$ clusters whose respective proportions, means and covariance matrices are $\tau_k$, $m_k$ and $\Sigma_k$.
I know that if $G=1$, this distribution is Gaussian, and its iso-density surface satisfies $$ (\mathbf{x}-\mu)^T\Sigma^{-1}(\mathbf{x}-\mu)=-\operatorname{ln}\left(\alpha^2\right), $$ where $\alpha$ is the density expressed as a fraction of its maximum value.
I cannot manage to find a tractable equivalent result for a Gaussian mixture distribution.
For a given multivariate Gaussian distribution, the iso-density locus is an ellipsoid.
Let $\mathbf{x}\in\Bbb{R}^n$ be an $n$-dimensional random vector that is distributed normally around ${\mu}\in\Bbb{R}^n$ with covariance matrix $\Sigma\in\Bbb{S}_{++}^n$, where $\Bbb{S}_{++}^n$ denotes the set of all symmetric positive definite matrices with entries in $\Bbb{R}$. That is, $\mathbf{x}\sim\mathcal{N}(\mu,\Sigma)$. The probability density function (pdf) of $\mathbf{x}$ is given by $f\colon\Bbb{R}^n\to\Bbb{R}_+$, $$ f(\mathbf{x})=\frac{1}{(2\pi)^{\frac{n}{2}}\vert\Sigma\vert^{\frac{1}{2}}} \exp \Bigg( -\frac{1}{2}(\mathbf{x}-\mu)^\top\Sigma^{-1}(\mathbf{x}-\mu) \Bigg). $$ The locus of all points for which the pdf has a particular value, say $c\in\Bbb{R}_+$, is given by the equation $$ f(\mathbf{x})=c. $$ Now, let the above value ($c$) be equal to a fraction of the maximum density (which is attained at $\mathbf{x}=\mu$), i.e., $$ c = a f_{\max} = a f(\mu) = \frac{a}{(2\pi)^{\frac{n}{2}}\vert\Sigma\vert^{\frac{1}{2}}}, $$ where $a$ could be equal to $0.003$, for instance. Then $$ f(\mathbf{x})=c \implies \frac{1}{(2\pi)^{\frac{n}{2}}\vert\Sigma\vert^{\frac{1}{2}}} \exp \Bigg( -\frac{1}{2}(\mathbf{x}-\mu)^\top\Sigma^{-1}(\mathbf{x}-\mu) \Bigg) = \frac{a}{(2\pi)^{\frac{n}{2}}\vert\Sigma\vert^{\frac{1}{2}}}, $$ or $$ \exp \Bigg( -\frac{1}{2}(\mathbf{x}-\mu)^\top\Sigma^{-1}(\mathbf{x}-\mu) \Bigg) =a \implies (\mathbf{x}-\mu)^\top\Sigma^{-1}(\mathbf{x}-\mu) = -2\ln(a), $$ or $$ (\mathbf{x}-\mu)^\top\Sigma^{-1}(\mathbf{x}-\mu) = \ln\Big(\frac{1}{a^2}\Big), $$ which is the equation of an ellipsoid. Furthermore, by performing an eigendecomposition of $\Sigma$, we have $$ \Sigma = U\Lambda U^\top, $$ where $U$ is an orthogonal $n\times n$ matrix (whose columns hold the eigenvectors of $\Sigma$), and $\Lambda = \operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ is a diagonal $n\times n$ matrix consisting of the eigenvalues of $\Sigma$.
Then, using the fact that $U$ is orthogonal (i.e., $U^{-1}=U^\top$), we have $$ \Sigma^{-1} = U\Lambda^{-1}U^\top, \qquad\text{so}\qquad (\mathbf{x}-\mu)^\top\Sigma^{-1}(\mathbf{x}-\mu) = \Big(U^\top(\mathbf{x}-\mu)\Big)^\top \Lambda^{-1} \Big(U^\top(\mathbf{x}-\mu)\Big). $$ By letting $\mathbf{y}=U^\top(\mathbf{x}-\mu)$, the ellipsoid equation can be rewritten as $$ \mathbf{y}^\top \Lambda^{-1} \mathbf{y} = \ln\Big(\frac{1}{a^2}\Big) \implies \sum_{i=1}^{n}\Bigg(\frac{y_i}{\sqrt{\lambda_i}}\Bigg)^2 = \ln\Big(\frac{1}{a^2}\Big). $$
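To make the derivation concrete, here is a small numerical sketch in Python. The parameters ($\mu$, $\Sigma$, $a$) are made up for illustration; the code parametrises the level-set ellipse of a 2-D Gaussian via the eigendecomposition above and checks that every point on it has density $a\, f_{\max}$.

```python
import numpy as np

# Hypothetical example parameters (not from the post): a 2-D Gaussian.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
a = 0.003  # fraction of the peak density, as in the derivation

# Eigendecomposition Sigma = U Lambda U^T (eigh returns eigenvalues
# in ascending order, eigenvectors as columns of U).
lam, U = np.linalg.eigh(Sigma)

# Squared "radius" of the level set: (x-mu)^T Sigma^{-1} (x-mu) = ln(1/a^2)
r2 = np.log(1.0 / a**2)

# Parametrise the ellipse in eigen-coordinates, y_i = sqrt(r2*lambda_i)*(...),
# so that sum_i (y_i / sqrt(lambda_i))^2 = r2, then map back via x = mu + U y.
t = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sqrt(r2 * lam)[:, None] * np.vstack([np.cos(t), np.sin(t)])
x = mu[:, None] + U @ y

def gauss_pdf(x, mu, Sigma):
    """Multivariate normal pdf evaluated at the columns of x."""
    n = len(mu)
    d = x - mu[:, None]
    q = np.sum(d * np.linalg.solve(Sigma, d), axis=0)  # quadratic forms
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * q)

# Check: every point on the curve has density a * f_max.
f_max = 1.0 / np.sqrt((2.0 * np.pi) ** 2 * np.linalg.det(Sigma))
assert np.allclose(gauss_pdf(x, mu, Sigma), a * f_max)
```

The same parametrisation works in any dimension $n$: sample the unit sphere instead of the unit circle and scale each coordinate by $\sqrt{r^2 \lambda_i}$.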
For a mixture of Gaussians, you can write the analogous level-set equation $\sum_{k=1}^{G}\tau_k f_k(\mathbf{x}) = c$, but the sum of exponentials does not reduce to a single quadratic form, so the iso-density surface is not, in general, a union of ellipsoids; it is usually traced numerically. When the components are well separated, however, the surface near each mode is approximately the iso-density ellipsoid of that component alone.
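Since no closed form exists for the mixture, a practical approach is to evaluate the mixture density on a grid and extract the level set numerically (e.g., with a contouring routine). A minimal sketch, with made-up two-component parameters:

```python
import numpy as np

def gmm_pdf(X, taus, mus, Sigmas):
    """Mixture density sum_k tau_k * N(x; mu_k, Sigma_k) at the rows of X."""
    total = np.zeros(X.shape[0])
    for tau, mu, Sigma in zip(taus, mus, Sigmas):
        n = len(mu)
        d = X - mu
        q = np.sum((d @ np.linalg.inv(Sigma)) * d, axis=1)  # quadratic forms
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(Sigma))
        total += tau * norm * np.exp(-0.5 * q)
    return total

# Hypothetical two-component mixture in 2-D (illustrative values only).
taus = [0.4, 0.6]
mus = [np.array([0.0, 0.0]), np.array([3.0, 0.0])]
Sigmas = [np.eye(2), np.array([[1.5, 0.3], [0.3, 0.8]])]

# Evaluate the density on a grid; the iso-density surface {x : f(x) = c}
# can then be traced by any contouring routine.
xs, ys = np.meshgrid(np.linspace(-4, 7, 220), np.linspace(-4, 4, 160))
pts = np.column_stack([xs.ravel(), ys.ravel()])
F = gmm_pdf(pts, taus, mus, Sigmas).reshape(xs.shape)

c = 0.05 * F.max()  # level chosen at 5% of the (grid) peak density
mask = F >= c       # region enclosed by the iso-density surface
```

Depending on $c$, the level set may be one connected curve enclosing both modes or two separate closed curves, which is exactly why no single ellipsoid equation can describe it.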