Assume that I have a multivariate Gaussian distribution $\mathcal{N}(x;\mu,\Sigma)$ in $D$-dimensional space parameterized by a mean vector $\mu \in \mathbb{R}^D$ and a covariance matrix $\Sigma \in \mathbb{R}^{D \times D}$. Assume further that I have a point $p\in\mathbb{R}^D$. I want to find out whether this point lies within two standard deviations of the multivariate Gaussian distribution given its covariance matrix.
My current idea would be to:
- Evaluate $y=\mathcal{N}(x;\mathbf{0}_D,\mathbf{I}_D)$ at $x=[2,0,...,0]^T$ where $\mathbf{0}_D$ and $\mathbf{I}_D$ are a vector of zeroes and the identity matrix in $D$ dimensions;
- Normalize $y$ by dividing it by the peak density: $\bar{y}=y/\mathcal{N}(\mathbf{0}_D;\mathbf{0}_D,\mathbf{I}_D)$;
- Then evaluate the analogously normalized density at $p$: $\bar{z}=\mathcal{N}(p;\mu,\Sigma)/\mathcal{N}(\mu;\mu,\Sigma)$;
- If $\bar{z} \geq \bar{y}$, then $p$ is within two standard deviations of $\mu$ (a higher normalized density means $p$ lies closer to the mean).
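The steps above can be sketched as follows (a minimal NumPy sketch; the helper `gauss_pdf` and the function name `within_two_sigma` are illustrative names, not from any library):

```python
import numpy as np

def gauss_pdf(x, mu, Sigma):
    """Multivariate Gaussian density N(x; mu, Sigma)."""
    D = len(mu)
    diff = x - mu
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** D * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff)

def within_two_sigma(p, mu, Sigma):
    """Density-ratio test described above: p is within two standard
    deviations iff its normalized density is at least the normalized
    density of a standard Gaussian two units from its mean."""
    D = len(mu)
    # Reference point x = [2, 0, ..., 0] for the standard Gaussian.
    x = np.zeros(D)
    x[0] = 2.0
    zero, eye = np.zeros(D), np.eye(D)
    y_bar = gauss_pdf(x, zero, eye) / gauss_pdf(zero, zero, eye)
    # Normalized density of the target Gaussian at p.
    z_bar = gauss_pdf(p, mu, Sigma) / gauss_pdf(mu, mu, Sigma)
    return z_bar >= y_bar
```

Note that the normalization constants cancel in the ratio, so $\bar{y}$ works out to $e^{-2}$ regardless of $D$.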
However, having to evaluate a standard multivariate Gaussian first seems a bit inelegant. Is there a more elegant way of determining whether a point lies within two standard deviations of a Gaussian defined by a mean vector $\mu$ and a covariance matrix $\Sigma$?