I am currently trying to compute the first two moments of the multivariate normal distribution. I got an extremely helpful answer showing that $\mathbb{E}[\mathbf{x}]=\mathbf{m}$, with $\mathbf{x} \sim \mathcal{N}(\mathbf{m},\mathbf{\Sigma})$, in this thread: multivariate normal moment derivation
Now for the variance: I have to show that $\operatorname{Var}(\mathbf{x}) = \mathbf{\Sigma}$.
$$ \operatorname{Var}(\mathbf{x}) = \mathbb{E}[\mathbf{x}\mathbf{x}^T]-\mathbb{E}[\mathbf{x}]\,\mathbb{E}[\mathbf{x}]^T = \mathbb{E}[\mathbf{x}\mathbf{x}^T] - \mathbf{m}\mathbf{m}^T\\ \mathbb{E}[\mathbf{x}\mathbf{x}^T]= \int_{\mathbb{R}^d} \mathbf{x}\mathbf{x}^T\,\frac{1}{(2\pi)^{d/2}(\det\mathbf{\Sigma})^{1/2}}\exp \left( -\tfrac{1}{2} (\mathbf{x}-\mathbf{m})^T\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{m})\right)d \mathbf{x}\\ $$
Now substitute $\mathbf{y}=\mathbf{x}-\mathbf{m}$:
$$ \int_{\mathbb{R}^d} (\mathbf{y}+\mathbf{m})(\mathbf{y}+\mathbf{m})^T\underbrace{\frac{1}{(2\pi)^{d/2}(\det\mathbf{\Sigma})^{1/2}}}_{=:K}\exp \left( -\tfrac{1}{2} \mathbf{y}^T\mathbf{\Sigma}^{-1}\mathbf{y}\right)d \mathbf{y}\\ =\int_{\mathbb{R}^d} \mathbf{y}\mathbf{y}^T K\exp \left( -\tfrac{1}{2} \mathbf{y}^T\mathbf{\Sigma}^{-1}\mathbf{y}\right)d \mathbf{y}\\ + \underbrace{\int_{\mathbb{R}^d} \left(\mathbf{y}\mathbf{m}^T+\mathbf{m}\mathbf{y}^T\right) K\exp \left( -\tfrac{1}{2} \mathbf{y}^T\mathbf{\Sigma}^{-1}\mathbf{y}\right)d \mathbf{y}}_{=0}\\ +\mathbf{m}\mathbf{m}^T\underbrace{\int_{\mathbb{R}^d} K\exp \left( -\tfrac{1}{2} \mathbf{y}^T\mathbf{\Sigma}^{-1}\mathbf{y}\right)d \mathbf{y}}_{=1}\\ $$
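The two underbraced claims can be sanity-checked numerically (a rough sketch; the $\mathbf{\Sigma}$ below is a made-up example). Sampling $\mathbf{y}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})$, the density already integrates to $1$, and the cross-term integral equals $\mathbb{E}[\mathbf{y}]$, which a sample mean estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example covariance (symmetric positive definite).
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Draw samples y ~ N(0, Sigma); the cross-term integral is E[y],
# estimated here by the sample mean.
Y = rng.multivariate_normal(mean=np.zeros(2), cov=Sigma, size=200_000)

print("E[y] estimate:", Y.mean(axis=0))  # should be close to (0, 0)
```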
I think this is a dead end; it seems necessary to somehow move the $\mathbf{\Sigma}$ out of the exponential function, maybe by normalization? Any help is greatly appreciated!
warm regards noclue
Computations involving multivariate densities can quickly become a mess. Your best bet is to reduce to the case of independent normals by applying the same transformations as you did in deriving $\mathbb E(\mathbf X)=\mathbf m$.
So define $\mathbf Y:=\mathbf{X-m}$, and set $\mathbf Y=V\mathbf Z$, where we decompose the covariance matrix $\mathbf \Sigma$ of $\mathbf X$ as $\mathbf\Sigma=V\Lambda V^T$, with $V$ an orthogonal $d\times d$ matrix of eigenvectors and $\Lambda$ the diagonal matrix of the corresponding eigenvalues. Then, using matrix properties of expectation, $$\begin{align} \operatorname {Var}(\mathbf X) &:= \mathbb E\left[(\mathbf X -\mathbf m)(\mathbf X-\mathbf m)^T\right]\\&=\mathbb E\left[\mathbf Y\mathbf Y^T \right]\\&= \mathbb E\left[(V\mathbf Z)(V\mathbf Z)^T\right]=\mathbb E(V\mathbf Z\mathbf Z^TV^T) = V\,\mathbb E(\mathbf Z\mathbf Z^T)\,V^T \end{align}$$ If you can show that $\mathbb E(\mathbf Z\mathbf Z^T)=\Lambda$, then you are done. You argue this similarly to before, observing that the joint density of $\mathbf Z=(Z_1,\ldots,Z_d)$ is a product of $d$ functions of the form $$f_k(z_k)=c_k\exp\left(-z_k^2/(2\lambda_k)\right),$$ which implies that $Z_1,\ldots,Z_d$ are independent normals with mean $0$ and variances $\lambda_1,\ldots,\lambda_d$.
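Spelling out that last step entrywise, using the independence and the zero means of the $Z_k$:

$$\mathbb E(Z_jZ_k)=\begin{cases}\mathbb E(Z_j)\,\mathbb E(Z_k)=0, & j\neq k,\\ \operatorname{Var}(Z_k)=\lambda_k, & j=k,\end{cases}$$

so $\mathbb E(\mathbf Z\mathbf Z^T)=\Lambda$, and hence $\operatorname{Var}(\mathbf X)=V\Lambda V^T=\mathbf\Sigma$, as required.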
Note: this line of reasoning yields a more concise proof that $\mathbb E(\mathbf X)=\mathbf m$, since $$\mathbb E(\mathbf X)=\mathbb E(\mathbf Y+\mathbf m)=\mathbb E(\mathbf Y) + \mathbf m = \mathbb E(V\mathbf Z) + \mathbf m =V\mathbb E(\mathbf Z)+\mathbf m = V\mathbf 0 +\mathbf m = \mathbf m ;$$ by using properties of expectation we avoid dealing with integration explicitly.
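To see the whole construction in action numerically (a sketch assuming NumPy; the $\mathbf m$ and $\mathbf\Sigma$ below are arbitrary made-up values), we can build $\mathbf X=\mathbf m+V\mathbf Z$ from independent $Z_k\sim\mathcal N(0,\lambda_k)$ and check that the sample mean and covariance approach $\mathbf m$ and $\mathbf\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up target parameters (illustrative only).
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + 3.0 * np.eye(3)   # symmetric positive definite
m = np.array([1.0, -2.0, 0.5])

# Eigendecomposition Sigma = V @ diag(lam) @ V.T (V orthogonal since Sigma is symmetric).
lam, V = np.linalg.eigh(Sigma)

# Independent Z_k ~ N(0, lam_k); then X = m + V Z (each row is one sample).
n = 200_000
Z = rng.standard_normal((n, 3)) * np.sqrt(lam)
X = m + Z @ V.T

print("max |mean - m|    :", np.abs(X.mean(axis=0) - m).max())
print("max |cov  - Sigma|:", np.abs(np.cov(X, rowvar=False) - Sigma).max())
```

Both printed errors shrink toward $0$ as the sample size grows, consistent with $\mathbb E(\mathbf X)=\mathbf m$ and $\operatorname{Var}(\mathbf X)=\mathbf\Sigma$.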