I know how to generate a random observation from $N_p(0,I)$ (applying the Box-Muller transformation), but I was wondering how to simulate from $N_p(\mu,\Sigma)$ (assuming $\Sigma$ is positive definite).
I started with the spectral decomposition of $\Sigma$, i.e. $\Sigma=\Gamma D \Gamma'$, where $D=\mathrm{Diag}(\lambda_1,\ldots,\lambda_p)$. Let $Z\sim N_p(0,I)$. If I define $X=\Gamma D^{1/2} Z$, then $X \sim N_p(0,\Gamma D \Gamma')=N_p(0,\Sigma)$, and hence $\mu + X \sim N_p(\mu,\Sigma)$.
Then I came across the Cholesky decomposition: since $\Sigma$ is p.d., we can write $\Sigma$ uniquely as $LL'$, where $L$ is a lower triangular matrix with positive diagonal entries. My question is: why is the Cholesky decomposition preferred over the spectral decomposition (equivalently, the SVD, since $\Sigma$ is symmetric p.d.)? Thank you for your time and help.
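To make the spectral approach concrete, here is a minimal NumPy sketch (the $\mu$ and $\Sigma$ values are made up for illustration; `eigh` plays the role of $\Gamma$ and $D$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean and (positive definite) covariance for illustration
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

# Spectral decomposition: Sigma = Gamma D Gamma'
eigvals, Gamma = np.linalg.eigh(Sigma)
A = Gamma @ np.diag(np.sqrt(eigvals))  # A = Gamma D^{1/2}

# Each column of X is one draw from N_p(mu, Sigma)
Z = rng.standard_normal((3, 100_000))
X = mu[:, None] + A @ Z

# Sample moments should be close to mu and Sigma
print(X.mean(axis=1))
print(np.cov(X))
```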
You can take any matrix $A$ such that $$\Sigma = AA'$$ and set $X = \mu + AZ$.
Cholesky and spectral are both special cases.
Cholesky is cheap to compute, and computing $AZ$ is faster since $L$ is lower triangular, so the zero upper part can be skipped.
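A minimal sketch of the Cholesky route, using the same kind of made-up $\Sigma$ as an example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean and covariance for illustration
mu = np.zeros(3)
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

# Cholesky factor: Sigma = L L' with L lower triangular
L = np.linalg.cholesky(Sigma)

# Each column of X is one draw from N_p(mu, Sigma)
Z = rng.standard_normal((3, 100_000))
X = mu[:, None] + L @ Z

# Sample covariance should be close to Sigma
print(np.cov(X))
```

This is exactly the $X = \mu + AZ$ recipe above with $A = L$; the only difference from the spectral version is which factor of $\Sigma$ you use.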
The spectral decomposition is harder to compute, but once you have a routine for it, it is done once and for all. It is generally preferred for quasi-Monte Carlo, since it concentrates the variance in the leading columns (those associated with the largest eigenvalues), which speeds convergence. It is also more robust for matrices that are only barely positive definite, where the Cholesky factorization can fail numerically.
For plain Monte Carlo, both give draws from the same distribution, so the choice shouldn't matter.