Generating two correlated random numbers: why is the volatility 1 when using the Cholesky decomposition?


I am trying to use the Cholesky decomposition to generate two correlated random numbers by simulating two uncorrelated distributions. The covariance matrix should be $$ C= \left[\begin{matrix} \sigma_1^2 & \rho\cdot\sigma_1\cdot\sigma_2 \\ \rho\cdot\sigma_1\cdot\sigma_2 & \sigma_2^2 \\ \end{matrix}\right] $$ where $\rho$ is the correlation between the two correlated distributions. Letting $$ C= LL^T $$ and solving the equation, I got: $$ L= \left[\begin{matrix} \sigma_1 & 0 \\ \sigma_2\cdot\rho & \sigma_2\cdot\sqrt{1-\rho^2} \\ \end{matrix}\right] $$ But in practice, we use $$ L= \left[\begin{matrix} 1 & 0 \\ \rho & \sqrt{1-\rho^2} \\ \end{matrix}\right] $$ to generate the two correlated distributions, and it makes sense. Why does this happen? In this case, do we just assume $\sigma_1$ and $\sigma_2$ to be 1? In fact, they are not equal to 1 in my application.
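To make the relationship between the two factors concrete, here is a small numpy sketch (with made-up values for $\sigma_1$, $\sigma_2$, $\rho$) showing that scaling the correlation-only factor by the standard deviations recovers the full Cholesky factor:

```python
import numpy as np

# Hypothetical parameters, not from the question
sigma1, sigma2, rho = 2.0, 0.5, 0.6

# Full Cholesky factor of the covariance matrix C
L_full = np.array([
    [sigma1, 0.0],
    [sigma2 * rho, sigma2 * np.sqrt(1 - rho**2)],
])

# Correlation-only factor, as commonly used in practice
L_corr = np.array([
    [1.0, 0.0],
    [rho, np.sqrt(1 - rho**2)],
])

# Scaling the rows of L_corr by the standard deviations
# recovers the full factor: diag(sigma1, sigma2) @ L_corr == L_full
D = np.diag([sigma1, sigma2])
print(np.allclose(D @ L_corr, L_full))  # True
```

So the unit-variance factor is not a different decomposition; it is the full factor with the standard deviations pulled out into a separate diagonal scaling.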


You may wish to consider the following approach, if I understand your question correctly, to help work out the issue.

Define the following random variables

$$ x_{1} = \eta_{1} \\ x_{2} = \alpha x_{1} + \eta_{2} \\ $$

with normally distributed, uncorrelated noise terms $$ \eta_{1} \sim \mathscr{N}\left(0,\sigma_{1}^{2}\right) \\ \eta_{2} \sim \mathscr{N}\left(0,\sigma_{2}^{2}\right) \\ $$

and $\alpha$ a constant scaling factor which makes $x_{1}$ and $x_{2}$ linearly correlated. Note that $\alpha$ is not the same as the correlation coefficient, $\rho$.

Compute the entries of the covariance matrix:

$$ E\left[x_{1}x_{1}\right]=E \left[\eta_{1}^{2} \right]=\sigma_{1}^{2} $$

$$ E\left[x_{1}x_{2}\right]=E \left[\eta_{1}\left(\alpha \eta_{1} + \eta_{2}\right)\right] $$

$$ E\left[x_{1}x_{2}\right]=E \left[\alpha \eta_{1}^{2} + \eta_{1}\eta_{2}\right]=\alpha \sigma_{1}^{2} $$

$$ E\left[x_{2}x_{2}\right]=E \left[\left(\alpha^{2} \eta_{1}^{2} + 2\alpha\eta_{1}\eta_{2}+\eta_{2}^{2}\right)\right]=\alpha^{2}\sigma_{1}^{2}+\sigma_{2}^{2} $$

Now the correlation coefficient

$$ \rho = \frac{E\left[x_{1}x_{2}\right]}{\sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]}}=\frac{\alpha \sigma_{1}^{2}}{\sqrt{\sigma_{1}^{2}\left(\alpha^{2}\sigma_{1}^{2}+\sigma_{2}^{2}\right)}}=\frac{\alpha \sigma_{1}}{\sqrt{\left(\alpha^{2}\sigma_{1}^{2}+\sigma_{2}^{2}\right)}} $$
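The derivation above can be checked by simulation. The sketch below (with arbitrary illustrative values for $\sigma_1$, $\sigma_2$, $\alpha$) draws the two noise terms, builds $x_1$ and $x_2$ as defined, and compares the sample correlation against the closed-form $\rho$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, chosen arbitrarily
sigma1, sigma2, alpha = 1.5, 0.8, 0.7
n = 1_000_000

# Uncorrelated noise terms eta1 ~ N(0, sigma1^2), eta2 ~ N(0, sigma2^2)
eta1 = rng.normal(0.0, sigma1, n)
eta2 = rng.normal(0.0, sigma2, n)

# The model: x1 = eta1, x2 = alpha * x1 + eta2
x1 = eta1
x2 = alpha * x1 + eta2

# Sample correlation vs. the closed-form rho derived above
rho_sample = np.corrcoef(x1, x2)[0, 1]
rho_formula = alpha * sigma1 / np.sqrt(alpha**2 * sigma1**2 + sigma2**2)
print(rho_sample, rho_formula)  # the two values agree closely
```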

The covariance matrix is then

$$ \Sigma= \begin{bmatrix} E\left[x_{1}x_{1}\right] & E\left[x_{1}x_{2}\right] \\ E\left[x_{1}x_{2}\right] & E\left[x_{2}x_{2}\right] \\ \end{bmatrix} $$

Making the substitution for $E\left[x_{1}x_{2}\right]$ one obtains:

$$ \Sigma= \begin{bmatrix} E\left[x_{1}x_{1}\right] & \rho \sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]} \\ \rho \sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]} & E\left[x_{2}x_{2}\right] \\ \end{bmatrix} $$

With the provided decomposition for $L$:

$$ L= \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ \rho \sqrt{E\left[x_{2}x_{2}\right]} & \sqrt{E\left[x_{2}x_{2}\right]}\sqrt{1-\rho^{2}} \\ \end{bmatrix} $$

Comparing to the original model for the random variables, we have:

$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \alpha & 1 \\ \end{bmatrix} \begin{bmatrix} \eta_{1} \\ \eta_{2} \\ \end{bmatrix} $$

Using $ E\left[x_{1}x_{2}\right]=E \left[\alpha \eta_{1}^{2} + \eta_{1}\eta_{2}\right]=\alpha \sigma_{1}^{2} $ together with the covariance entries computed above, we have that

$$ \alpha E\left[x_{1}x_{1}\right] = \rho \sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]} $$

which leads to:

$$ \alpha = \rho \sqrt{\frac{E\left[x_{2}x_{2}\right]}{E\left[x_{1}x_{1}\right]}} $$

Now rewrite the model in terms of unit-variance noise terms, $\hat{\eta}$, scaled by the standard deviations $\sigma_{1}$ and $\sigma_{2}$ via the following matrix expression:

$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \alpha & 1 \\ \end{bmatrix} \begin{bmatrix} \sigma_{1} & 0 \\ 0 & \sigma_{2} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$

using the results from the earlier derivations:

$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \rho \sqrt{\frac{E\left[x_{2}x_{2}\right]}{E\left[x_{1}x_{1}\right]}} & 1 \\ \end{bmatrix} \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ 0 & \sqrt{E\left[x_{2}x_{2}\right]-\rho^{2}E\left[x_{2}x_{2}\right]} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$

$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \rho \sqrt{\frac{E\left[x_{2}x_{2}\right]}{E\left[x_{1}x_{1}\right]}} & 1 \\ \end{bmatrix} \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ 0 & \sqrt{E\left[x_{2}x_{2}\right]}\sqrt{1-\rho^{2}} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$

$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ \rho \sqrt{E\left[x_{2}x_{2}\right]} & \sqrt{E\left[x_{2}x_{2}\right]}\sqrt{1-\rho^{2}} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$

which is

$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = L \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$
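As a numerical sanity check of this final factorization, the sketch below (with made-up values for the variances and $\rho$) builds the covariance matrix, takes its Cholesky factor $L$, maps unit-variance standard normals through it, and confirms that the sample covariance recovers $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target covariance built from hypothetical standard deviations and rho
sigma1, sigma2, rho = 2.0, 0.5, 0.6
Sigma = np.array([
    [sigma1**2, rho * sigma1 * sigma2],
    [rho * sigma1 * sigma2, sigma2**2],
])

# Lower-triangular Cholesky factor, L @ L.T == Sigma
L = np.linalg.cholesky(Sigma)

# Draw unit-variance standard normals eta_hat, then map x = L @ eta_hat
n = 1_000_000
eta_hat = rng.standard_normal((2, n))
x = L @ eta_hat

# Sample covariance of x should be close to Sigma
S_hat = np.cov(x)
print(S_hat)
```

Note that the inputs `eta_hat` must have unit variance; $L$ itself carries all of the scaling, which is exactly why the correlation-only factor works only when the target variances are 1.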

Please note: $\sqrt{E\left[x_{2}x_{2}\right]}\ne \sigma_{2}$. In other words, the variance of $\eta_{2}$ (the second noise random variable), $\sigma_{2}^{2}$, is not the same as the $\left(2,2\right)$ entry of the covariance matrix $\Sigma$, which is $\alpha^{2}\sigma_{1}^{2}+\sigma_{2}^{2}$.

I hope this helps.