I am trying to use the Cholesky decomposition to generate two correlated random numbers by simulating two uncorrelated distributions. The covariance matrix should be $$ C= \left[\begin{matrix} \sigma_1^2 & \rho\cdot\sigma_1\cdot\sigma_2 \\ \rho\cdot\sigma_1\cdot\sigma_2 & \sigma_2^2 \\ \end{matrix}\right] $$ where $\rho$ is the correlation between the two correlated distributions. Letting $$ C= LL^T $$ and solving the equation, I got: $$ L= \left[\begin{matrix} \sigma_1 & 0 \\ \sigma_2\cdot\rho & \sigma_2\cdot\sqrt{1-\rho^2} \\ \end{matrix}\right] $$ But in practice, we use $$ L= \left[\begin{matrix} 1 & 0 \\ \rho & \sqrt{1-\rho^2} \\ \end{matrix}\right] $$ to generate the two correlated distributions, and it seems to work. Why does this happen? Do we just assume $\sigma_1$ and $\sigma_2$ to be 1? In fact, they are not equal to 1 in my application.
Generating two correlated random numbers: why is the volatility set to 1 in the Cholesky decomposition?
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
You may wish to consider the following approach, if I understand your question correctly, to help work out the issue.
Define the following random variables
$$ x_{1} = \eta_{1} \\ x_{2} = \alpha x_{1} + \eta_{2} \\ $$
with normally distributed, uncorrelated noise terms $$ \eta_{1} \sim \mathscr{N}\left(0,\sigma_{1}^{2}\right) \\ \eta_{2} \sim \mathscr{N}\left(0,\sigma_{2}^{2}\right) \\ $$
and $\alpha$ a constant scaling factor which makes $x_{1}$ and $x_{2}$ linearly correlated. Note that $\alpha$ is not the same as the correlation coefficient, $\rho$.
Compute the entries of the covariance matrix:
$$ E\left[x_{1}x_{1}\right]=E \left[\eta_{1}^{2} \right]=\sigma_{1}^{2} $$
$$ E\left[x_{1}x_{2}\right]=E \left[\eta_{1}\left(\alpha \eta_{1} + \eta_{2}\right)\right] $$
$$ E\left[x_{1}x_{2}\right]=E \left[\alpha \eta_{1}^{2} + \eta_{1}\eta_{2}\right]=\alpha \sigma_{1}^{2} $$
$$ E\left[x_{2}x_{2}\right]=E \left[\left(\alpha^{2} \eta_{1}^{2} + 2\alpha\eta_{1}\eta_{2}+\eta_{2}^{2}\right)\right]=\alpha^{2}\sigma_{1}^{2}+\sigma_{2}^{2} $$
Now compute the correlation coefficient:
$$ \rho = \frac{E\left[x_{1}x_{2}\right]}{\sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]}}=\frac{\alpha \sigma_{1}^{2}}{\sqrt{\sigma_{1}^{2}\left(\alpha^{2}\sigma_{1}^{2}+\sigma_{2}^{2}\right)}}=\frac{\alpha \sigma_{1}}{\sqrt{\left(\alpha^{2}\sigma_{1}^{2}+\sigma_{2}^{2}\right)}} $$
The covariance matrix is then
$$ \Sigma= \begin{bmatrix} E\left[x_{1}x_{1}\right] & E\left[x_{1}x_{2}\right] \\ E\left[x_{1}x_{2}\right] & E\left[x_{2}x_{2}\right] \\ \end{bmatrix} $$
Making the substitution for $E\left[x_{1}x_{2}\right]$ one obtains:
$$ \Sigma= \begin{bmatrix} E\left[x_{1}x_{1}\right] & \rho \sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]} \\ \rho \sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]} & E\left[x_{2}x_{2}\right] \\ \end{bmatrix} $$
Writing the provided decomposition for $L$ in terms of the covariance entries:
$$ L= \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ \rho \sqrt{E\left[x_{2}x_{2}\right]} & \sqrt{E\left[x_{2}x_{2}\right]}\sqrt{1-\rho^{2}} \\ \end{bmatrix} $$
Comparing to the original model for the random variables, we have:
$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \alpha & 1 \\ \end{bmatrix} \begin{bmatrix} \eta_{1} \\ \eta_{2} \\ \end{bmatrix} $$
Using $ E\left[x_{1}x_{2}\right]=E \left[\alpha \eta_{1}^{2} + \eta_{1}\eta_{2}\right]=\alpha \sigma_{1}^{2} $ together with the entries derived above, we have that
$$ \alpha E\left[x_{1}x_{1}\right] = \rho \sqrt{E\left[x_{1}x_{1}\right]E\left[x_{2}x_{2}\right]} $$
which leads to:
$$ \alpha = \rho \sqrt{\frac{E\left[x_{2}x_{2}\right]}{E\left[x_{1}x_{1}\right]}} $$
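This inversion is easy to verify with concrete (hypothetical) numbers:

```python
import math

# Hypothetical parameter values.
alpha, sigma1, sigma2 = 0.8, 2.0, 0.5

# Covariance entries from the derivation above.
v1 = sigma1**2                            # E[x1 x1]
v2 = alpha**2 * sigma1**2 + sigma2**2     # E[x2 x2]
rho = alpha * sigma1 / math.sqrt(v2)

# Recover alpha from rho and the covariance entries.
alpha_recovered = rho * math.sqrt(v2 / v1)
assert math.isclose(alpha_recovered, alpha)
```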
Now rewrite the model in terms of unit-variance noise terms $\hat{\eta}_{1}$ and $\hat{\eta}_{2}$, scaled by the standard deviations $\sigma_{1}$ and $\sigma_{2}$, via the following matrix expression:
$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \alpha & 1 \\ \end{bmatrix} \begin{bmatrix} \sigma_{1} & 0 \\ 0 & \sigma_{2} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$
Substituting the results from the earlier derivations (note that $\sigma_{2}^{2}=E\left[x_{2}x_{2}\right]-\rho^{2}E\left[x_{2}x_{2}\right]$):
$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \rho \sqrt{\frac{E\left[x_{2}x_{2}\right]}{E\left[x_{1}x_{1}\right]}} & 1 \\ \end{bmatrix} \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ 0 & \sqrt{E\left[x_{2}x_{2}\right]-\rho^{2}E\left[x_{2}x_{2}\right]} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$
$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \rho \sqrt{\frac{E\left[x_{2}x_{2}\right]}{E\left[x_{1}x_{1}\right]}} & 1 \\ \end{bmatrix} \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ 0 & \sqrt{E\left[x_{2}x_{2}\right]}\sqrt{1-\rho^{2}} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$
$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = \begin{bmatrix} \sqrt{E\left[x_{1}x_{1}\right]} & 0 \\ \rho \sqrt{E\left[x_{2}x_{2}\right]} & \sqrt{E\left[x_{2}x_{2}\right]}\sqrt{1-\rho^{2}} \\ \end{bmatrix} \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$
which is
$$ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} = L \begin{bmatrix} \hat{\eta_{1}} \\ \hat{\eta_{2}} \\ \end{bmatrix} $$
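As a final check (again with hypothetical numbers), the $L$ built from the covariance entries satisfies $LL^{T}=\Sigma$ entry by entry:

```python
import math

# Hypothetical model parameters.
alpha, sigma1, sigma2 = 0.8, 2.0, 0.5

# Covariance entries and correlation from the derivation.
v1 = sigma1**2                            # E[x1 x1]
v2 = alpha**2 * sigma1**2 + sigma2**2     # E[x2 x2]
rho = alpha * sigma1 / math.sqrt(v2)

# Entries of L as given above.
l11 = math.sqrt(v1)
l21 = rho * math.sqrt(v2)
l22 = math.sqrt(v2) * math.sqrt(1 - rho**2)

# L L^T must reproduce Sigma.
assert math.isclose(l11 * l11, v1)                        # (1,1) entry
assert math.isclose(l21 * l11, rho * math.sqrt(v1 * v2))  # (1,2) entry
assert math.isclose(l21**2 + l22**2, v2)                  # (2,2) entry
```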
Please note that $\sqrt{E\left[x_{2}x_{2}\right]}\ne \sigma_{2}$. In other words, the variance of $\eta_{2}$ (the second noise term) is not the same as the $\left(2,2\right)$ entry of the covariance matrix $\Sigma$; rather, $\sigma_{2}^{2}=\left(1-\rho^{2}\right)E\left[x_{2}x_{2}\right]$.
I hope this helps.