Normalisation of random draws to $[0,1]^2$?


I have an $n\times 1$ vector $Y$ of numbers generated by drawing at random from the normal distribution $N(\mu,\sigma^2)$ [e.g. by using the function normrnd in Matlab].

In other words, for $n$ large, the histogram approximates well the density function of a random variable $X\sim N(\mu,\sigma^2)$.

Is there a way to "rescale" each number contained in $Y$ to the interval $[0,1]$ in a way that "preserves" the original properties of $Y$?

For example, let $y_{max}$ and $y_{min}$ be, respectively, the maximum and minimum numbers contained in $Y$, and let $y_i$ be a number contained in $Y$. Do you think that $\frac{y_i-y_{min}}{y_{max}-y_{min}}$ could do the job?
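To make the proposal concrete, here is a short Python/NumPy sketch of this min-max rescaling (the Matlab version with `normrnd` is analogous; the values of $\mu$, $\sigma$, and $n$ below are arbitrary example choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 1.5, 10_000   # arbitrary example parameters

# Draw n samples from N(mu, sigma^2), as normrnd(mu, sigma, n, 1) would in Matlab
y = rng.normal(mu, sigma, size=n)

# Min-max rescaling to [0, 1]
y_scaled = (y - y.min()) / (y.max() - y.min())

# The map is affine, so the *shape* of the histogram is preserved exactly,
# but the endpoints y.min() and y.max() are themselves random, and the
# rescaled sample is bell-shaped, not uniform, on [0, 1].
```

Note that an affine map preserves the histogram's shape up to location and scale, which is one way to make precise what "preserving the original properties" could mean here.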


Following the comments I received below, I am reformulating my question in a clearer and more general way:

I have an $n\times 2$ matrix $Y$ of numbers generated by drawing at random from the bivariate normal distribution $$ N\Big(\begin{pmatrix} \mu_1\\ \mu_2 \end{pmatrix}, \begin{pmatrix} \sigma_1^2 & \rho \sigma_{1} \sigma_2\\ \rho \sigma_{1} \sigma_2 & \sigma^2_2\\ \end{pmatrix} \Big) $$ [e.g. by using the function mvnrnd in Matlab, since normrnd only handles the univariate case].
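For reference, generating such an $n\times 2$ matrix of draws looks like this in Python/NumPy (the means, standard deviations, and correlation below are arbitrary example values; in Matlab the equivalent call would be `mvnrnd(mean, cov, n)`):

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = 1.0, -0.5          # arbitrary example means
s1, s2, rho = 1.0, 2.0, 0.6   # arbitrary std devs and correlation

mean = [mu1, mu2]
cov = [[s1**2,     rho*s1*s2],
       [rho*s1*s2, s2**2]]

# n x 2 matrix of correlated bivariate normal draws
n = 10_000
Y = rng.multivariate_normal(mean, cov, size=n)
```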

In other words, for $n$ large, the histogram approximates well the density function of a random vector $X\sim N\Big(\begin{pmatrix} \mu_1\\ \mu_2 \end{pmatrix}, \begin{pmatrix} \sigma_1^2 & \rho \sigma_{1} \sigma_2\\ \rho \sigma_{1} \sigma_2 & \sigma^2_2\\ \end{pmatrix} \Big)$.

Is there a way to "rescale" the numbers in $Y$ to the square $[0,1]^2$ in a way that "preserves" the original properties of $Y$? My ultimate objective is to compare the draws from the bivariate normal with draws from a uniform distribution on $[0,1]^2$ via a Kolmogorov-Smirnov test.

For example, let $Y_{j,max}$ and $Y_{j,min}$ be, respectively, the maximum and minimum numbers contained in the $j$th column of $Y$, and let $Y_{i,j}$ be the $ij$th element of $Y$. Do you think that $\frac{Y_{i,j}-Y_{j,min}}{Y_{j,max}-Y_{j,min}}$ could do the job for $j=1,2$?
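Here is a Python sketch of this column-wise rescaling, followed by a one-dimensional KS test of each margin against Uniform$(0,1)$ (SciPy has no built-in bivariate KS test, so this only checks the marginals, not the joint distribution; the parameters are arbitrary example values):

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
mean = [1.0, -0.5]
cov = [[1.0, 1.2],
       [1.2, 4.0]]   # sigma1 = 1, sigma2 = 2, rho = 0.6
Y = rng.multivariate_normal(mean, cov, size=10_000)

# Column-wise min-max rescaling to the square [0, 1]^2
Y_scaled = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))

# KS test of each margin against Uniform(0, 1). The rescaled margins
# remain bell-shaped rather than flat, so for large n the uniform
# hypothesis is rejected.
pvals = []
for j in range(2):
    stat, p = kstest(Y_scaled[:, j], 'uniform')
    pvals.append(p)
```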

From the comments below it seems that it doesn't, due to the sampling variation in $Y_{j,max}$ and $Y_{j,min}$. Any other option suitable for the bivariate case would be greatly appreciated. One suggestion below was to apply a sort of 68–95–99.7 rule, but how can I generalise it to a bivariate normal with correlated random variables?