Variance of sine and cosine of a random variable


Suppose $X$ is a random variable drawn from a normal distribution with mean $E$ and variance $V$. How could I calculate variance of $\sin(X)$ and $\cos(X)$?

(I thought the question was simple and tried to do a search, but did not find any good answer.)

What if there is no assumption about the distribution of $X$, and only sample mean and variance are provided?


Best answer:

What follows is for $\mu=0$ (and variance renamed $\sigma^2$). Then $\mathbb{E}[\sin X]=0$, and you have $$ \operatorname{Var} \sin X = \mathbb{E}[\sin^2 X] = \frac{1}{2}\left(1-\mathbb{E}[\cos 2X]\right) $$ and $$ \mathbb{E}[\cos 2X] = \sum_{k=0}^\infty (-1)^k\frac{2^{2k}}{(2k)!} \mathbb{E}[X^{2k}] = \sum_{k=0}^\infty (-1)^k\frac{2^{2k}}{(2k)!} \sigma^{2k} (2k-1)!! = \sum_{k=0}^\infty (-1)^k \frac{2^{k}\sigma^{2k}}{k!} = e^{-2\sigma^{2}} $$ and therefore $$ \operatorname{Var} \sin X = \boxed{\frac{1-e^{-2\sigma^2}}{2}} $$ You can deal with the variance of $\cos X$ in a similar fashion (but you now have to subtract a non-zero $\mathbb{E}[\cos X]^2$), especially recalling that $\mathbb{E}[\cos^2 X] = 1- \mathbb{E}[\sin^2 X]$.
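
The boxed formula is easy to sanity-check by simulation; a short NumPy sketch ($\sigma = 0.7$ is an arbitrary test value):

```python
import numpy as np

# Monte Carlo check of Var(sin X) = (1 - e^{-2 sigma^2}) / 2 for X ~ N(0, sigma^2).
# sigma = 0.7 is an arbitrary test value.
rng = np.random.default_rng(0)
sigma = 0.7
x = rng.normal(0.0, sigma, size=2_000_000)

empirical = np.var(np.sin(x))
exact = (1 - np.exp(-2 * sigma**2)) / 2
print(empirical, exact)  # the two values agree to about three decimal places
```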


Now, for non-zero mean $\mu$, you have $$ \sin(X-\mu) = \sin X\cos \mu - \cos X\sin\mu $$ (and similarly for $\cos(X-\mu)$). Since $X-\mu$ is a zero-mean Gaussian with variance $\sigma^2$, we have already computed the mean and variance of $\sin(X-\mu)$ and $\cos(X-\mu)$. You can use this with the above trigonometric identities to find those of $\cos X$ and $\sin X$. (It's a bit cumbersome, but not too hard.)
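
Carrying the identities through (the step the answer calls cumbersome) gives, for $X\sim\mathcal N(\mu,\sigma^2)$, $\mathbb{E}[\sin X]=e^{-\sigma^2/2}\sin\mu$ and $\operatorname{Var}\sin X = \tfrac12\left(1-e^{-2\sigma^2}\cos 2\mu\right)-e^{-\sigma^2}\sin^2\mu$. These closed forms are my completion, not stated in the answer; a quick NumPy check:

```python
import numpy as np

# Closed forms for X ~ N(mu, sigma^2) with nonzero mean (my completion of the
# "cumbersome but not too hard" step; they follow from the identities above):
#   E[sin X]   = exp(-sigma^2 / 2) * sin(mu)
#   Var(sin X) = (1 - exp(-2 sigma^2) cos(2 mu)) / 2 - exp(-sigma^2) sin(mu)^2
mu, sigma = 0.75, 0.25          # arbitrary test values
mean_exact = np.exp(-sigma**2 / 2) * np.sin(mu)
var_exact = (1 - np.exp(-2 * sigma**2) * np.cos(2 * mu)) / 2 \
    - np.exp(-sigma**2) * np.sin(mu) ** 2

# Monte Carlo comparison
rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=2_000_000)
print(np.mean(np.sin(x)), mean_exact)   # both close to 0.6607
print(np.var(np.sin(x)), var_exact)     # both close to 0.0323
```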


Without knowing anything about the distribution of $X$, I don't think there's much you can do.

Answer:

Here is a general formulation using the law of the unconscious statistician that can be applied to other functions too. For specific calculations with $\sin$ and $\cos$ here though, I would say Clement C.'s answer is better!

The mean of $\color{blue}{h(X)}$ (for some function $h$) would be given by the integral $$\mathbb{E}[h(X)]=\int_{-\infty}^{\infty}\color{blue}{h(x)}f_X(x)\, dx,$$ where $f_X$ is the probability density function of $X$.

The second moment would be found similarly as $$\mathbb{E}\left[(h(X))^2\right] = \int_{-\infty}^{\infty}\color{blue}{(h(x))^2}f_X(x)\, dx.$$

Once you know the first two moments here, you can calculate the variance using $\mathrm{Var}(Z) = \mathbb{E}[Z^2] - (\mathbb{E}[Z])^2$.

Replace $h(x)$ with $\cos x$ for the corresponding expectations for $\cos X$, and similarly with $\sin x$.
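
As a concrete instance, both integrals can be evaluated numerically for a normal density with $h=\sin$; a NumPy sketch on a truncated grid ($\mu=0.75$, $\sigma=0.25$ are arbitrary test values):

```python
import numpy as np

# Evaluate E[h(X)] and E[h(X)^2] by the law of the unconscious statistician,
# with h = sin and a normal density, integrating on a truncated grid.
# mu and sigma are arbitrary test values.
mu, sigma = 0.75, 0.25
z = np.linspace(mu - 8 * sigma, mu + 8 * sigma, 100_001)
dz = z[1] - z[0]
pdf = np.exp(-((z - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

m1 = np.sum(np.sin(z) * pdf) * dz        # E[sin X]
m2 = np.sum(np.sin(z) ** 2 * pdf) * dz   # E[sin^2 X]
var = m2 - m1**2                         # Var(sin X)
print(m1, var)
```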

If the distribution of $X$ is not known, we cannot generally compute the exact mean and variance of $h(X)$. However, there are second-order Taylor ("delta method") approximations that could be used. Some useful ones for you may be that if $X$ has mean $\mu_X$ and variance $\sigma^2_X$, then $$\mathbb{E}[h(X)]\approx h(\mu_X) + \dfrac{h''(\mu_X)}{2}\sigma_X^2$$ and $$\mathrm{Var}(h(X))\approx (h'(\mu_X))^2\sigma^2_X + \dfrac{1}{2}(h''(\mu_X))^2 \sigma^4_X.$$
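
Trying the two approximations with $h=\sin$ (so $h'=\cos$ and $h''=-\sin$) against a Monte Carlo estimate, with arbitrary test values $\mu=0.75$, $\sigma=0.25$:

```python
import numpy as np

# Second-order Taylor approximations for h = sin, so h' = cos and h'' = -sin;
# mu and sigma are arbitrary test values.
mu, sigma = 0.75, 0.25
mean_approx = np.sin(mu) - np.sin(mu) / 2 * sigma**2
var_approx = np.cos(mu) ** 2 * sigma**2 + 0.5 * np.sin(mu) ** 2 * sigma**4

# Compare against a Monte Carlo estimate
rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, size=1_000_000)
print(mean_approx, np.mean(np.sin(x)))
print(var_approx, np.var(np.sin(x)))
```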

Answer:

$\cos^2(x) = \frac{\cos(2x)+1}2$, which averages out to $\frac12$. So as the variance of $X$ goes to infinity, the variance of $\cos(X)$ goes to $\frac12$, assuming the distribution of $X$ is "well-behaved". The lower bound is $0$ (the variance can be made arbitrarily small by choosing the variance of $X$ to be small enough), and as @angryavian says, the upper bound is $1$. Since $|\cos'(x)| = |\sin(x)| \leq 1$, and the inequality is strict for all but a measure-zero set, the variance of $\cos(X)$ is less than the variance of $X$.
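
The limiting behaviour is easy to see numerically (the zero mean and the grid of standard deviations are arbitrary choices):

```python
import numpy as np

# Empirical Var(cos X) for zero-mean normal X with growing standard deviation;
# the values climb toward 1/2. The sigmas are arbitrary test values.
rng = np.random.default_rng(3)
sigmas = [0.5, 1.0, 2.0, 4.0]
variances = [np.var(np.cos(rng.normal(0.0, s, size=1_000_000))) for s in sigmas]
print(variances)  # increasing, with the last value essentially 1/2
```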

Answer:

I know this is not the kind of answer you are looking for, but you can compute this empirically pretty easily via probabilistic programming. Here is an example with Python and pymc3, taking $E=0.75$ and $V=0.25^2$:

import pymc3 as pm

with pm.Model() as model:
    x = pm.Normal('x', mu=0.75, sd=0.25)
    # pm.math.sin operates on the model's symbolic tensor
    y = pm.Deterministic('y', pm.math.sin(x))
    trace = pm.sample(10000)

pm.plot_posterior(trace)
pm.summary(trace)

This snippet will produce a plot showing the distributions of $X$ and $Y=\sin(X)$.


And this table, which shows the mean, standard deviation, bounds of the 95% highest-posterior-density (credible) interval, and some diagnostics to make sure the results are reliable (they are):

       mean        sd  mc_error   hpd_2.5  hpd_97.5        n_eff      Rhat
x  0.747098  0.248358  0.003078  0.269450  1.240856  7756.202193  0.999998
y  0.658854  0.178794  0.002208  0.322414  0.980199  7731.781691  1.000049

Answer:

Here I will call the expected value $\mu$ and the variance $\sigma^2.$ \begin{align} \operatorname E(\sin X) = {} & \operatorname E\left( \frac{e^{iX} - e^{-iX}}{2i} \right) \\[8pt] = {} & \operatorname E \left( \frac{e^{i(\mu+\sigma Z)}-e^{-i(\mu+\sigma Z)}}{2i} \right) \\[8pt] = {} & \frac 1 {2i} \left( e^{i\mu} \operatorname E( e^{i\sigma Z}) - e^{-i\mu} \operatorname E(e^{-i\sigma Z}) \right). \end{align}

$$ \text{And } \operatorname E(e^{i\sigma Z}) = \int_{-\infty}^{+\infty} e^{i\sigma z} \frac 1 {\sqrt{2\pi}} e^{-z^2/2} \, dz. \tag 1 $$

The exponent is \begin{align} -\tfrac 1 2 z^2 + i\sigma z = {} & -\tfrac 1 2 \left( z^2 - 2i \sigma z \right) \\[8pt] = {} & -\tfrac 1 2 \left( z^2 - 2i\sigma z - \sigma^2 \right) - \tfrac 1 2 \sigma^2 \\[8pt] = {} & -\tfrac 1 2 \left( z-i\sigma \right)^2 - \tfrac 1 2 \sigma^2 \end{align} The integral on line $(1)$ above becomes $$ e^{-\sigma^2/2} \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} e^{-(z-i\sigma)^2/2} \, dz. $$ The integral is equal to $1,$ and so the question is: how do we know that?

The integral is equal to $\displaystyle \lim_{A\to\infty} \int_{-A}^{+A} \frac 1 {\sqrt{2\pi}} e^{-(z-i\sigma)^2/2} \, dz.$

So we consider $$ \left( \int_{+A}^{-A} +\int_{-A}^{-A-\sigma i} + \int_{-A-\sigma i}^{+A-\sigma i} + \int_{+A-\sigma i}^{+A} \right) \frac 1 {\sqrt{2\pi}} e^{-z^2/2} \, dz $$ and observe that

  • the first integral above approaches $-1$ as $A\to+\infty,$ and
  • the second and fourth integrals are easily seen to approach $0,$ and
  • the third integral is the one whose limit we seek, and
  • the sum of the four integrals is $0$ because we are integrating along a path that returns to its starting point and the function being integrated has no singularities in the interior of the surrounded region.

We conclude that $\operatorname E(e^{i\sigma Z}) = e^{-\sigma^2/2},$ and similarly $\operatorname E(e^{-i\sigma Z}) = e^{-\sigma^2/2}.$ Substituting these into the first display gives $\operatorname E(\sin X) = e^{-\sigma^2/2}\sin\mu.$
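
Since $\operatorname E(e^{\pm i\sigma Z}) = e^{-\sigma^2/2}$, the first display yields $\operatorname E(\sin X)=e^{-\sigma^2/2}\sin\mu$; a quick simulation check of that conclusion ($\mu$ and $\sigma$ are arbitrary test values):

```python
import numpy as np

# E(e^{i sigma Z}) = e^{-sigma^2/2} plugged into the first display gives
# E(sin X) = e^{-sigma^2/2} sin(mu) for X ~ N(mu, sigma^2); verify by simulation.
mu, sigma = 1.2, 0.8            # arbitrary test values
exact = np.exp(-sigma**2 / 2) * np.sin(mu)

rng = np.random.default_rng(4)
x = rng.normal(mu, sigma, size=2_000_000)
print(np.mean(np.sin(x)), exact)
```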