Averaging cosine with a randomly varying argument


Suppose I have a function $y(t)=\cos (\delta(t))$. Here $y(t)$ and $\delta (t)$ are single-valued functions, but not necessarily continuous or differentiable. The function $\delta(t)$ is a "random function" (a function in the sense that there is exactly one output for every input) which produces arbitrary values for different values of $t$. It is a single-valued "function", but there is no computable relation between its argument $(t)$ and its value $(\delta(t))$.
If I want to average $y(t)$ over a large time interval $T$, how do I do it?
$$y_{avg}=\frac 1T \int_0^T \cos(\delta(t))\,dt$$ Intuitively, since $\delta$ varies randomly and rapidly (compared to the interval over which it is averaged), the average should be zero: there are as many positive values of $y(t)$ as negative, and in the average it all cancels. But is there any way to show this mathematically? Or, under what constraints on $\delta(t)$ and $T$ will the average turn out to be zero (e.g. $T\rightarrow \infty$)?
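As a quick numerical sanity check of the intuition, here is a sketch that models $\delta(t)$ as an independent uniform draw on $(-\pi,\pi)$ at each sample time (this uniform model, the time step `dt`, and the fixed seed are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def average_cosine(T, dt=1e-3):
    """Approximate (1/T) * integral_0^T cos(delta(t)) dt, modeling delta(t)
    as an independent uniform draw on (-pi, pi) at each sample time."""
    n = int(T / dt)
    delta = rng.uniform(-np.pi, np.pi, size=n)
    return np.cos(delta).mean()

# The average shrinks toward zero as T grows (error ~ 1/sqrt(T/dt)).
for T in (1, 10, 100, 1000):
    print(T, average_cosine(T))
```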

$\delta (t)$ takes any value randomly within those bounds. Given infinitely many selections, the outputs of the function will indeed be uniformly distributed. One way to allow infinitely many outputs (so that they become uniformly distributed) is the limit $T\rightarrow \infty$. My question is how to handle the cases where the outputs are and are not uniformly distributed, and what other conditions allow this random-output function to be uniformly spread.

The physics tag is because this is what happens when two incoherent light sources with a random phase relation (which usually changes over $10^{-8}$ seconds) interfere on a screen. The cosine term is the "interference term" that appears when two waves with the same frequency and wavelength are superimposed.

On BEST ANSWER

Think of chopping up the $t$ axis in preparation for approximating the integral by a sum. To do this, let's consider a finite sequence $\{t_n\}_{n=1}^N$ with $$ t_1=0,\\ t_n=t_{n-1}+\Delta t,\\ \Delta t = \frac{T}{N-1}, $$ and another sequence of midpoints between the partition points, $t_n^*=t_n+\Delta t/2$. Then let's form the Riemann sum approximation to the integral:

$$ y_{avg}\approx\frac{1}{T}\sum_{n=1}^N \cos(\delta(t_n^*))\Delta t $$
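The Riemann sum can be sketched directly in code. The uniform distribution on $(-\pi,\pi)$ for the phases is an assumption made here for illustration (the answer only pins down the distribution later):

```python
import numpy as np

rng = np.random.default_rng(1)

T, N = 10.0, 100_001
t = np.linspace(0.0, T, N)     # partition: t_1 = 0, ..., t_N = T, spacing T/(N-1)
dt = T / (N - 1)
t_star = t[:-1] + dt / 2       # midpoints t_n^* between the partition points

# delta(t_n^*): one arbitrary phase per midpoint, modeled as uniform draws
delta = rng.uniform(-np.pi, np.pi, size=t_star.size)
y_avg = np.sum(np.cos(delta)) * dt / T
print(y_avg)                   # close to zero for large N
```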

Caveat (Edit) I'll admit that this is really dicey; the integrand has no smoothness or continuity, so the Riemann sum has no a priori reason to converge to the integral. However, let's assume $\delta(t)$ is random but has some correlation time that smooths things out. I think the argument below still goes through in that case.

By your construction, $\delta(t_n^*)$ is a sequence of random numbers. The way you defined $\delta(t)$, the sequence $\delta(t_n^*)$ can be represented as a collection of independent, identically distributed, uniform random variables $X_n=\delta(t_n^*)\sim U(\alpha_1,\alpha_2)$. We need the distribution of $$Y_n=\cos(X_n).$$ We won't worry about the exact details of this distribution, but we denote its mean and variance by $$ \mu_Y=\mathbb{E}[Y_n]\\ \sigma_Y^2 = \mathbb{E}[Y_n^2]-\mathbb{E}[Y_n]^2, $$ and note that its distribution has support contained in $[-1,1]$.
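For the concrete case $X_n\sim U(-\pi,\pi)$ (an assumption at this point; the answer adopts it later), the moments are available in closed form: $\mathbb{E}[\cos X]=0$ and $\mathbb{E}[\cos^2 X]=1/2$, so $\mu_Y=0$ and $\sigma_Y^2=1/2$. A quick Monte-Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)

# Moments of Y = cos(X) for X ~ U(-pi, pi): analytically mu_Y = 0
# and sigma_Y^2 = E[cos^2 X] - 0 = 1/2.
X = rng.uniform(-np.pi, np.pi, size=1_000_000)
Y = np.cos(X)
mu_Y = Y.mean()
var_Y = Y.var()
print(mu_Y, var_Y)
```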

So we have

$$ y_{avg}\approx\frac{1}{T}\sum_{n=1}^N Y_n\Delta t=\frac{N}{N-1}\sum_{n=1}^N \frac{Y_n}{N} $$

The above contains the mean of $N$ independent, identically distributed random variables, each with finite mean and variance, and we are going to take the limit as $N\rightarrow\infty$ to recover our original integral. This means the central limit theorem applies in the following way: $$ Z = \sum_{n=1}^N \frac{Y_n}{N} \overset{\text{C.L.T.}}{\Longrightarrow} Z\sim\mathcal{N}\left(\mu_Y,\frac{\sigma_Y^2}{N}\right) $$
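The $\sigma_Y^2/N$ scaling can be seen numerically. A sketch, again taking $X_n\sim U(-\pi,\pi)$ for concreteness (so $\mu_Y=0$, $\sigma_Y^2=1/2$); the number of repetitions and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_Z(N, reps=2000):
    """Draw `reps` realizations of Z = (1/N) * sum_n cos(X_n),
    with X_n ~ U(-pi, pi), so sigma_Y^2 = 1/2."""
    X = rng.uniform(-np.pi, np.pi, size=(reps, N))
    return np.cos(X).mean(axis=1)

# Empirical variance of Z tracks sigma_Y^2 / N as N grows.
for N in (10, 100, 1000):
    Z = sample_Z(N)
    print(N, Z.var(), 0.5 / N)
```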

This leaves us with $$ y_{avg}\approx \frac{N}{N-1}Z. $$ Now take the limit: $$ y_{avg} = \lim_{N\rightarrow\infty} \frac{N}{N-1}Z. $$ Clearly, the variance of $Z$ degenerates to zero as $N\rightarrow\infty$, and $\frac{N}{N-1}\rightarrow 1$. I'll admit it's a bit handwavy to say so, and formally you'd have to appeal to the various convergence results of probability theory, but in the limit $Z$ becomes a distribution supported on a single number, its mean $\mu_Y$. That is, $Z=\mu_Y$ in the limit as its variance vanishes, and so

$$ y_{avg} = \mu_Y = \mathbb{E}[\cos(X_n)], $$

which of course depends on the particulars of the uniform distribution from which you drew the angles $X_n$, i.e. how you defined $\delta(t)$. If you make the quite standard choice that the $X_n\sim U(-\pi,\pi)$, then it can be shown that $Y_n$ has a distribution with probability density function given by $$f_Y(y)=\frac{1}{\pi\sqrt{1-y^2}},\;\;-1\leq y\leq 1$$ with $$ \mu_Y = \int_{-1}^1 \frac{y}{\pi\sqrt{1-y^2}} dy = 0 $$

So $$y_{avg} = \mu_Y = 0$$ in this case. Note that the result does not have to be zero; it depends on the distribution of $\delta$. For example, suppose that $\delta(t)$ takes values in $(0,\pi/2]$. Then $Y_n=\cos(\delta(t_n^*))$ lies in $[0,1)$, so $\mu_Y$ and $y_{avg}$ are strictly positive (for uniform $\delta$ on $(0,\pi/2]$, $\mu_Y = 2/\pi$), meaning you can contrive scenarios with $y_{avg}\neq 0$ by choosing $\delta(t)$ appropriately.
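The nonzero case is easy to confirm numerically. A sketch with phases drawn uniformly from $(0,\pi/2]$, where $\mathbb{E}[\cos X] = \frac{2}{\pi}\int_0^{\pi/2}\cos x\,dx \cdot \frac{\pi}{2} \big/ \frac{\pi}{2} = 2/\pi \approx 0.6366$ (NumPy's `uniform` samples a half-open interval, which is immaterial here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Phases confined to (0, pi/2]: cos is nonnegative there, so the
# average cannot cancel. For uniform delta, E[cos X] = 2/pi.
delta = rng.uniform(0.0, np.pi / 2, size=1_000_000)
y_avg = np.cos(delta).mean()
print(y_avg)
```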