Understanding Quasi-Stationary Processes


Assume an ergodic process.

A random process $s[n]$ (square-bracket notation for a sequence, common in signal processing) is said to be quasi-stationary provided that it satisfies the following two conditions:

$1$. It has a bounded, time-varying mean, i.e., $\mathbb{E}[s[n]]=m_{s}[n]$ with $|m_{s}[n]|\le C$ for some constant $C$, $\forall n$

$2$. It has a bounded autocorrelation, i.e., $\mathbb{E}[s[n_{1}]s[n_{2}]]=R_{s}[n_{1},n_{2}]$ with $|R_{s}[n_{1},n_{2}]|\le C$, $\forall n_{1},n_{2}$, such that the limit$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}[s[n]s[n-\tau]]=R_{s}[\tau]\tag{1}$$ exists, where $R_{s}[n_1,n_2]$ is the autocorrelation of $s[n]$ at the two time instants $n_1$ and $n_2$, and $R_{s}[\tau]$ is the autocorrelation of $s[n]$ at lag $\tau$.
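For concreteness, here is a minimal numerical sketch of what $(1)$ computes (my own illustration, not from the definition itself): the textbook quasi-stationary example $s[n]=\cos(\omega_0 n)+e[n]$ with white noise $e[n]$. Its ensemble autocorrelation $\mathbb{E}[s[n]s[n-\tau]]$ depends on $n$, yet its time average over $n$ converges to a function of $\tau$ alone; the parameter values below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quasi-stationary example: deterministic cosine plus white noise,
#   s[n] = cos(w0*n) + e[n],  e[n] ~ N(0, sigma^2) i.i.d.
# E[s[n]s[n-tau]] = cos(w0*n)*cos(w0*(n-tau)) + sigma^2*delta[tau]
# depends on n, but its time average converges to
#   R_s[tau] = 0.5*cos(w0*tau) + sigma^2*delta[tau].
w0, sigma = 0.3, 0.5
N, trials = 4000, 2000
tau = 5

n = np.arange(N)
# ensemble of realizations, shape (trials, N)
s = np.cos(w0 * n) + sigma * rng.standard_normal((trials, N))

# Monte Carlo estimate of the ensemble average E[s[n] s[n-tau]], one per n
ens_avg = (s[:, tau:] * s[:, :-tau]).mean(axis=0)

# time average of those ensemble averages -> approximates R_s[tau] in (1)
R_hat = ens_avg.mean()
R_theory = 0.5 * np.cos(w0 * tau)  # tau != 0, so the sigma^2 term drops out
print(R_hat, R_theory)
```

Note that `ens_avg` genuinely varies with $n$ (the process is not stationary), while its average over $n$ settles down to the limit in $(1)$.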

Question: I am having a hard time understanding $(1)$.

What it appears to do is take every value of $n$, compute the ensemble average of $s[n]$ times a delayed version of itself at a fixed lag $\tau$, and then average those results over $n$. So, if I understand correctly, it's like averaging the ensemble averages. How does all of this equal the autocorrelation of $s[n]$ at the delay $\tau$?

As far as I know, the autocorrelation measures the resemblance between a signal (sequence) $x[n]$ and a delayed version of itself.
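That notion of "resemblance with a delayed copy" can be sketched as a time average over a single sequence (a hedged illustration; the helper name `sample_autocorr` and the test signal are my own choices, not standard API):

```python
import numpy as np

# A single deterministic test sequence
x = np.cos(0.3 * np.arange(1000))

def sample_autocorr(x, tau):
    """Time-average estimate (1/N) * sum_n x[n] * x[n - tau]."""
    N = len(x)
    if tau == 0:
        return np.mean(x * x)
    return np.sum(x[tau:] * x[:-tau]) / N

# Resemblance is maximal at zero lag and falls off as the copy is delayed
print(sample_autocorr(x, 0))
print(sample_autocorr(x, 5))
```

For an ergodic process this time average and the ensemble average $\mathbb{E}[s[n]s[n-\tau]]$ agree in the limit, which is what ties the two viewpoints together.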