I am studying random processes at the moment, and the expectation operator E comes up a lot. I have a firm understanding of what it does in probability, but in the context of autocorrelation it just doesn't seem like the same thing.
Say I want to find the expected value of a process X(t): it makes sense that I can find it by summing all the values of X(t) over some time interval and then dividing by the length of the time interval.
And similarly, if I want to find the expected value of X(t)Y(t), I would multiply them, integrate over some time period, and divide by the length of that period.
Equivalently, in probability you can multiply x by its pdf and integrate to get the same result.
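In symbols, just to make precise which two averages I mean (the finite interval $[0, T]$ is my own notation for "some time interval"):

$$\langle x \rangle = \frac{1}{T}\int_0^T x(t)\,dt \qquad \text{vs.} \qquad E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx$$

Both of these divide (or weight) by something: the first by the record length $T$, the second by the pdf $f_X(x)$, which integrates to 1.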
However, the expected value used when calculating the autocorrelation function seems to be just the integral of the two signals multiplied together, without the final division by time and without multiplying by a pdf. So to my mind it isn't calculating an expected value at all, it's just correlating them. I understand that's useful, but why use the expectation operator to define this when it's not really an expected value? Why not just leave it as the integral of the product of the two signals?
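To make my confusion concrete, here's a little numerical experiment I tried (my own toy example, assuming NumPy; the random-phase cosine and all the names are mine, not from any course notes). It compares the ensemble average E[X(t)X(t+tau)] over many realizations with the time average of one realization, for a process where I'd expect them to agree:

```python
import numpy as np

# Toy process: X(t) = cos(2*pi*f*t + phi) with phi ~ Uniform(0, 2*pi).
# This is wide-sense stationary, and both the ensemble average and the
# time average of X(t) X(t+tau) should come out near 0.5*cos(2*pi*f*tau).
rng = np.random.default_rng(0)
f = 1.0                       # frequency in Hz
dt = 0.001                    # sample spacing
t = np.arange(0.0, 50.0, dt)  # long record for the time average
tau = 0.3                     # lag at which to compare the two averages
shift = int(round(tau / dt))

# Ensemble average: E[X(t0) X(t0+tau)] estimated over many realizations
# (many random phases), evaluated at one fixed time t0.
phis = rng.uniform(0.0, 2.0 * np.pi, size=20000)
t0 = 1.0
ensemble = np.mean(np.cos(2*np.pi*f*t0 + phis)
                   * np.cos(2*np.pi*f*(t0 + tau) + phis))

# Time average: (1/T) * integral of x(t) x(t+tau) dt over ONE realization.
# Note np.mean divides by the number of samples -- that division by the
# record length is exactly what makes it an average rather than a raw
# correlation integral.
x = np.cos(2*np.pi*f*t + rng.uniform(0.0, 2.0*np.pi))
time_avg = np.mean(x[:-shift] * x[shift:])

theory = 0.5 * np.cos(2*np.pi*f*tau)
print(ensemble, time_avg, theory)
```

When I run this, the two estimates land close to each other (and to the theoretical value), which makes me suspect the "expectation" in the autocorrelation definition is meant as the ensemble average, with the time average agreeing only for processes like this one.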
Unless I'm missing something, or maybe the expectation operator is just defined differently in this area of maths.
Thanks in advance, Richard