This is a two part question:
Part 1:
In a (totally fascinating) paper studying the distance between two domains of a protein in an MD simulation, the time-averaged mean square displacement of the inter-domain distance $R(t)$ is given by:
$$ \overline{\delta^{2}(\Delta;t)} = \frac{1}{t-\Delta} \int_{0}^{t-\Delta} [R(t'+ \Delta)-R(t')]^{2} \ dt'$$
where $\Delta$ is the "lag time" and $t$ is the total observation time of the simulation.
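For concreteness, here is a minimal numerical sketch of the discrete analogue of this time average (my own code, not from the paper; the random-walk trajectory is just a stand-in for $R(t)$):

```python
import numpy as np

def ta_msd(R, lag):
    """Discrete analogue of the time-averaged MSD:
    mean of [R(t' + lag) - R(t')]^2 over all valid start times t'."""
    diff = R[lag:] - R[:-lag]
    return np.mean(diff ** 2)

# Toy trajectory standing in for R(t): a 1D random walk.
rng = np.random.default_rng(0)
R = np.cumsum(rng.normal(size=10_000))

lags = [1, 10, 100]
msd = [ta_msd(R, lag) for lag in lags]
```

For each lag, the average runs over every start time $t'$ that leaves room for a window of length `lag` inside the trajectory, mirroring the $\int_0^{t-\Delta}$ limits.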
I'm familiar with the normal average value of a function $f(x)$ between $a$ and $b$:
$$ \overline{f(x)}_{(a,b)} = \frac{1}{b-a}\int_{a}^{b} f(x) \ dx, $$
and I understand that $[R(t'+\Delta)-R(t')]^2$ is the (squared) change in the distance between the two domains after a certain "lag time" $\Delta$, and that $\overline{\delta^{2}(\Delta;t)}$ is the average of this quantity over the interval $(0, t-\Delta)$.
What I don't understand is this: what information does this convey? What's the purpose of changing the time over which the square displacement is averaged?
For example, in a simulation with total observation time $t=100$ ps, $\overline{\delta^{2}(\Delta;t)} \propto \Delta^{1.5}$, with $\Delta \in (10^{-1}\,\text{ps}, 10^{1}\,\text{ps})$.
At the lower end, the integral averages the squared change in the inter-domain distance after a lag time of $10^{-1}$ ps over ~$100$ ps of trajectory. At the upper end, it averages the squared change after a lag time of $10$ ps over $90$ ps. What information does the relationship between $\Delta$ and $\overline{\delta^2(\Delta;t)}$ convey? The fact that the time-averaged mean square displacement goes up as the lag time goes up means... what exactly?
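For context, here is a sketch (my own toy code, not the paper's analysis) that extracts the scaling exponent numerically. An ordinary random walk gives an exponent near 1, which provides a baseline to compare an observed value like 1.5 against:

```python
import numpy as np

# Fit alpha in TA-MSD ~ Delta^alpha via a log-log regression.
rng = np.random.default_rng(1)
R = np.cumsum(rng.normal(size=50_000))  # ordinary 1D random walk

lags = np.array([1, 2, 5, 10, 20, 50, 100])
msd = np.array([np.mean((R[lag:] - R[:-lag]) ** 2) for lag in lags])
alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
# For a plain random walk, alpha comes out close to 1.
```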
Part 2:
The same paper defines the normalized auto-correlation function of the distance between the domains as: $ C(\Delta;t) = C'(\Delta;t)/C'(0;t)$, where:
$$ C'(\Delta;t)=\frac{1}{t-\Delta}\int_{0}^{t-\Delta}\delta R(t')\delta R(t' + \Delta) \ dt' $$
where $\delta R(t)=R(t)-\langle R \rangle$; in other words, how far the distance between the domains is from the average inter-domain distance.
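A minimal sketch of the discrete analogue of $C(\Delta;t)$ (my own naming; the AR(1) signal below is just a convenient toy whose correlations are known to decay geometrically):

```python
import numpy as np

def norm_autocorr(R, lag):
    """Discrete analogue of C(lag) = C'(lag) / C'(0)."""
    dR = R - R.mean()            # delta R(t) = R(t) - <R>
    if lag == 0:
        return 1.0
    return np.mean(dR[:-lag] * dR[lag:]) / np.mean(dR ** 2)

# AR(1) toy signal: its autocorrelation decays roughly as phi**lag.
rng = np.random.default_rng(2)
phi = 0.9
x = np.zeros(20_000)
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.normal()

c1, c10 = norm_autocorr(x, 1), norm_autocorr(x, 10)
```

The normalization by $C'(0;t)$ (the variance of $R$) pins $C(0;t) = 1$, so the function reads as "fraction of full self-similarity remaining after a lag $\Delta$".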
Here, I have to admit more ignorance than in Part 1. I understand that the auto-correlation function is supposed to be a measure of the similarity of a function to itself at different times, but I don't understand how this function achieves that measure. I wish I had a more pointed question to ask, but I'm hoping someone can help anyway. I understand if it's too broad.
If the time-averaged MSD increases with the lag time, then, on average, $R(t')$ and $R(t'+\Delta)$ move farther apart as the lag time grows; the exponent tells you how quickly. For Part 2: I'm not able to explain it completely, but you could start by plugging simple functions in for $R$ and seeing what happens. The expression also looks a lot like the convolution of $R$ with itself, so one option would be to work out the relation to convolution and build intuition from that broader concept.
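Following that suggestion, here is a quick sketch (my own toy example, not from the paper) plugging simple signals into the normalized autocorrelation: a sine wave is maximally similar to itself one full period later and maximally *dis*similar half a period later, while white noise loses essentially all self-similarity at any nonzero lag:

```python
import numpy as np

def norm_autocorr(R, lag):
    """Discrete analogue of C(lag) = C'(lag) / C'(0)."""
    dR = R - R.mean()
    if lag == 0:
        return 1.0
    return np.mean(dR[:-lag] * dR[lag:]) / np.mean(dR ** 2)

t = np.linspace(0, 20 * np.pi, 4000)   # 10 periods, ~400 samples per period
sine = np.sin(t)
noise = np.random.default_rng(3).normal(size=4000)

period, half_period = 400, 200
# sine: correlation near +1 at a full period, near -1 at a half period;
# noise: correlation near 0 at any nonzero lag.
```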