Understanding the order-n correlation function algorithm


I am referring to the algorithm from Chapter 4, section 4.4.2 of Understanding Molecular Simulations: From Algorithms to Applications by Frenkel and Smit. This algorithm is simply called the order-n correlation function algorithm.

From what I can gather, this algorithm hierarchically coarse-grains the sampled velocities of particles in a molecular dynamics simulation to evaluate correlation functions, such as the velocity autocorrelation function, at reduced cost.

Let us denote the time interval between successive measurements of the velocities of particles by $\Delta t$, and let $n$ be the block length. Let's define the $i$-th level block sum of the velocity of a given particle as: $$v^{(i)}(j\Delta t) = \sum_{l=(j-1)n+1}^{jn} v^{(i-1)}(l\Delta t)$$

It is easy to see that $v^{(0)}(j\Delta t) = v(j\Delta t)$, where $v(j\Delta t)$ is the velocity of a particle at time $t=j\Delta t$.

Doing some arithmetic, we can observe and prove by induction that $$v^{(i)}(j\Delta t) = \sum_{l=(j-1)n^i+1}^{jn^i} v(l\Delta t)$$
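To convince myself of this identity, I checked it numerically with a small script (the variable names and the 1-based indexing convention are my own, chosen to match the formulas above):

```python
import random

random.seed(0)
n = 4            # block length
levels = 3       # number of coarse-graining levels i
N = n ** levels  # enough raw samples for one top-level block
# v[l] plays the role of v(l*dt); index 0 is unused to keep l 1-based
v = [0.0] + [random.random() for _ in range(N)]

def block_sums(prev, n):
    """One level of the recursion: v^{(i)}(j) = sum_{l=(j-1)n+1}^{jn} v^{(i-1)}(l)."""
    m = (len(prev) - 1) // n
    return [0.0] + [sum(prev[(j - 1) * n + 1 : j * n + 1]) for j in range(1, m + 1)]

vi = v
for _ in range(levels):
    vi = block_sums(vi, n)

# Closed form: v^{(i)}(j) = sum_{l=(j-1)n^i+1}^{j n^i} v(l)
j = 1
direct = sum(v[(j - 1) * n**levels + 1 : j * n**levels + 1])
assert abs(vi[j] - direct) < 1e-12
```

The assertion passes, so the recursion and the closed form agree (with the lower limit $(j-1)n^i+1$, not $(j-1)n^i$).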

Considering the above statement, we can approximately say that $$v^{(i)}(j\Delta t) \approx \int_{(j-1)n^i}^{jn^i} v(l\Delta t) \,\mathrm{d}l = \frac{1}{\Delta t}\int_{(j-1)n^i\Delta t}^{jn^i\Delta t} v(t') \,\mathrm{d}t' = \frac{r(jn^i\Delta t) - r\left((j-1)n^i\Delta t\right)}{\Delta t}$$

So we can say that this block sum is related to the displacement of the particle over the corresponding block of time, divided by $\Delta t$. The coarse graining of the velocities is well represented in a figure in the book, which shows the block sums at successive levels being built from the finer-grained level below.

All this is well and good, but then they say that "it is straightforward to compute the velocity autocorrelation function with a resolution that decreases with increasing time."

My question is: how so? $$\mathrm{VACF} \equiv \psi (t_d) = \langle v(t+t_d) \cdot v(t) \rangle$$

How does the block-sum construction above help us calculate these time-delayed dot products of velocities faster? What would the pseudocode for something like this look like?
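My own rough guess at a sketch, in case it clarifies what I mean: each level keeps only the last $n$ block values, the newest value at a level is correlated against that level's short buffer, and every $n$ samples a block average is pushed one level up. The class name, the use of averages rather than raw sums, and the buffer layout are all my assumptions, not the book's code:

```python
from collections import deque

class OrderNCorrelator:
    """Sketch of my understanding of the order-n scheme (1D velocities).

    Level i holds the last n block-averaged values with spacing n**i * dt,
    so the lag resolution gets coarser by a factor n per level.
    """

    def __init__(self, n, levels):
        self.n = n
        self.buffers = [deque(maxlen=n) for _ in range(levels)]  # recent blocks
        self.accum = [0.0] * levels   # running sum waiting to be pushed up
        self.count = [0] * levels     # samples received at each level
        self.corr = [[0.0] * n for _ in range(levels)]  # correlation sums
        self.norm = [[0] * n for _ in range(levels)]    # sample counts per lag

    def add(self, v, level=0):
        buf = self.buffers[level]
        buf.appendleft(v)             # buf[k] is the value k blocks back
        self.count[level] += 1
        # correlate the newest value with everything still in this buffer
        for k, old in enumerate(buf):
            self.corr[level][k] += v * old
            self.norm[level][k] += 1
        # coarse-grain: every n samples, push the block average one level up
        self.accum[level] += v
        if self.count[level] % self.n == 0 and level + 1 < len(self.buffers):
            self.add(self.accum[level] / self.n, level + 1)
            self.accum[level] = 0.0

    def result(self):
        """List of (lag in units of dt, correlation) pairs, coarser at long lags."""
        out = []
        for i, (c, m) in enumerate(zip(self.corr, self.norm)):
            spacing = self.n ** i
            for k in range(len(c)):
                if m[k]:
                    out.append((k * spacing, c[k] / m[k]))
        return out
```

Feeding in one velocity sample per timestep via `add(v)` would then give short lags at full resolution from level 0 and long lags, spaced $n^i\Delta t$ apart, from the higher levels, all in $O(n \cdot \text{levels})$ work per sample.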

The authors later claim that the total storage for a simulation of length $t=n^i\Delta t$ is $ni$ instead of $n^i$ - which I sort of understand, because we somehow reduce the number of samples we keep to evaluate these correlations. But I am still not sure how this method of blocking is actually helping.
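For concreteness, here is the arithmetic behind that storage claim as I read it (assuming the scheme keeps only $n$ entries per level, for $i$ levels; the numbers are just an illustration):

```python
# Storage for a run of length t = n**i * dt, with illustrative n and i:
n, i = 10, 6
naive = n ** i   # store every velocity sample up to time t
blocked = n * i  # keep n buffer slots per level, for i levels
print(f"naive: {naive} samples, blocked: {blocked} samples")
# -> naive: 1000000 samples, blocked: 60 samples
```

So the saving is in memory (and in the number of pair products evaluated per step), at the price of losing time resolution at long lags.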