A very simple Python script shows that the correlation coefficient and the Spearman rank correlation between two datasets of uniform random numbers drop as one over the square root of the number of points in the dataset. However, when one compares the Markov chains produced by accumulating those numbers, there is no such behaviour, and even for large datasets the correlation can easily be around 0.7. What is the origin of this behaviour? Are two random Markov chains more "related" to each other than two random datasets? Can this effect be corrected?
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
x = np.random.normal(0, 1, 10000)
y = np.random.normal(0, 1, 10000)
# Convert random data into Markov chains
for j in range(1, 10000):
    x[j] += x[j-1]
    y[j] += y[j-1]
print("Correlation: ", np.abs(np.corrcoef(x, y)[0, 1]))
print("Spearman rank: ", np.abs(scipy.stats.spearmanr(x, y)[0]))
plt.figure()
plt.plot(x)
plt.plot(y)
plt.show()


The problem is that your Markov chain (a pure accumulator, i.e. a random walk) is not stationary: its variance grows so much over time that it is useless to try to estimate a correlation coefficient by averaging (the variance of the estimator, even after dividing by $n$, does not tend to zero).
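A quick sketch of this (assuming NumPy; the trial counts and sample sizes here are arbitrary choices): the spread of the sample correlation between two independent random walks does not shrink as $n$ grows, whereas for white noise it decays like $1/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (1_000, 10_000, 100_000):
    # Sample correlation between 200 pairs of independent random walks.
    corrs = [np.corrcoef(np.cumsum(rng.normal(size=n)),
                         np.cumsum(rng.normal(size=n)))[0, 1]
             for _ in range(200)]
    # The spread stays roughly constant in n, instead of shrinking
    # like the ~1/sqrt(n) spread seen for white-noise pairs.
    print(n, np.std(corrs))
```

Each printed standard deviation stays of order a few tenths regardless of $n$, which is exactly the "non-vanishing variance of the estimator" described above.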
You can check this by adding a small "forgetting" factor. Denoting by $X[n]$ the Markov chain and by $x[n]$ the independent process (white noise), you have $X[n] = a X[n-1] + x[n]$. For $0 < a < 1$, $X[n]$ is (asymptotically) stationary.
You can check that taking, say, $a = 0.99$ already gets rid of the problem: the (estimate of the) correlation coefficient is practically zero.
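A minimal sketch of that check (assuming NumPy; the seed and $n$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 100_000, 0.99

x = rng.normal(size=n)
y = rng.normal(size=n)

# AR(1) "forgetting" version of the accumulator: X[k] = a*X[k-1] + x[k].
X = np.empty(n)
Y = np.empty(n)
X[0], Y[0] = x[0], y[0]
for k in range(1, n):
    X[k] = a * X[k-1] + x[k]
    Y[k] = a * Y[k-1] + y[k]

r = abs(np.corrcoef(X, Y)[0, 1])
print("Correlation with a=0.99:", r)  # close to zero, unlike a=1
```

With $a=1$ this is exactly the original accumulator and the spurious correlation returns; any $a$ strictly below 1 restores a consistent estimator.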
This is explained in more detail here.
BTW: you speak of "uniform random numbers", but your code uses a normal distribution. That is not essential, though it is better to use zero-mean variables.