I am running a random-walk simulation using step lengths drawn from NumPy's random.normal. I understand that I should get a mean squared displacement of $\sigma(t)^2$ when I use
xp1 += random.normal(loc = 0.0,scale = sigma)
which I do.
What I don't understand is why I also get an MSD of $\sigma(t)^2$ when I use
xp2 += sigma*random.normal(loc = 0.0,scale = 1.0)
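For what it's worth, a quick check of the step distributions on their own (a sketch with my own variable names, not part of my simulation) shows that both generators produce samples with the same spread:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.2      # arbitrary example value
n = 1_000_000

steps1 = rng.normal(loc=0.0, scale=sigma, size=n)        # N(0, sigma)
steps2 = sigma * rng.normal(loc=0.0, scale=1.0, size=n)  # sigma * N(0, 1)

# both sample standard deviations come out close to sigma
print(steps1.std(), steps2.std())
```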
Here is the full code for my simulation:
import numpy as np
import matplotlib.pyplot as plt

dt = 0.001            #length of time step
tf = 10.0             #time to run simulation
tmax = int(tf/dt)     #number of steps to run
sigma = np.sqrt(2*dt) #standard deviation of each step
run_n = 1000          #number of runs

xp1s = np.zeros(tmax) #MSD accumulator for N(0.0, sigma) steps
xp2s = np.zeros(tmax) #MSD accumulator for sigma*N(0.0, 1.0) steps

for run in range(run_n):
    #how much the particle moves at each point in time
    xp1 = np.random.normal(0.0, sigma, size=tmax)
    xp2 = sigma*np.random.normal(0.0, 1.0, size=tmax)
    #position at each time is the sum of the steps before it
    xp1tmp = np.cumsum(xp1)
    xp2tmp = np.cumsum(xp2)
    #accumulate the MSD for each point in time
    xp1s += xp1tmp**2 / run_n
    xp2s += xp2tmp**2 / run_n

plt.plot(np.linspace(0, tf, tmax), xp1s)
plt.plot(np.linspace(0, tf, tmax), xp2s)
plt.show()
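As a separate sanity check (a vectorized sketch with my own variable names, using a shorter run time just to keep it fast), the run-averaged MSD from $N(0,\sigma)$ steps does grow linearly, matching $2t$ given $\sigma^2 = 2\,dt$ per step:

```python
import numpy as np

dt = 0.001
tf = 1.0
tmax = int(tf / dt)
sigma = np.sqrt(2 * dt)
run_n = 2000

rng = np.random.default_rng(1)
# one row of positions per run: cumulative sums of N(0, sigma) steps
pos = np.cumsum(rng.normal(0.0, sigma, size=(run_n, tmax)), axis=1)
msd = (pos ** 2).mean(axis=0)  # average over runs at each time

t = np.arange(1, tmax + 1) * dt
print(msd[-1], 2 * t[-1])  # empirical MSD at t = tf vs the 2t prediction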
$N(0,\sigma)$ and $\sigma N(0,1)$ look like two very different expressions. I even computed $\int_{-\infty}^{\infty} x^2 N(0,\sigma)\,dx$ and $\int_{-\infty}^{\infty} \sigma x^2 N(0,1)\,dx$ and got different answers.
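Written out with the Gaussian densities explicit (my notation), the two integrals I evaluated are

$$\int_{-\infty}^{\infty} x^2 \,\frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}\, dx = \sigma^2 \qquad \text{and} \qquad \int_{-\infty}^{\infty} \sigma x^2 \,\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx = \sigma \cdot 1 = \sigma,$$

which indeed disagree whenever $\sigma \neq 1$.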
I don't understand why the two MSDs should be the same.
