Say I have a series $x_1, \dots, x_N$ independently drawn from $Normal(0,1)$.
Say I create overlapping sub-samples
$y_1 = x_1 + \dots + x_n$,
$y_2 = x_2 + \dots + x_{n+1}$,
$\dots$
$y_m = x_{N-n+1} + \dots + x_N$,
with $m = N - n + 1$.
I was expecting $std(\{y\}) = \sqrt{n}\,std(\{x\})$,
where $std(\{y\})$ is the sample standard deviation of the set $y_1, \dots, y_m$ and $std(\{x\})$ is the sample standard deviation of the set $x_1, \dots, x_N$.
But this is not the case: because the sub-samples overlap, the $y_i$ are positively correlated, and I observe $std(\{y\}) < \sqrt{n}\,std(\{x\})$.
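To make the overlap explicit: $y_i$ and $y_{i+k}$ share $n-k$ of the underlying $x$'s, so for lags $k < n$

$$Cov(y_i, y_{i+k}) = (n-k)\,\sigma^2, \qquad Corr(y_i, y_{i+k}) = \frac{n-k}{n},$$

where $\sigma^2 = Var(x_i) = 1$. The sample variance of positively correlated observations is biased downward, which is presumably the source of the effect.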
If I performed non-overlapping sampling like
$z_1 = x_1 + \dots + x_n$,
$z_2 = x_{n+1} + \dots + x_{2n}$,
$\dots$
$z_p = x_{(p-1)n+1} + \dots + x_{pn}$,
with $p = \lfloor N/n \rfloor$,
then I would get $std(\{z\}) \approx \sqrt{n}\,std(\{x\})$,
where $std(\{z\})$ is the sample standard deviation of the set $z_1, \dots, z_p$.
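For concreteness, here is a small NumPy sketch of the effect; the sizes $N = 200$, $n = 50$ and the number of trials are arbitrary illustration choices, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, trials = 200, 50, 2000   # illustrative sizes; overlap bias is large when n/N is not small

overlap_ratios, block_ratios = [], []
for _ in range(trials):
    x = rng.standard_normal(N)
    # overlapping sums y_i = x_i + ... + x_{i+n-1}, i = 1, ..., N - n + 1
    y = np.convolve(x, np.ones(n), mode="valid")
    # non-overlapping block sums z_1, ..., z_p with p = N // n
    z = x[: (N // n) * n].reshape(-1, n).sum(axis=1)
    s = np.sqrt(n) * x.std(ddof=1)          # the "expected" value sqrt(n) * std(x)
    overlap_ratios.append(y.std(ddof=1) / s)
    block_ratios.append(z.std(ddof=1) / s)

overlap_mean = float(np.mean(overlap_ratios))
block_mean = float(np.mean(block_ratios))
print(f"overlapping:     std(y) / (sqrt(n) std(x)) ~ {overlap_mean:.3f}")
print(f"non-overlapping: std(z) / (sqrt(n) std(x)) ~ {block_mean:.3f}")
```

Averaged over trials, the overlapping ratio comes out clearly below 1, while the non-overlapping ratio sits near 1 (up to the usual small-sample bias of the standard deviation, since there are only $p = 4$ blocks here).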
My question: is there a theoretical estimate of the amount of this bias for the normal distribution?
Thanks in advance