I have already asked this question in Physics, but as someone pointed out it is more appropriate to place it here.
Say I've obtained a series of 3D vectors, $(x, y, z)$, one at each point in time. I want to obtain the second moment of this set of vectors' fluctuations in time. This is given by:
$\langle \mathbf M^2 \rangle - \langle \mathbf M \rangle^2$,
where $\mathbf M$ is the vector in 3D and the average is taken over time. To me, you can compute this simply by taking:
$\langle \mathbf M^2 \rangle - \langle \mathbf M \rangle^2 = \langle (x, y, z) \cdot (x, y, z) \rangle - \langle (x, y, z) \rangle \cdot \langle (x, y, z) \rangle$.
However, I have seen this solved by decomposing the vector along the different Cartesian axes:
$\langle \mathbf M^2 \rangle - \langle \mathbf M \rangle^2 = \langle M_x^2 \rangle - \langle M_x \rangle^2 + \langle M_y^2 \rangle - \langle M_y \rangle^2 + \langle M_z^2 \rangle - \langle M_z \rangle^2$.
This is shown on page 2548 in Pitera et al., Biophysical Journal, 2001, 20, 2546–2555. Paper Link.
I just wanted some clarification on whether this second method is correct, and, I suppose, whether the first is as well. To me, the second method does not seem valid, since you cannot decompose $\langle \mathbf M \rangle$ as they have, and even less so if you then square the decomposed parts.
Thank you for any help, and apologies for any grievances about the formatting.
The average of the square is the average of the square of the components by linearity of expectations: $$ \langle\mathbf M^2\rangle=\langle M_x^2+M_y^2+M_z^2\rangle=\langle M_x^2\rangle+\langle M_y^2\rangle+\langle M_z^2\rangle $$
The square of the average can be decomposed into components, and no cross terms will survive, since we're evaluating an inner product in an orthonormal basis (e.g. $\hat x \cdot \hat y = 0$; in a non-orthonormal basis, this decomposition would not be valid). $$ \langle\mathbf M\rangle^2=\langle M_x\hat x+M_y\hat y+M_z\hat z\rangle^2=\left(\langle M_x\rangle \hat x+\langle M_y\rangle \hat y+\langle M_z\rangle \hat z\right)^2=\langle M_x\rangle^2+\langle M_y\rangle^2+\langle M_z\rangle^2 $$ Does this clear up the logic behind the decomposition?
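If it helps, you can also verify the equivalence numerically. Here is a minimal sketch in plain Python (with made-up sample data) that computes the variance both ways, as the dot-product form and as the sum of per-component variances, and checks that they agree:

```python
# Variance of a 3D vector time series, computed two equivalent ways.
# The data below are arbitrary illustrative values.
vectors = [(1.0, 2.0, 0.5), (0.5, 1.5, 1.0), (2.0, 0.0, 0.5), (1.5, 1.0, 1.5)]

def mean(xs):
    return sum(xs) / len(xs)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Method 1: <M . M> - <M> . <M>
mean_square = mean([dot(v, v) for v in vectors])
mean_vector = tuple(mean(component) for component in zip(*vectors))
var1 = mean_square - dot(mean_vector, mean_vector)

# Method 2: sum of (<M_i^2> - <M_i>^2) over the Cartesian components
var2 = sum(
    mean([v[i] ** 2 for v in vectors]) - mean([v[i] for v in vectors]) ** 2
    for i in range(3)
)

print(abs(var1 - var2) < 1e-12)  # True: the two methods agree
```

The agreement is exact up to floating-point rounding, because expanding the dot product in an orthonormal basis gives precisely the component-wise sum shown above.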