I have a large number of random vectors, each representing one independent realisation of a signal (ordered samples at evenly-spaced points along a line). I then simulate a physical measurement process that operates on these realisations; its effect is to apply a (discrete) Gaussian filter of width sigma along each signal. I then compute the (auto-)covariance matrix K_XX over the set of filtered random vectors. My aim is to understand the effect on K_XX of varying sigma.
So my question is: rather than recomputing K_XX for a large number of values of sigma and analysing the behaviour, can I simply do the filtering and the K_XX computation in the reverse order? In other words, can I filter each row and each column of K_XX with the same Gaussian filter? For the sake of argument, I'm happy to ignore edge effects.
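For what it's worth, one can sanity-check this ordering question numerically before making any Fourier argument. The sketch below (NumPy/SciPy; the sample counts and sigma value are arbitrary choices) compares filter-then-covariance against covariance-then-2D-smoothing, using periodic (`mode='wrap'`) boundaries to honour the ignore-edge-effects assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(0)
n_samples, n_points, sigma = 500, 64, 3.0

# Each row is one realisation of the signal.
X = rng.standard_normal((n_samples, n_points))

# Order 1: filter each realisation, then compute the covariance matrix.
# mode='wrap' makes the filter circulant, sidestepping edge effects.
Xf = gaussian_filter1d(X, sigma, axis=1, mode='wrap')
K_filtered = np.cov(Xf, rowvar=False)

# Order 2: compute the covariance of the raw data, then smooth it along
# BOTH axes with the same Gaussian (a scalar sigma filters each axis).
K_raw = np.cov(X, rowvar=False)
K_smoothed = gaussian_filter(K_raw, sigma, mode='wrap')

# Covariance is bilinear, so Cov(G x) = G Cov(x) G^T: the two orderings
# apply the same linear operator and should agree to float precision.
print(np.max(np.abs(K_filtered - K_smoothed)))
```

Note that the second ordering smooths along both axes of K_XX, not just one, since the filter matrix appears on both sides of the covariance.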
I tried to justify this in the following way. By the Wiener–Khinchin theorem, the power spectral density (PSD) is the Fourier transform (FT) of the auto-covariance; equivalently, the auto-covariance is the inverse FT of the PSD. So I hope it is correct to say that the Fourier-space equivalent of the auto-covariance operation is taking the modulus-squared of the Fourier coefficients (to get the PSD), and the equivalent of the Gaussian filter (a convolution) is multiplication by a Gaussian (by the convolution theorem and the fact that the FT of a Gaussian is a Gaussian). Viewed in Fourier space, is it then correct to say that multiplication-by-Gaussian followed by mod-squared is equivalent to mod-squared followed by multiplication-by-Gaussian, except for a factor of 2 in the exponent — since |G(f)X(f)|^2 = G(f)^2 |X(f)|^2 for a real, non-negative transfer function G(f)?
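The "factor of 2 in the exponent" can also be stated in the signal domain: squaring the Gaussian transfer function corresponds to convolving the Gaussian kernel with itself, and (for a continuous Gaussian) that self-convolution is again a Gaussian, with width sigma*sqrt(2). A small numerical sketch of that identity (sigma chosen large enough that discretisation and truncation errors are negligible):

```python
import numpy as np

sigma = 2.0
x = np.arange(-30, 31)  # support wide enough that the tails are negligible

def gaussian(x, s):
    """Normalised Gaussian of standard deviation s, sampled at x."""
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

g = gaussian(x, sigma)

# Self-convolution of the kernel = squaring its Fourier transform.
gg = np.convolve(g, g)  # full convolution: defined on positions -60 .. 60

# A single Gaussian with sigma^2 doubled in the exponent, i.e. width sigma*sqrt(2).
g2 = gaussian(np.arange(-60, 61), sigma * np.sqrt(2))

print(np.max(np.abs(gg - g2)))  # tiny: the two kernels coincide
```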
I tried searching for information on this but couldn't find anything directly relevant. Perhaps someone has a link or can comment, even if only to point out flaws in my approach?
One further question relates to the computation of the cross-covariance matrix K_XY of two sets of random vectors, where each set is derived from the same original set but filtered by Gaussians of different widths sigma_X and sigma_Y. Would K_XY be the same as the original set's covariance matrix smoothed by an anisotropic (separable) 2D Gaussian, with width sigma_X along one axis and sigma_Y along the other?
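The same kind of numerical check extends to the cross-covariance case. In the sketch below (again with arbitrary sizes and sigma values, and periodic boundaries to sidestep edge effects), the two differently-filtered sets share the same underlying realisations, and the raw covariance is smoothed with sigma_X along one axis and sigma_Y along the other:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
n_samples, n_points = 500, 64
sigma_x, sigma_y = 2.0, 5.0

Z = rng.standard_normal((n_samples, n_points))  # common original set
X = gaussian_filter1d(Z, sigma_x, axis=1, mode='wrap')
Y = gaussian_filter1d(Z, sigma_y, axis=1, mode='wrap')

def cross_cov(A, B):
    """Sample cross-covariance matrix between the columns of A and of B."""
    Ac, Bc = A - A.mean(axis=0), B - B.mean(axis=0)
    return Ac.T @ Bc / (len(A) - 1)

# Order 1: filter the realisations, then form K_XY.
K_xy = cross_cov(X, Y)

# Order 2: form the raw covariance, then smooth with an anisotropic
# separable Gaussian: sigma_x along axis 0, sigma_y along axis 1.
K_zz = cross_cov(Z, Z)
K_smoothed = gaussian_filter1d(
    gaussian_filter1d(K_zz, sigma_x, axis=0, mode='wrap'),
    sigma_y, axis=1, mode='wrap')

print(np.max(np.abs(K_xy - K_smoothed)))
```

As in the auto-covariance case, bilinearity gives K_XY = G_X K G_Y^T, which is exactly the separable axis-aligned smoothing above.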
Many thanks.