How can a scaling factor determine positive semi-definiteness?


In time series analysis, the sample autocovariance function is scaled by $\frac{1}{n}$ instead of $\frac{1}{n-h}$. More specifically, the covariance matrix $\hat{\gamma}$ of a stationary series is defined by $$ \widehat{\gamma}(s, t) = \widehat{\gamma}(|s-t|) = \widehat{\gamma}(h) = n^{-1}\sum_{t=1}^{n-h}(x_{t+h}-\bar{x})(x_t-\bar{x}) $$ where $h = |s-t|$. In all textbooks and in my lectures it is simply stated that this choice, rather than $\widehat{\gamma}(h) = (n-h)^{-1}\sum_{t=1}^{n-h}(x_{t+h}-\bar{x})(x_t-\bar{x})$, ensures that the matrix $\hat{\gamma}$ is positive semidefinite.

I originally thought a scaling factor could not influence this, but there is even an example over at stats.stackexchange. Unfortunately, after playing around with it and following the reasoning (which I agree with), I still arrive at the wrong answer.

Could someone please shed some light on how the scaling factor influences positive (semi-)definiteness and, in some cases (see the example), even invertibility?

Many thanks in advance. This isn't important for my course, but I can't concentrate on other things until I understand it.

BEST ANSWER

The divisor $n-h$ is not a scaling factor, because $h$ differs across matrix entries: each entry is divided by a different number. You're right that a single positive scaling factor applied to all matrix elements can't affect positive (semi-)definiteness. With the common divisor $n$, however, the whole matrix $\hat{\gamma} = [\widehat{\gamma}(|s-t|)]_{s,t}$ can be written as $n^{-1}MM^\top$, where the rows of $M$ are shifted copies of the centered, zero-padded series; any matrix of the form $MM^\top$ is automatically positive semidefinite, since $v^\top MM^\top v = \|M^\top v\|^2 \ge 0$. No such factorization exists when each entry carries its own divisor $n-h$.
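A small numerical check makes this concrete (a NumPy sketch of my own, not from the question): for the series $x = (1, 2, 3)$, the $1/n$ estimator yields a positive semidefinite matrix, while the $1/(n-h)$ estimator produces a negative eigenvalue.

```python
import numpy as np

def acov(x, divisor):
    """Sample autocovariance gamma_hat(h) for h = 0, ..., n-1.
    divisor(h) is n for the 1/n estimator, n-h for the 1/(n-h) one."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()  # centered series
    return np.array([np.sum(xc[h:] * xc[:n - h]) / divisor(h) for h in range(n)])

def acov_matrix(gamma):
    """Toeplitz matrix with entries gamma(|s - t|)."""
    n = len(gamma)
    return np.array([[gamma[abs(s - t)] for t in range(n)] for s in range(n)])

x = [1.0, 2.0, 3.0]
n = len(x)

biased = acov_matrix(acov(x, lambda h: n))        # divisor 1/n
unbiased = acov_matrix(acov(x, lambda h: n - h))  # divisor 1/(n-h)

print(np.linalg.eigvalsh(biased))    # all eigenvalues >= 0
print(np.linalg.eigvalsh(unbiased))  # smallest eigenvalue is negative
```

Here $\widehat{\gamma}(2)$ is a single product, $(x_3-\bar{x})(x_1-\bar{x}) = -1$; dividing it by $n-h = 1$ instead of $n = 3$ inflates that corner entry enough to push an eigenvalue below zero.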