Let $\left\{X_t,t\in T\right\}$ be a stationary process such that $\text{Var}(X_t)<\infty$ for each $t\in T$. The autocovariance function $\gamma_X(\cdot)=\gamma(\cdot)$ of $\left\{X_t\right\}$ is defined to be $$ \gamma(h)=\text{Cov}(X_{h+t},X_t)~\forall h,t\in\mathbb{Z}. $$ Moreover, assume that $EX_t=0$ for each $t\in T$.
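For concreteness, an example satisfying all of these assumptions is the MA(1) process $X_t=Z_t+\theta Z_{t-1}$ with $\{Z_t\}\sim\mathrm{WN}(0,\sigma^2)$, $\sigma^2>0$: it has mean zero and autocovariance $$ \gamma(h)=\begin{cases}\sigma^2(1+\theta^2), & h=0,\\ \sigma^2\theta, & |h|=1,\\ 0, & |h|\geq 2,\end{cases} $$ so that $\gamma(0)>0$ and $\gamma(h)\to 0$ as $h\to\infty$.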
Consider the following statement:
If $\gamma(0)>0$ and $\gamma(h)\to 0$ as $h\to\infty$, then the covariance matrix $\Gamma_n=[\gamma(i-j)]_{i,j=1,\ldots,n}$ of the column vector $(X_1,\ldots,X_n)'$ is non-singular for every $n$.
The proof of this starts as follows:
Suppose that $\Gamma_n$ is singular for some $n$. Then since $EX_t=0$ there exists an integer $r\geq 1$ and real constants $a_1,\ldots,a_r$ such that $\Gamma_r$ is non-singular and $$ X_{r+1}=\sum_{j=1}^r a_j X_j.~~~~~(*) $$
I do not completely see this argument. In particular, in my opinion there are two inaccuracies.
Here's how I understand it:
Suppose $\Gamma_n$ is singular, i.e. its determinant is zero, so its columns (rows) are linearly dependent. Consider the leading principal submatrices $\Gamma_1,\Gamma_2,\ldots,\Gamma_n$: since $\Gamma_1=[\gamma(0)]$ is non-singular (because $\gamma(0)>0$) and $\Gamma_n$ is singular, there is a smallest index $k\geq 2$ for which $\Gamma_k$ is singular. Setting $r:=k-1\geq 1$, all columns (rows) of $\Gamma_r$ are linearly independent, meaning that $\Gamma_r$ has positive determinant, i.e. is non-singular, while $\Gamma_{r+1}$ is singular.
So far so good.
It remains to argue how to get $(*)$.
The proof refers to the following statement, which is presumably what one should use here:
If $X=(X_1,\ldots,X_n)'$ is a random vector with covariance matrix $\Sigma$, then $\Sigma$ is singular if and only if there exists a non-zero vector $b=(b_1,\ldots,b_n)'\in\mathbb{R}^n$ such that $\text{Var}(b'X)=0$.
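For what it's worth, the reason this criterion holds is the quadratic-form identity $$ \text{Var}(b'X)=b'\Sigma b, $$ combined with the fact that, for a symmetric positive semidefinite matrix $\Sigma$, one has $b'\Sigma b=0$ for some non-zero $b$ if and only if $\Sigma b=0$, i.e. if and only if $\Sigma$ is singular.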
So, I apply this as follows:
As done above, let $\Gamma_r$ be non-singular. Since $r+1$ was the smallest index with a singular covariance matrix, the covariance matrix $\Gamma_{r+1}$ of $X=(X_1,X_2,\ldots,X_{r+1})'$ is singular. By the cited statement, there is some non-zero $b=(b_1,b_2,\ldots,b_{r+1})'\in\mathbb{R}^{r+1}$ such that $$ \text{Var}(b'X)=0. $$ Since a random variable with zero variance equals its expectation almost surely and $E(b'X)=0$ (because $EX_t=0$), this means that $$ b'X=\sum_{i=1}^{r+1}b_iX_i=E(b'X)=0\text{ almost surely}, $$ implying that $$ b_{r+1}X_{r+1}=-\sum_{i=1}^r b_iX_i. $$ Moreover, we may assume that $b_{r+1}\neq 0$: if $b_{r+1}=0$, then $(b_1,\ldots,b_r)'$ is a non-zero vector with $$ 0=\text{Var}(b_1X_1+b_2X_2+\ldots+b_rX_r+b_{r+1}X_{r+1})=\text{Var}(b_1X_1+b_2X_2+\ldots+b_rX_r), $$ so that, by the cited statement, $\Gamma_r$ would be singular, contradicting its non-singularity. Hence $$ X_{r+1}=\sum_{i=1}^r a_iX_i,~~~a_i:=-\frac{b_i}{b_{r+1}} $$
or, more precisely, this identity holds almost surely. I don't know why the "almost surely" is omitted in the proof.
Is this okay?
It's more straightforward than that; you can get it directly from the singularity of $\Gamma_n$:
$\Gamma_n$ is singular $\Rightarrow$ there exists $b_n\neq 0$ such that $\Gamma_n b_n=0$ $\Rightarrow$ $b_n'\Gamma_n b_n=0 \iff \text{Var}(b_n'X)=0$, where $X=(X_1,X_2,\ldots,X_n)'$.
Since $E(X)=0$, $\text{Var}(b_n'X)=E\big[(b_n'X)^2\big]=0$, and a non-negative random variable with zero expectation vanishes almost surely, so $b_n'X=0$ a.s.
And the result follows.
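If it helps, here is a quick numerical illustration of this chain (just a NumPy sanity check, not part of the argument), using the deliberately degenerate process $X_t\equiv Z$ for all $t$, so that $\gamma(h)=\text{Var}(Z)$ for every $h$, the hypothesis $\gamma(h)\to 0$ fails, and $\Gamma_n$ is indeed singular:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# gamma(h) = 1 for all h, so Gamma_n = [gamma(i-j)] is the all-ones matrix.
Gamma_n = np.ones((n, n))
print(np.linalg.matrix_rank(Gamma_n))   # 1, i.e. Gamma_n is singular for n >= 2

# A non-zero null vector of Gamma_n, e.g. b = (1, -1, 0, 0)'.
b = np.array([1.0, -1.0, 0.0, 0.0])
print(Gamma_n @ b)                      # zero vector
print(b @ Gamma_n @ b)                  # 0.0, i.e. Var(b'X) = b' Gamma_n b = 0

# Simulate X: every coordinate equals the same Z ~ N(0, 1).
Z = rng.standard_normal(10_000)
X = np.tile(Z, (n, 1))                  # row t holds the samples of X_t
print(np.max(np.abs(b @ X)))            # 0.0 -- b'X = X_1 - X_2 vanishes on every sample
```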