I'm reading a book on time series analysis and I came across a derivation I cannot understand. The following excerpt is from my book, and I have highlighted in red the part I'm confused about. I have used other colors to highlight some other points in the picture. I tried to include as much information as possible so it would help with answering:
Example $\textbf{A.2}$ Time-Invariant Linear Filter
As an important example of the use of the Riesz-Fischer Theorem and the properties of mean square convergent series given in $\text{(A.5)-(A.7)}$, a time-invariant linear filter is defined as a convolution of the form $$y_t=\sum_{j=-\infty}^\infty a_jx_{t-j}\tag{A.9}$$ for each $t=0,\pm1,\pm2,\ldots,$ where $x_t$ is a weakly stationary input series with mean $\mu_x$ and autocovariance function $\gamma_x(h)$, and $a_j$, for $j=0,\pm1,\pm2,\ldots,$ are constants satisfying $$\color{#FF7F27}{\boxed{\color{black}{\displaystyle\,\,\sum_{j=-\infty}^\infty |a_j|<\infty.}}}\tag{A.10}$$ The output series $y_t$ defines a filtering or smoothing of the input series that changes the character of the time series in a predictable way. We need to know the conditions under which the outputs $y_t$ in $\text{(A.9)}$ and the linear process $\text{(1.29)}$ exist.
Considering the sequence $$y_t^n=\sum_{j=-n}^na_jx_{t-j},\tag{A.11}$$ $n=1,2,\ldots,$ we first need to show that $y_t^n$ has a mean square limit. By Theorem A.1, it is enough to show that $$E|y_t^n-y_t^m|^2\to0$$ as $m,n\to\infty$. For $n>m>0$, $$\begin{align}E|y_t^n-y_t^m|^2&= E\,\left|\sum_{m<|j|\le n}a_jx_{t-j}\right|^2\\ &=\sum_{m<|j|\le n}\sum_{m<|k|\le n}a_ja_kE(x_{t-j}x_{t-k})\\ &\le\sum_{m<|j|\le n}\sum_{m<|k|\le n}|a_j||a_k||E(x_{t-j}x_{t-k})|\\ &\le\sum_{m<|j|\le n}\sum_{m<|k|\le n}|a_j||a_k|(E|x_{t-j}|^2)^{1/2}(E|x_{t-k}|^2)^{1/2}\\ &\color{#ED1C24}{\boxed{\color{black}{=\color{#FFF200}{\boxed{\color{black}{\gamma_x(0)}}}\,\color{#FF7F27}{\boxed{\color{black}{\displaystyle\left(\sum_{m<|j|\le n}|a_j|\right)^2}}}\to 0}}}\color{#ED1C24}{\boldsymbol{\longleftarrow} \text{Why does this part converge to zero?}} \end{align}$$ as $m,n\to\infty$, because $\gamma_x(0)$ $\color{#FFF200}{\underline{\color{black}{\text{is a constant}}}}$ and $\{a_j\}$ $\color{#FF7F27}{\underline{\color{black}{\text{is absolutely summable}}}}$ (the second inequality follows from the Cauchy-Schwarz inequality).
Although we know that the sequence $\{y_t^n\}$ given by $\text{(A.11)}$ converges in mean square, we have not established its mean square limit. It should be obvious, however, that $y_t^n\stackrel{ms}{\to}y_t$ as $n\to\infty$, where $y_t$ is given by $\text{(A.9)}$.$^1$
$\text{_________}$
$^1$ If $S$ denotes the mean square limit of $y_t^n$, then using Fatou's Lemma, $E|S-y_t|^2=E\liminf_{n\to\infty}|S-y_t^n|^2\le\liminf_{n\to\infty}E|S-y_t^n|^2=0$, which establishes that $y_t$ is the mean square limit of $y_t^n$.
My question is: Why does the red part converge to zero? We know that $\gamma_x(0)$ is a constant and that $\{a_j\}$ is absolutely summable, but couldn't it be that the sum $\sum_{m<|j|\leq n}|a_j| \geq 1$? In that case the red part wouldn't converge to zero, right?
The sequence is absolutely summable, but the sum doesn't have to be $< 1$, right? If the sum $\sum_{m<|j|\leq n}|a_j|$ were less than $1$, or if $\gamma_x(0)$ equaled $0$, I could understand this. What obvious thing am I missing? =)
Thank you for any help!
If the $a_j$ are absolutely summable, then the partial sums of $\sum_j|a_j|$ form a Cauchy sequence, so the tails of the series must vanish:
$$\lim_{m\rightarrow\infty}\sum_{|j|>m}|a_j|=0.$$
Since $0\le\sum_{m<|j|\le n}|a_j|\le\sum_{|j|>m}|a_j|$ for every $n>m$, this implies
$$\lim_{m,n\rightarrow\infty}\left ( \sum_{m<|j|\le n}|a_j|\right )^2=0.$$
The point is not whether the sum is $\ge 1$ for some particular $m$ and $n$: the limit is taken as *both* $m$ and $n$ tend to infinity, so the sum runs over ever-later terms of a convergent series and is eventually as small as you like. (Note that $\lim_{j\to\infty}|a_j|=0$ alone would *not* be enough — e.g. $a_j=1/|j|$ satisfies it but is not absolutely summable, and its tails do not vanish.)
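A quick numerical sketch (my own illustration, not from the book — the weights $a_j = 0.9^{|j|}$ are a hypothetical choice) shows both halves of this at once: the tail sum can easily exceed $1$ for small $m$, yet it still shrinks to zero as $m$ grows.

```python
# Hypothetical absolutely summable weights a_j = 0.9 ** |j|.
# The full sum is 1 + 2 * (0.9 / 0.1) = 19, so early tail sums
# easily exceed 1 -- yet the tails vanish as m grows, which is
# all the book's proof needs.

def tail_sum(m, n):
    """Sum of |a_j| over m < |j| <= n, with a_j = 0.9 ** |j|."""
    return 2 * sum(0.9 ** j for j in range(m + 1, n + 1))

early = tail_sum(0, 10_000)    # tail after m = 0: about 18, far above 1
late = tail_sum(50, 10_000)    # tail after m = 50: already below 0.1

print(early, late)
```

So yes, the orange-boxed sum can be $\ge 1$ for a fixed small $m$ — that doesn't matter, because the claim is only about the limit $m,n\to\infty$, where the tails of any convergent series of nonnegative terms are forced to zero.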