I am reading *Neural Networks and Learning Machines*, Third Edition, by Simon Haykin, and near the beginning I came across something that perplexes me.
I'll quote a whole page because some of the context may be important and I might fail to notice it. But the question I am asking is: is it true that
$$(1 - w z^{-1})^{-1} = \sum_{l=0}^{\infty} w^l z^{-l}$$ ?
And now the quote (I won't use the quote sign ">" in order not to mess with the equations):
Feedback is said to exist in a dynamic system whenever the output of an element in the system influences in part the input applied to that particular element, thereby giving rise to one or more closed paths for the transmission of signals around the system. [...] Moreover, it plays a major role in the study of a special class of neural networks known as recurrent networks. Figure 12 shows the signal-flow graph of a single-loop feedback system, where the input signal $x_j(n)$, internal signal $x'_j(n)$, and output signal $y_k(n)$ are functions of the discrete-time variable $n$. The system is assumed to be linear, consisting of a forward path and a feedback path that are characterized by the "operators" A and B, respectively. In particular, the output of the forward channel determines in part its own output through the feedback channel. From Fig. 12, we readily note the input–output relationships $$y_k(n) = \textbf{A}[x'_j(n)]\qquad(16)$$ and $$x'_j(n) = x_j(n) + \textbf{B}[y_k(n)]\qquad(17)$$ where the square brackets are included to emphasize that A and B act as operators. Eliminating $x'_j(n)$ between Eqs. (16) and (17), we get $$y_k(n) = \frac{\textbf{A}}{1-\textbf{AB}}[x_j(n)]\qquad(18)$$
We refer to A/(1 - AB) as the closed-loop operator of the system, and to AB as the open-loop operator. In general, the open-loop operator is noncommutative in that BA $\neq$ AB. Consider, for example, the single-loop feedback system shown in Fig. 13a, for which A is a fixed weight $w$ and B is a unit-delay operator $z^{-1}$, whose output is delayed with respect to the input by one time unit. We may then express the closed-loop operator of the system as
$$\frac{\textbf{A}}{1-\textbf{AB}} = \frac{w}{1-w z^{-1}}=w(1 - w z^{-1})^{-1}$$
Using the binomial expansion for $(1 - w z^{-1})^{-1}$, we may rewrite the closed-loop operator of the system as
$$\frac{\textbf{A}}{1-\textbf{AB}} = w\sum_{l=0}^{\infty} w^l z^{-l}\qquad (19)$$
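(End of quote.) As a quick sanity check of my own (not from the book), I simulated the feedback loop directly: with an impulse input, iterating Eqs. (16) and (17) should produce the output $w^{n+1}$ at time $n$, i.e. exactly the coefficients of $w\sum_l w^l z^{-l}$ in Eq. (19). The variable names here are my own.

```python
# Simulate the single-loop feedback system of Fig. 13a:
#   y_k(n) = w * x'_j(n)            (Eq. 16, A = fixed weight w)
#   x'_j(n) = x_j(n) + y_k(n-1)     (Eq. 17, B = unit delay z^{-1})
# For an impulse input x_j(n) = delta(n), Eq. (19) predicts y_k(n) = w^{n+1}.

w = 0.5
N = 8
x = [1.0] + [0.0] * (N - 1)    # impulse input delta(n)
y = []
y_prev = 0.0                   # delayed output, zero initial condition
for n in range(N):
    x_prime = x[n] + y_prev    # Eq. (17)
    y_n = w * x_prime          # Eq. (16)
    y.append(y_n)
    y_prev = y_n               # unit delay: feed y_k(n) back at time n+1

print(y)
print([w ** (n + 1) for n in range(N)])  # prediction from Eq. (19)
```

Both lists agree, which at least confirms the expansion term by term.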
I'm not sure what this has to do with the binomial theorem, or what a binomial expansion is, but it looks like a simple telescoping sum:
$$(1-x)(1+x+x^2+\cdots) = (1+x +x^2 + \cdots) - (x+x^2+x^3+ \cdots) = 1$$
hence $(1-x)^{-1} = \sum_{n\geq 0} x^n$, at least for $|x| < 1$ so that the series converges. For $x = wz^{-1}$ you get the expression from the book (assuming $w$ and $z$ are scalars, so that $(wz^{-1})^n = w^nz^{-n}$).
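To convince myself the identity is right numerically (again, my own check), the partial sums of the geometric series should approach $1/(1-x)$ for any $|x| < 1$:

```python
# Check (1 - x)^{-1} = sum_{n>=0} x^n numerically for a sample |x| < 1.
x = 0.3
partial = sum(x ** n for n in range(100))  # 100 terms is plenty for x = 0.3
print(partial)        # partial sum of the series
print(1 / (1 - x))    # closed form
```

The two values agree to machine precision, consistent with the telescoping argument above.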