How should one interpret the $n$-fold iteration (i.e. composition) $g(g(\cdots g(s)))$ of a probability generating function $g(s)$?
I'm looking at a proof that seems to suggest that if we have $m$ i.i.d. random variables $X_1, \ldots, X_m$, each distributed like $G$, where $g(s)$ is the probability generating function of $G$, then the distribution of $X_1 + \cdots + X_m$ has generating function $g^m(s)$. I can't seem to prove this except by brute-forcing the calculation for the composition and checking that it matches (is there a more elegant way to see this?) – why is this true?
The PGF of the sum of independent (but not necessarily identically distributed) random variables $X_1,\ldots,X_n$ is given by $E[s^{\sum_{i=1}^n X_i}] = E[ s^{X_1} \cdots s^{X_n}] = E[s^{X_1}] \cdots E[s^{X_n}] = g_1(s) \cdots g_n(s)$, where $g_i$ is the PGF of $X_i$; the middle equality is where independence is used, since the expectation of a product of independent random variables factors. If the $X_i$ are i.i.d. with common PGF $g$, this reduces to $(g(s))^n$, as desired. So the notation $g^m(s)$ in your proof should be read as the $m$-th *power* $(g(s))^m$, not the $m$-fold composition.
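A quick numerical sanity check of the product rule: the coefficients of a PGF are the pmf, and the pmf of an independent sum is the convolution of the pmfs, so the coefficients of $(g(s))^3$ must equal the three-fold convolution. A minimal sketch in Python (the distribution on $\{0,1,2\}$ is an arbitrary illustrative choice):

```python
import numpy as np

# An arbitrary illustrative pmf on {0, 1, 2}; these are also the
# coefficients of the PGF g(s) = 0.2 + 0.5 s + 0.3 s^2.
p = np.array([0.2, 0.5, 0.3])

# Coefficients of (g(s))^3, the PGF of X_1 + X_2 + X_3.
pgf_sum = np.polynomial.polynomial.polypow(p, 3)

# pmf of the independent sum, computed directly as a triple convolution.
pmf_sum = np.convolve(np.convolve(p, p), p)

print(np.allclose(pgf_sum, pmf_sum))  # → True
```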
As for a case where you get a *composition* of PGFs, see Mark's link in the comments -- it's a property of the structure of that problem.
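To see concretely how iteration can arise (presumably the situation in the linked problem): in a Galton–Watson branching process where each individual independently has offspring with PGF $g$, the size $Z_n$ of generation $n$ has PGF the $n$-fold composition $g(g(\cdots g(s)))$, because a sum of a *random* number of i.i.d. terms has PGF $g_N(g_X(s))$. A minimal sketch verifying this for $n = 2$, with an arbitrary illustrative offspring distribution:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Illustrative offspring distribution: 0, 1, or 2 children with
# probabilities 0.25, 0.5, 0.25 (an arbitrary choice).
p = [0.25, 0.5, 0.25]
g = Polynomial(p)            # offspring PGF g(s) = 0.25 + 0.5 s + 0.25 s^2

# PGF of the second-generation size Z_2 is the composition g(g(s));
# polynomial substitution performs the composition.
g2 = g(g)
pmf_from_pgf = g2.coef       # coefficient of s^j is P(Z_2 = j)

# Brute-force check: condition on Z_1 = k.  Given k parents, Z_2 is a
# sum of k i.i.d. offspring counts, whose pmf is the k-fold convolution.
pmf_direct = np.zeros(len(pmf_from_pgf))
conv = np.array([1.0])       # pmf of an empty sum: point mass at 0
for k, pk in enumerate(p):
    pmf_direct[:len(conv)] += pk * conv
    conv = np.convolve(conv, p)

print(np.allclose(pmf_from_pgf, pmf_direct))  # → True
```

The same conditioning argument, iterated, gives the $n$-fold composition for generation $n$.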