Why do probability generating functions have different radii of convergence?
For example, if $X \sim \mathrm{Pois}(\lambda)$, then the probability generating function is $P(z) = \sum_{k=0}^{\infty}\frac{e^{-\lambda} \lambda^k}{k!} z^k$, and using the ratio test we find that the radius of convergence is $R = \infty$ (i.e. the series converges absolutely for all $z$). For the binomial distribution we also get $R = \infty$ (its PGF is a polynomial).
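Writing out the ratio test for the Poisson series makes this explicit (a short check; here $a_k$ denotes the coefficient of $z^k$ above):

$$\lim_{k \to \infty} \left| \frac{a_{k+1} z^{k+1}}{a_k z^k} \right| = \lim_{k \to \infty} \frac{\lambda |z|}{k+1} = 0 < 1 \quad \text{for every } z \in \mathbb{C},$$

so the series converges absolutely everywhere and $R = \infty$.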
By comparison, if $X \sim \mathrm{Geo}(p)$, then the radius of convergence is $R = \frac{1}{1-p}$.
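For the geometric case this follows from summing a geometric series (taking the convention that $X$ has support $\{1, 2, \dots\}$ with $\Pr(X = k) = p(1-p)^{k-1}$):

$$P(z) = \sum_{k=1}^{\infty} p(1-p)^{k-1} z^k = \frac{pz}{1-(1-p)z},$$

which converges exactly when $|(1-p)z| < 1$, i.e. $R = \frac{1}{1-p}$.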
It seems like such differences in the radius of convergence might contain useful information about different "classes" of probability distributions, but I cannot tell whether there is some fundamental reason explaining them.
The radius of convergence tells you about the maximal exponential moments the underlying random variable admits: if $R$ is the radius of convergence of $P(z) = E[z^X]$, then $E[e^{tX}] < \infty$ for every $t < \log R$. By Markov's inequality this gives corresponding exponential decay on the tail: for any $1 < z < R$,
$$\Pr(X \ge n) = \Pr(z^X \ge z^n) \le \frac{E[z^X]}{z^n}.$$
So $R = \infty$ (Poisson, binomial) means faster-than-exponential tail decay, while a finite $R$ (geometric) pins down the exact exponential rate $1-p = 1/R$.
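As a quick numerical sanity check (my own illustration, not part of the answer above), the following sketch verifies the Markov bound $\Pr(X \ge n) \le E[z^X]/z^n$ for a geometric variable on $\{1, 2, \dots\}$, with an arbitrary choice of $p = 0.3$ and a point $z$ strictly between $1$ and $R$:

```python
# Check numerically that the Markov bound P(X >= n) <= E[z^X] / z^n
# holds for 1 < z < R, where R = 1/(1-p) is the radius of convergence
# of the PGF of a Geometric(p) variable on {1, 2, ...}.

p = 0.3                      # arbitrary success probability for the demo
R = 1.0 / (1.0 - p)          # radius of convergence of the PGF

def pgf(z):
    """E[z^X] = p*z / (1 - (1-p)*z), valid for |z| < R."""
    return p * z / (1.0 - (1.0 - p) * z)

def tail(n):
    """Exact tail probability: P(X >= n) = (1-p)^(n-1)."""
    return (1.0 - p) ** (n - 1)

z = (1.0 + R) / 2.0          # any point strictly between 1 and R works
for n in range(1, 30):
    # Markov: P(X >= n) = P(z^X >= z^n) <= E[z^X] / z^n
    assert tail(n) <= pgf(z) / z ** n
```

Taking $z$ closer to $R$ tightens the exponential rate of the bound toward the true rate $1-p$, at the cost of a larger constant $E[z^X]$, which is the usual Chernoff-style trade-off.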