Probability-generating function and variance


Let $X$ be a random variable taking values in $\mathbb{N}_{0}$, with probability-generating function $G(z) = \sum_{x \in \mathbb{N}_{0}} p(x) z^x$. Assume that $p(0) > 0$, $\mathbb{E}[X] = 1$ and $\operatorname{Var}[X]$ is finite. Why do these conditions imply that the integral $\int_{0}^{1} (1-z)(G(z)-z)^{-1} dz$ diverges to $+ \infty$? I do not see the intuition behind this integral, or its relation to the expectation and variance of $X$.


3 Answers


HINT

Note that $\mathbb{E}[X] = \sum_x x\,p(x) = G'(1)$, and since $\operatorname{Var}[X]$ is finite, the second moment is finite as well; in terms of the generating function this means $G''(1) = \sum_x x(x-1)\,p(x) = \mathbb{E}[X(X-1)] < \infty$. Can you take it from here?


Since $G^\prime(1^-)=\mu=1$ and $G^{\prime\prime}(1^-)=\mu^2+\sigma^2-\mu=\sigma^2$,$$G(1-\epsilon)=G(1^-)-G^\prime(1^-)\epsilon+\frac12G^{\prime\prime}(1^-)\epsilon^2+o(\epsilon^2)=1-\epsilon+\frac12\sigma^2\epsilon^2+o(\epsilon^2)$$for $0<\epsilon\ll1$, i.e. $G(z)-z=\frac12\sigma^2(1-z)^2+o((1-z)^2)$ for $z$ slightly less than $1$. Note that $\sigma^2>0$ here: $p(0)>0$ rules out $X$ being almost surely equal to $1$. So at the upper limit, the integral diverges like $\frac{2}{\sigma^2}\int_c^1\frac{dz}{1-z}$. (By contrast, at the lower limit the integrand is harmless: as $z\to0^+$ it tends to $1/G(0)=1/p(0)<\infty$.)
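The predicted divergence rate $\frac{2}{\sigma^2}\log\frac{1}{\epsilon}$ for $\int_0^{1-\epsilon}$ can be checked numerically. A sketch, again using Poisson(1) as a concrete example (my choice, not from the question): its pgf is $G(z) = e^{z-1}$ with $\mu = \sigma^2 = 1$, so the integral up to $1-\epsilon$ should grow like $2\log\frac{1}{\epsilon}$.

```python
import math

def integrand(z):
    # (1 - z) / (G(z) - z) for G(z) = exp(z - 1); written with expm1 to avoid
    # catastrophic cancellation near z = 1, where G(z) - z ~ (1 - z)^2 / 2.
    return (1 - z) / (math.expm1(z - 1) - (z - 1))

def integral_to(eps, n=200_000):
    """Midpoint rule for int_0^{1 - eps} (1 - z)/(G(z) - z) dz, after the
    substitution z = 1 - e^{-t}; in t the integrand is smooth and bounded."""
    T = math.log(1 / eps)       # t ranges over [0, T]
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        z = 1 - math.exp(-t)
        total += integrand(z) * math.exp(-t) * h   # dz = e^{-t} dt
    return total

# The values should grow like 2 * log(1/eps) plus a constant:
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, integral_to(eps))
```

In particular, the difference `integral_to(1e-6) - integral_to(1e-2)` should be close to $2\log 10^4 \approx 18.42$, reflecting the $\frac{2}{\sigma^2}\int^1 \frac{dz}{1-z}$ behaviour at the upper limit.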


Consider the Taylor expansion of $G(z)-z$ around $z=1$, i.e. in powers of $z-1$. The constant term vanishes since $G(1)=1$, and the term in $(z-1)^1$ vanishes since $G'(1)=\mathbb{E}[X]=1$, but the term in $(z-1)^2$ does not. This causes the integral to blow up around $z=1$, just as $\int_0^1 \frac{dx}{x}$ diverges.
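Concretely, with $\sigma^2 = \operatorname{Var}[X] > 0$ (spelling out the comparison with $\int_0^1 \frac{dx}{x}$; the substitution $u = 1-z$ is an added routine step):

$$\frac{1-z}{G(z)-z} \;=\; \frac{1-z}{\tfrac12\sigma^2(1-z)^2 + o\!\left((1-z)^2\right)} \;\sim\; \frac{2}{\sigma^2\,(1-z)} \qquad (z \to 1^-),$$

so, for $c$ close enough to $1$,

$$\int_{c}^{1} \frac{1-z}{G(z)-z}\,dz \;\asymp\; \frac{2}{\sigma^{2}} \int_{0}^{1-c} \frac{du}{u} \;=\; +\infty.$$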