I have been studying the Princeton lecture notes on saddle-point bounds in analytic combinatorics. On page 17, the following main theorem is presented:
Theorem: Let $G(z)$, not a polynomial, be analytic at the origin with nonnegative coefficients and finite radius of convergence $R$. Then $[z^n]G(z) \leq G(\zeta) / \zeta^n$, where $\zeta$ is the saddle point closest to the origin, the unique positive real root of the saddle-point equation $\zeta G'(\zeta) / G(\zeta) = n + 1$.
The proof sketch starts from Cauchy's coefficient formula $[z^n]G(z) = \frac{1}{2\pi i} \oint G(z) z^{-n - 1} \, dz$, takes the contour to be the circle $|z| = \zeta$, bounds the modulus of the integrand by its value at $z = \zeta$ (its maximum on that circle, since the coefficients are nonnegative), and finishes with the arc-length bound.
Three pages later they apply it to $G(z) = (1 + z)^{2n}$ to bound $\binom{2n}{n}$. It achieves a bound of $O(4^n)$ (or, with constant analysis, $\frac{16}{9} 4^n$), which is only a factor of $n^{1 / 2}$ off from the true value $\binom{2n}{n} \sim 4^n / \sqrt{\pi n}$.
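Here is my own quick reproduction of that example: the saddle-point equation $2n\zeta/(1+\zeta) = n+1$ gives $\zeta = (n+1)/(n-1)$, and the ratio of the bound to the true coefficient grows like a constant times $\sqrt{n}$:

```python
# Central binomial example: G(z) = (1+z)^{2n}, target [z^n]G = C(2n, n).
# The saddle-point equation 2n*z/(1+z) = n+1 gives zeta = (n+1)/(n-1).
from math import comb, sqrt

def binom_bound(n):
    zeta = (n + 1) / (n - 1)
    return (1 + zeta)**(2 * n) / zeta**n

for n in (2, 10, 100):
    ratio = binom_bound(n) / comb(2 * n, n)
    print(n, ratio, ratio / sqrt(n))   # ratio/sqrt(n) settles near a constant
```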
My questions: (1) why does the theorem require that $G(z)$ not be a polynomial, and (2) why does the bound work so well in this example?
(1) The notes give some explanation. If $G$ is a polynomial, its radius of convergence is infinite, and a finite radius of convergence is what guarantees a saddle point of $F(z) = G(z)z^{-n-1}$ on the positive real line. Concretely, if $G$ is a polynomial of degree $d$ with nonnegative coefficients, then $\zeta G'(\zeta)/G(\zeta)$ increases toward $d$ but never exceeds it, so the saddle-point equation $\zeta G'(\zeta)/G(\zeta) = n + 1$ has no positive root once $n + 1 > d$. (The example $G(z) = (1+z)^{2n}$ is a polynomial, but the equation still has a positive root there because $n + 1 \leq \deg G = 2n$.)
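A quick way to see the failure mode numerically (my own illustration, with a hypothetical cubic): the ratio $r G'(r)/G(r)$ climbs toward the degree but never reaches it, so the equation $r G'(r)/G(r) = n + 1$ is unsolvable for $n + 1$ above the degree.

```python
# For a polynomial G of degree d with nonnegative coefficients,
# r G'(r)/G(r) is bounded above by d, so r G'(r)/G(r) = n+1 has no
# positive solution once n+1 > d.  Example: G(z) = 1 + z + z^3, d = 3.

def ratio(r):
    g  = 1 + r + r**3        # G(r)
    gp = 1 + 3 * r**2        # G'(r)
    return r * gp / g

for r in (1.0, 10.0, 1000.0):
    print(r, ratio(r))   # climbs toward 3, never reaching, say, n + 1 = 5
```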
(2) is a harder question. The looseness comes entirely from bounding the modulus of the integrand $G(z) z^{-n-1}$ on the circle $|z| = \zeta$ by its maximum $G(\zeta)\zeta^{-n-1}$. In reality the integrand is concentrated on an arc of angular width of order $n^{-1/2}$ around $z = \zeta$; the full saddle-point method exploits this concentration and recovers the missing $\sqrt{n}$ factor.
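One way to observe this concentration (my own experiment, not from the notes): for $G(z) = (1+z)^{2n}$ on the circle $|z| = \zeta$, measure the angle at which the integrand's modulus drops to half its peak. That half-width shrinks like $n^{-1/2}$, while the crude bound pretends the integrand sits at its peak on the whole circle, which is exactly where the $\sqrt{n}$ is given away.

```python
import cmath, math

def half_width(n, steps=20000):
    """Smallest angle t at which |G(zeta*e^{it}) * (zeta*e^{it})^{-(n+1)}|
    drops to half its peak value, for G(z) = (1+z)^{2n}."""
    zeta = (n + 1) / (n - 1)                     # saddle point
    peak = (1 + zeta)**(2 * n) / zeta**(n + 1)   # modulus at t = 0
    for k in range(1, steps):
        t = math.pi * k / steps
        z = zeta * cmath.exp(1j * t)
        if abs((1 + z)**(2 * n) / z**(n + 1)) < peak / 2:
            return t
    return math.pi

for n in (10, 40, 160):
    print(n, half_width(n) * math.sqrt(n))   # roughly constant: width ~ n^{-1/2}
```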