Say I have a function analytic on the unit disk, say $$f(x)=a_0+a_1x+a_2x^2+...$$ If we know sufficient information about the coefficients, say we know the growth rate of $\sum\limits_{k=0}^{n}a_k$ or something similar, can we describe the growth rate of $f$ near $1$? Let's give some examples: $$1+x+x^2+...=\frac{1}{1-x},\quad x\to1^-$$ $$2+3x+4x^2+5x^3+...=\frac{1}{1-x}+\frac{1}{(1-x)^2}\approx \frac{1}{(1-x)^2},\quad x\to1^-$$ Even less simple functions behave this way, such as $$\zeta(s)\approx\frac{1}{s-1},\quad s\to1^+.$$ Etc. But what about less *elementary* functions? What about, say, $$f(x)=x+x^2+x^4+x^8+x^{16}+x^{32}+...$$ or $$f(x)=x^2+x^3+x^5+x^7+x^{11}+x^{13}+...?$$ How can we estimate the growth rate of these noble savages? I am aware that certain *nice* functions can be expanded in Laurent series around $1$, like the first two given. But what makes you think mathematics cares to be nice?
The purpose of this excursion is to investigate the relationship between the growth rate of the partial sums $\sum\limits_{k=0}^{n}a_k$ and that of $f(x)=a_0+a_1x+a_2x^2+...$. Once this is done I will look at the growth of the associated Dirichlet series $a_1/1^s+a_2/2^s+a_3/3^s+...$ near $1$.
Edit:
Essentially what I am asking for is a link between the partial sums $\sum\limits_{k=0}^{n}a_k$ and $f(x)=a_0+a_1x+a_2x^2+...$. If $f(s)$ were a Dirichlet series, such a link would be Perron's formula: $$\sum\limits_{k=0}^{n}a_k=\frac{1}{2\pi i}\int\limits_{c-i\infty}^{c+i\infty}f(z)\frac{n^z}{z}dz.$$
If the coefficients of the series are positive and can be described by a nice formula then you can get pretty far by comparing the series to an integral.
For example, if the terms of the series are eventually strictly decreasing and the series has radius of convergence $1$ with a singularity at $x=1$, then (interpreting $a_n$ as a function of a continuous variable $n$)
$$ \sum_{n=0}^{\infty} a_n x^n \sim \int_0^\infty a_n x^n \,dn $$
as $x \to 1^-$. This can be proved using the idea behind the integral test for convergence.
Applying this to the first two series yields
$$ \sum_{n=0}^{\infty} x^n \sim \int_0^\infty x^n\,dn = -\frac{1}{\log x} \sim \frac{1}{1-x} \tag{1} $$
and
$$ \sum_{n=0}^{\infty} (n+2)x^n \sim \int_0^\infty (n+2)x^n\,dn = \frac{1}{(\log x)^2} - \frac{2}{\log x} \sim \frac{1}{(1-x)^2} \tag{2} $$
as $x \to 1^-$. Note that in both cases we used the fact that
$$ \log x = x-1 + O\left((x-1)^2\right) $$
as $x \to 1$.
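Both asymptotics are easy to confirm numerically. The following is a throwaway Python sketch, not part of the argument; the sample point $x = 0.9999$ and the truncation tolerance are arbitrary choices:

```python
def power_series(coef, x, tol=1e-12):
    """Sum coef(n) * x^n for n >= 0 until the terms become negligible (0 < x < 1)."""
    total, xn, n = 0.0, 1.0, 0
    while True:
        term = coef(n) * xn
        total += term
        if n > 10 and term < tol * total:
            return total
        xn *= x
        n += 1

x = 0.9999
s1 = power_series(lambda n: 1.0, x)      # sum x^n
s2 = power_series(lambda n: n + 2.0, x)  # sum (n+2) x^n
print(s1 * (1 - x))       # should be close to 1, per (1)
print(s2 * (1 - x) ** 2)  # should be close to 1, per (2)
```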
A similar argument leads to the asymptotic
$$ \sum_{n=1}^{\infty} \frac{1}{n^s} \sim \int_1^\infty \frac{dn}{n^s} = \frac{1}{s-1} \tag{3} $$
as $s \to 1^+$.
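The same series-versus-integral idea also gives a practical way to evaluate $\zeta(s)$ near $1$: sum the first $N$ terms and replace the tail by $\int_N^\infty t^{-s}\,dt = N^{1-s}/(s-1)$. A rough Python sketch (the cutoff $N = 10^6$ is an arbitrary choice; the neglected correction is of order $N^{-s}$):

```python
def zeta_near_one(s, cutoff=10**6):
    # Partial sum plus the integral tail estimate N^(1-s)/(s-1).
    partial = sum(n ** -s for n in range(1, cutoff + 1))
    return partial + cutoff ** (1 - s) / (s - 1)

s = 1.01
print((s - 1) * zeta_near_one(s))  # close to 1, per (3)
```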
Sometimes the resulting integral can't be done in closed form but we can still obtain an asymptotic after some additional analysis. To address another of your examples let's study the estimate
$$ \sum_{n=0}^{\infty} x^{b^n} \sim \int_0^\infty x^{b^n}\,dn = \int_0^\infty \exp\Bigl[-b^n (-\log x)\Bigr]\,dn \tag{4} $$
where $b > 1$ is fixed. Making the change of variables $(-\log x) b^n = t$ yields
$$ \int_0^\infty \exp\Bigl[-b^n (-\log x)\Bigr]\,dn = \frac{1}{\log b} \int_{-\log x}^\infty e^{-t}t^{-1}\,dt. \tag{5} $$
The integral blows up as $-\log x$ approaches zero. For $t \approx 0$ the integrand is
$$ e^{-t} t^{-1} \approx t^{-1}, $$
so we expect that the integral has a logarithmic singularity here. We'll proceed by pulling out this term from the integral:
$$ \begin{align} &\int_{-\log x}^\infty e^{-t}t^{-1}\,dt \\ &\qquad = \int_{-\log x}^1 e^{-t}t^{-1}\,dt + \int_{1}^\infty e^{-t}t^{-1}\,dt \\ &\qquad = \int_{-\log x}^1 t^{-1}\,dt + \int_{-\log x}^1 \left(e^{-t}-1\right)t^{-1}\,dt + \int_{1}^\infty e^{-t}t^{-1}\,dt \\ &\qquad = -\log(-\log x) + \int_{-\log x}^1 \left(e^{-t}-1\right)t^{-1}\,dt + \int_{1}^\infty e^{-t}t^{-1}\,dt. \end{align} $$
Both integrals in the last expression remain bounded as $-\log x \to 0$, so the only unbounded term is $-\log(-\log x)$. Thus
$$ \int_{-\log x}^\infty e^{-t}t^{-1}\,dt \sim -\log(-\log x) $$
as $x \to 1^-$. By combining this with $(5)$ we get
$$ \int_0^\infty \exp\Bigl[-b^n (-\log x)\Bigr]\,dn \sim -\log_b(-\log x) $$
and so, returning to the original sum through $(4)$ and once again using the asymptotic $\log x \sim x-1$, we have arrived at the conclusion that
$$ \sum_{n=0}^{\infty} x^{b^n} \sim -\log_b(1-x) \tag{6} $$
as $x \to 1^-$.
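For a concrete check of $(6)$, here is an informal Python sketch with $b = 2$, matching the $x + x^2 + x^4 + \cdots$ example from the question. The approach to the limit is only logarithmic, so we take $x$ quite close to $1$ and expect only modest agreement:

```python
import math

def lacunary_sum(eps, max_terms=200):
    # f(x) = sum_{n>=0} x^(2^n) at x = 1 - eps, via x^(2^n) = exp(2^n * log x).
    # log1p keeps log(1 - eps) accurate for tiny eps.
    lx = math.log1p(-eps)
    total = 0.0
    for n in range(max_terms):
        e = (2 ** n) * lx
        if e < -745:  # exp underflows to 0.0 past this point
            break
        total += math.exp(e)
    return total

eps = 1e-8
print(lacunary_sum(eps), -math.log2(eps))  # ratio should approach 1, per (6)
```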
What follows has been added in response to the comments below.
The series $\sum_p x^p$, where $p$ ranges over the prime numbers, is trickier to handle. If we call the $n^\text{th}$ prime $p_n$ then it is known (by the prime number theorem) that
$$ p_n \sim n\log n $$
as $n \to \infty$. If we knew ahead of time that
$$ \sum_{n=1}^{\infty} x^{p_n} \sim \sum_{n=1}^{\infty} x^{n\log n} \tag{7} $$
as $x \to 1^-$ then we could directly obtain an asymptotic equivalent for $\sum_p x^p$ by studying the behavior of the integral $\int_1^\infty x^{n\log n}\,dn$. Unfortunately I don't know how to prove $(7)$ directly. (I've actually asked a question about this here.) We can, however, proceed using the idea presented in an answer to that question.
(Interestingly, the equivalence $(7)$ will be a corollary of our calculations: combine $(8)$ with $\lambda = 1$ and $(10)$.)
First, by comparing the series with the corresponding integral it's possible to show that, for $\lambda > 0$ fixed,
$$ \sum_{n=1}^{\infty} x^{\lambda n \log n} \sim \frac{1}{\lambda(x-1)\log(1-x)} \tag{8} $$
as $x \to 1^-$.
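A numerical sanity check of $(8)$ with $\lambda = 1$ (a Python sketch; the corrections here are of size $1/\log(1-x)$, i.e. genuinely slow to vanish, so only loose agreement is expected at any reachable $x$):

```python
import math

def nlogn_sum(eps, max_n=200_000):
    # sum_{n>=1} x^(n log n) at x = 1 - eps; terms beyond max_n are
    # far below double precision for eps = 1e-4
    lx = math.log1p(-eps)
    total = 0.0
    for n in range(1, max_n):
        e = n * math.log(n) * lx
        if e < -745:  # exp underflows to 0.0 past this point
            break
        total += math.exp(e)
    return total

eps = 1e-4
predicted = 1.0 / (eps * -math.log(eps))  # 1/((x-1) log(1-x)) at x = 1 - eps
print(nlogn_sum(eps) / predicted)  # slowly tends to 1 as eps -> 0, per (8)
```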
Fix $0 < \epsilon < 1$ and choose $N \in \mathbb N$ such that
$$ \left|\frac{p_n}{n\log n} - 1\right| < \epsilon $$
for all $n \geq N$. For $0 < x < 1$ we have
$$ \sum_{n=N}^{\infty} x^{(1+\epsilon)n\log n} < \sum_{n=N}^{\infty} x^{p_n} < \sum_{n=N}^{\infty} x^{(1-\epsilon)n\log n}. $$
By completing the three series, that is, by adding back the initial terms, we see that the above inequality is equivalent to
$$ \begin{align} &\sum_{n=1}^{\infty} x^{(1+\epsilon)n\log n} + \sum_{n=1}^{N} \left(x^{p_n} - x^{(1+\epsilon)n\log n}\right) \\ &\qquad < \sum_{n=1}^{\infty} x^{p_n} \\ &\qquad < \sum_{n=1}^{\infty} x^{(1-\epsilon)n\log n} + \sum_{n=1}^{N} \left(x^{p_n} - x^{(1-\epsilon)n\log n}\right). \end{align} \tag{9} $$
Note that the two error sums are each bounded independently of $x$:
$$ \left|\sum_{n=1}^{N} \left(x^{p_n} - x^{(1 \pm \epsilon)n\log n}\right)\right| \leq 2N. $$
Now, multiply $(9)$ by $(x-1)\log(1-x)$, which is positive for $0 < x < 1$, so the inequalities are preserved. Taking the limit inferior and limit superior as $x \to 1^-$ and using $(8)$ yields
$$ \begin{align} \frac{1}{1+\epsilon} &\leq \liminf_{x \to 1^-} (x-1)\log(1-x) \sum_{n=1}^{\infty} x^{p_n} \\ &\leq \limsup _{x \to 1^-} (x-1)\log(1-x) \sum_{n=1}^{\infty} x^{p_n} \\ &\leq \frac{1}{1-\epsilon}. \end{align} $$
This is true for all $0 < \epsilon < 1$, so by allowing $\epsilon \to 0$ we obtain
$$ \lim_{x \to 1^-} (x-1)\log(1-x) \sum_{n=1}^{\infty} x^{p_n} = 1. $$
Thus, changing the notation of the sum back to $\sum_p x^p$,
$$ \sum_p x^p \sim \frac{1}{(x-1)\log(1-x)} \tag{10} $$
as $x \to 1^-$, which is what we wanted to show.
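Finally, $(10)$ itself can be checked numerically (a Python sketch using a basic sieve; as with $(8)$, the convergence is only logarithmic, so the ratio drifts toward $1$ quite slowly):

```python
import math

def primes_up_to(limit):
    # basic sieve of Eratosthenes
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [n for n in range(limit + 1) if sieve[n]]

def prime_sum(eps, limit=300_000):
    # sum_p x^p at x = 1 - eps; terms with p >> 1/eps are negligible,
    # so a finite sieve suffices for eps = 1e-4
    lx = math.log1p(-eps)
    return sum(math.exp(p * lx) for p in primes_up_to(limit))

eps = 1e-4
predicted = 1.0 / (eps * -math.log(eps))  # 1/((x-1) log(1-x)) at x = 1 - eps
print(prime_sum(eps) / predicted)  # slowly tends to 1, per (10)
```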