I have a problem that I do not conceptually understand. I need to approximate the function
$$\frac{1}{D-h(x)}$$
where $h(x)$ is arbitrary, $h(x)\ll D$, and $D$ is a constant. Friends tell me that it is simply
$$\frac{1}{D-h(x)}\approx \frac{1}{D} \left(1+\frac{h(x)}{D}\right)$$
up to the first-order term, which is basically a power series. I can see that this works for the specific case $h(x)=x$, where the power series expansion is
$$\frac{1}{D-x}\approx \frac{1}{D} \left(1+\frac{x}{D}\right),$$
but I do not see how we can just replace $x$ by $h(x)$.
Also, it seems that it is not possible to obtain the expansion via a Taylor series when $h(x)$ is used. Why would a power series work but a Taylor series not?
Here it suffices to consider the geometric series \begin{align*} \frac{1}{1-t}=1+t+t^2+\cdots\qquad\qquad|t|<1, \end{align*} which converges for $|t|<1$.
Since $h(x)\ll D$, we have $\left|\frac{h(x)}{D}\right|<1$ at every point $x$, so we may substitute $t=\frac{h(x)}{D}$: \begin{align*} \frac{1}{D-h(x)}=\frac{1}{D}\cdot\frac{1}{1-\frac{h(x)}{D}}=\frac{1}{D}\left(1+\frac{h(x)}{D}+\left(\frac{h(x)}{D}\right)^2+\cdots\right)\approx\frac{1}{D}\left(1+\frac{h(x)}{D}\right). \end{align*}
The only crucial aspect here is the convergence criterion $\left|\frac{h(x)}{D}\right|<1$, which is what enables the series expansion: the substitution is a pointwise statement about the value $h(x)$, so it does not matter what kind of function $h$ is.
Note that we do not require any analytical properties of $h(x)$ in order to deduce this approximation. But if $h(x)$ is sufficiently often differentiable at $x=0$, we could also obtain it via a Taylor expansion.
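As a quick numerical sanity check, here is a small Python sketch comparing the exact value with the first-order approximation. The choices $h(x)=\sin x$ and $D=10$ are purely illustrative assumptions (any $h$ with $|h(x)|\ll D$ would do); the error should shrink roughly like $(h(x)/D)^2$.

```python
import math

D = 10.0  # illustrative constant with |h(x)| << D


def h(x):
    # arbitrary illustrative choice; only |h(x)| << D matters
    return math.sin(x)


def exact(x):
    return 1.0 / (D - h(x))


def first_order(x):
    # (1/D) * (1 + h(x)/D), the truncated geometric series
    return (1.0 / D) * (1.0 + h(x) / D)


for x in [0.0, 0.5, 1.0, 2.0]:
    e, a = exact(x), first_order(x)
    print(f"x={x:4.1f}  exact={e:.6f}  approx={a:.6f}  error={abs(e - a):.2e}")
```

Since $|h(x)/D|\le 0.1$ here, the neglected terms are of order $10^{-3}$ or smaller, which is what the printed errors reflect.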