I'm struggling to understand a computer science article which uses the approximation, $1-x=e^{-x+O(x^{2})}$. I don't understand where they get this approximation from. To see what they might have done, I tried using:
$$\frac{1}{1-x}=1+x+x^{2}+x^{3}+...$$
And integrating both sides to get
$$-\ln(1-x)=x+\frac{x^{2}}{2}+\frac{x^{3}}{3}+...$$
I guess this must be equal to $x-O(x^{2})$, but I'm not sure how $\frac{x^{2}}{2}+\frac{x^{3}}{3}+...$ can be negative. Is it that $-O(f(x))=O(f(x))$, so it doesn't matter what the sign is? I've only seen big O notation in the context of positive numbers.
From there I guess you would just multiply by $-1$ and exponentiate:
$$1-x=e^{-x+O(x^{2})}$$
We note that
$$ {1 \over 1 - x} = 1 + x + x^2 + \cdots $$ where it is assumed that $|x| < 1$.
We consider only small values of $x$, since otherwise the geometric series does not converge.
Integrating both sides term by term (valid since the RHS is a convergent series), $$ - \ln(1 - x) = x + {x^2 \over 2} + {x^3 \over 3} + \cdots $$ which, after multiplying by $-1$ and exponentiating, yields $$ 1 - x = e^{- x - {x^2 \over 2} - {x^3 \over 3} - \cdots}. $$
As $x \rightarrow 0$, the $x^2$ term dominates the higher powers of $x$, so the entire tail ${x^2 \over 2} + {x^3 \over 3} + \cdots$ is $O(x^2)$.
Hence, using Big-O notation, $$ 1 - x = e^{-[x + O(x^2)]} = e^{-x + O(x^2)}, $$ where the last step uses the fact that $O(x^2)$ bounds only the *magnitude* of the error term, so $-O(x^2)$ and $+O(x^2)$ denote the same class of functions; the sign does not matter.
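As a quick numerical sanity check (a sketch, not part of the derivation): the claim $-\ln(1-x) = x + O(x^2)$ means the remainder $-\ln(1-x) - x$ should shrink like $x^2$, and from the series above the ratio should approach the constant $\frac{1}{2}$:

```python
import math

# Verify that -ln(1 - x) = x + O(x^2) as x -> 0:
# the remainder -ln(1 - x) - x should behave like x^2 / 2,
# so remainder / x^2 should tend to 0.5.
for x in [0.1, 0.01, 0.001]:
    remainder = -math.log(1.0 - x) - x
    print(f"x = {x:6}  remainder/x^2 = {remainder / x**2:.6f}")
```

The printed ratios drift toward $0.5$ as $x$ shrinks, consistent with the leading $\frac{x^2}{2}$ term of the series.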
I hope this answers your query!