Consider $\lim_{x\rightarrow\infty}(1+\frac1x)^{x}=e$. This limit is easy to evaluate with standard calculus techniques; however, approximating it directly with numerical code can be difficult because of limited machine precision. For example, here is a graph from Octave:

Near the right edge of the graph, before machine precision kills the computation completely, we get $\left(1+\frac{1}{{9(10)^{15}}}\right)^{9(10)^{15}}=7.37725371726828$, which is clearly incorrect. This stems from the rounding of $1+\frac{1}{{9(10)^{15}}}$ to $\mathtt{3ff0000000000001}_{16}=1+2^{-52}$ (one plus machine epsilon), which is itself inexact.
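For concreteness, here is a minimal Octave sketch (my own, not from the original post) that reproduces the quoted value and the hex pattern:

```octave
x = 9e15;
y = (1 + 1/x)^x        % roughly 7.377..., far from e
num2hex(1 + 1/x)       % 3ff0000000000001, i.e. 1 + 2^-52
```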
Is there a way to get an approximate value of $e$ out of this formula numerically, say to more than 8 or so decimal places (in double precision float)? The accuracy peaks around $x=10^8$ with $(1+1/x)^x-e\approx -3(10)^{-8}$. Are there general numerical techniques for evaluating this or similar limits where large and small numbers are combined?
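To check the claimed sweet spot directly (a quick sketch under the same setup):

```octave
x = 1e8;
(1 + 1/x)^x - exp(1)   % approximately -3e-8, the best the direct formula achieves
```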
Yes. Richardson's techniques can be applied, and we can almost recover the double precision representation of $e$.
Let $$\phi(n) = (1 + 1/n)^n$$ and compute $$a_k = \phi(2^k), \quad k=1,2,3,\dotsc.$$ Here are the first 30 values of $a_k$ and some auxiliary numbers.
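A sketch of this computation in Octave (my own code, assuming plain double precision arithmetic; the bound of 30 matches the table):

```octave
phi = @(n) (1 + 1 ./ n) .^ n;   % phi(n) = (1 + 1/n)^n
k = (1:30)';
a = phi(2 .^ k);                % a(k) = phi(2^k)
```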
The second column contains $a_k$, i.e., the approximation of $e$. The third column contains Richardson's fraction, i.e., $$f_k = \frac{a_{k-1} - a_{k-2}}{a_{k}-a_{k-1}}.$$ In exact arithmetic it would converge to $2$ from below, and the convergence would be monotonic for sufficiently large $k$, with $2-f_k = O(2^{-k})$. This pattern is observed until $k=26$ and is clearly broken for $k=27$. The fourth column contains Richardson's error estimate, i.e., $$ e_k = a_k - a_{k-1}. $$ We can improve the accuracy of our approximation $a_k$ by adding the error estimate, i.e., $$ b_k = a_{k+1} + e_{k+1}, \quad k=1,2,3,\dotsc $$ This gives us the numbers
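In Octave, the fractions, error estimates, and improved values above might be computed as follows (a sketch; with my indexing, f(j) corresponds to $f_{j+2}$ and b(j) to $b_j$):

```octave
f = (a(2:end-1) - a(1:end-2)) ./ (a(3:end) - a(2:end-1));  % Richardson's fractions, -> 2
eest = diff(a);                 % eest(j) = a(j+1) - a(j), i.e. e_{j+1}
b = a(2:end) + eest;            % b(j) = a(j+1) + e_{j+1} = b_j
```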
In exact arithmetic, Richardson's fraction (now formed from the $b_k$) would converge toward $4$ from below, and monotonically so for sufficiently large $k$. This pattern is observed for the computed values until $k=17$; it is broken at $k=18$. This does not imply that the error estimates, i.e., $$ e_k' = \frac{b_k - b_{k-1}}{3},$$ are unreliable for $k>17$, but the accuracy of the error estimates suffers as $k$ is increased.
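The same construction, applied one level up (a sketch; the second-level fraction is formed from the $b_k$ exactly as $f_k$ was formed from the $a_k$):

```octave
f2 = (b(2:end-1) - b(1:end-2)) ./ (b(3:end) - b(2:end-1));  % second-level fractions, -> 4
ep = diff(b) / 3;               % ep(j) = (b(j+1) - b(j))/3, i.e. e'_{j+1}
```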
What can we do? We are lucky: $$ b_{17} + e_{17}' \approx 2.718281828459045$$ differs from the floating point representation of $$e \approx 2.718281828459046$$ by one unit in the last place. It is worth noting that these numbers required only $a_k = \phi(2^k)$ for $k \leq 18$.
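Continuing the sketch above:

```octave
ep17 = (b(17) - b(16)) / 3;     % e'_17 as defined above
b(17) + ep17                    % approx 2.718281828459045
exp(1)                          % the double precision value of e
```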
Underlying the use of Richardson's techniques is the existence of an asymptotic error expansion of the form $$ e - \phi(n) = c_1 n^{-1} + c_2 n^{-2} + \dotsc $$ The properties of Richardson's fractions and estimates flow directly from this expansion. When adding Richardson's error estimate to the current approximation you typically reduce the error, but you lose control of the error unless you compute a new error estimate. You gradually lose the ability to do this as the order of the approximation increases and you run out of bits.
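For this particular $\phi$, the expansion can be made explicit by expanding the logarithm (a standard computation, not taken from the post): $$ \phi(n) = e^{n\ln(1+1/n)} = \exp\!\Bigl(1 - \frac{1}{2n} + \frac{1}{3n^2} - \dotsb\Bigr) = e\Bigl(1 - \frac{1}{2n} + \frac{11}{24n^2} - \dotsb\Bigr), $$ so $c_1 = e/2$. With $n = 2^k$ the leading error term halves at each step, which is why $f_k \to 2$, and once that term is eliminated the remaining error is $O(n^{-2})$, which is why the second-level fraction tends to $4$.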