Values of $\left (1+\frac{1}{n}\right )^n$ on a calculator


I was reading a calculus book, and at some point the following standard limit was mentioned: $$\lim_{n\to\infty}\left(1+\frac1n\right)^n=e$$ Afterwards, the author of this book invited the reader to try and compute, using a calculator, the value of the expression $$\left(1+\frac1n\right)^n$$ for various inputs. He then stated that the outputs would look something like this: $$\begin{array}{c|c} n&(1+1/n)^n\\ \hline 10^3&2.7169239\\ 10^4&2.7181459\\ \dots&\dots\\ 10^{11}&1\\ 10^{12}&1 \end{array}$$ The author then invites the reader to try and work out why this happens.

I thought this was an interesting question, and one that shows how calculus handles concepts that wouldn't normally be quantifiable or measurable in the physical world.

My theory is that, since any calculator can store values only up to a certain precision, beyond some point the expression $1/n$ simply gets rounded down to $0$: there are not enough bits to store the floating-point value.

The whole expression is then evaluated as $1^n$, which is identically equal to $1$. It doesn't matter if eventually the calculator can't keep up with the value of the exponent either and rounds it to something else as well, because the value of the power will be $1$ regardless of what the exponent ends up being.
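For what it's worth, the two possible failure points can be separated on any machine with IEEE-754 doubles (a Python sketch; a calculator uses a similar, if shorter, format):

```python
n = 10.0 ** 12

# Hypothesis 1: is 1/n itself flushed to zero?
print(1.0 / n == 0.0)            # False: 1e-12 is easily representable

# Hypothesis 2: does adding 1 swallow the small term?
print(1.0 + 1.0 / n == 1.0)      # False here, since 1e-12 is still above machine epsilon
print(1.0 + 10.0 ** -17 == 1.0)  # True: 1e-17 is absorbed by the 1
```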

Any other theories?

There are 4 answers below.

Best answer:

Not exactly.

$\dfrac1n$ can be represented in floating-point for huge values of $n$; $\dfrac1n=10^{-12}$ is not at all a problem. This is because the float uses an exponent, so that its order of magnitude can vary wildly.

The trouble comes from the addition of $1$, which is comparatively large, and the sum is one followed by many zeroes (like $\color{green}{1.0000000000}\color{red}{01}$). As the number of significant digits in the representation is limited, sooner or later only the zeroes remain. Even before that, the accuracy that you get is poor because many significant figures are dropped in the binary representation.
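A quick illustration of this absorption in IEEE-754 double precision (Python here, but the point is language-independent):

```python
import sys

print(sys.float_info.epsilon)   # ~2.22e-16: the gap between 1.0 and the next double
print(1.0 + 1e-12)              # still distinguishable from 1.0
print(1.0 + 1e-17 == 1.0)       # True: below half an ulp of 1.0, the term vanishes
print((1.0 + 1e-12) - 1.0)      # slightly off from 1e-12: low-order digits were dropped
```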


Note that several programming languages have a special function to deal with this particular case of loss of accuracy, usually denoted $\text{log1p}$. It computes the value of

$$\log(x+1)$$ accurately for small $x$, i.e. without suffering from the rounding problem. You would use it as follows:

$$\left(1+\frac1n\right)^n=e^{n\log(1+1/n)}=e^{n\text{ log1p}\left(1/n\right)}.$$

Note that by Taylor

$$n\text{ log1p}\left(\frac1n\right)=1-\frac1{2n}+\frac1{3n^2}-\cdots$$

so that for all large $n$ we will obtain $e^1$ (even when there is rounding in the computation of $1-\dfrac1{2n}$). Compare the values with $\log$ and $\text{log1p}$:

$$\begin{array}{c|c|c} n & \text{with }\log & \text{with log1p}\\ \hline 10^{1} & 2.5937424601000023 & 2.5937424601 \\ 10^{2} & 2.7048138294215285 & 2.7048138294215263 \\ 10^{3} & 2.7169239322355936 & 2.7169239322358925 \\ 10^{4} & 2.7181459268249255 & 2.718145926825225 \\ 10^{5} & 2.7182682371922975 & 2.7182682371744895 \\ 10^{6} & 2.7182804690957534 & 2.7182804693193767 \\ 10^{7} & 2.7182816941320818 & 2.718281692544966 \\ 10^{8} & 2.7182817983473577 & 2.7182818148676366 \\ 10^{9} & 2.7182820520115603 & 2.7182818270999043 \\ 10^{10} & 2.7182820532347876 & 2.7182818283231316 \\ 10^{11} & 2.7182820533571100 & 2.7182818284454537 \\ 10^{12} & 2.7185234960372378 & 2.718281828457686 \\ 10^{13} & 2.7161100340869010 & 2.718281828458909 \\ 10^{14} & 2.7161100340870230 & 2.7182818284590318 \\ 10^{15} & 3.0350352065492620 & 2.718281828459044 \\ 10^{16} & 1.0 & 2.718281828459045 \\ 10^{17} & 1.0 & 2.718281828459045 \\ 10^{18} & 1.0 & 2.718281828459045 \\ 10^{19} & 1.0 & 2.718281828459045 \\ 10^{20} & 1.0 & 2.718281828459045 \end{array}$$
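Both columns of values can be reproduced in a few lines of Python (double precision; the naive computation on the left, the log1p-based one on the right):

```python
import math

for k in range(1, 21):
    n = 10.0 ** k
    naive = (1.0 + 1.0 / n) ** n                # loses significant digits as n grows
    stable = math.exp(n * math.log1p(1.0 / n))  # accurate for all n
    print(f"n = 10^{k:<2}  {naive:.16f}  {stable:.16f}")
```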


Final comment:

The numerical computation of limits (virtually at the core of all numerical computation) suffers from two limitations:

  • you cannot take $n$ as large as you want, because of the workload involved; the error due to stopping at a finite $n$ is called the truncation error;

  • you have to limit the accuracy of each individual computation for practical reasons; the resulting error is the rounding (or roundoff) error.

Usually, the design of a numerical method is a tradeoff between the two.
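The tradeoff is visible in this very example: the error of the naive computation first shrinks as $n$ grows (truncation error dominating), then grows again (rounding error dominating). A small probe:

```python
import math

# absolute error of the naive (1 + 1/n)^n at n = 10^2, 10^8, 10^14
errors = {k: abs((1.0 + 10.0 ** -k) ** 10.0 ** k - math.e) for k in (2, 8, 14)}
# the error falls from k=2 to k=8 (truncation error shrinking),
# then rises again by k=14 (rounding error taking over)
print(errors)
```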

Another answer:

Your theory is essentially the right one: every real machine has finite precision and cannot represent numbers sufficiently close to $1$, such as $1+1/n$ for very large $n$. There is no other plausible explanation for why the computed expression suddenly drops to $1$.

Another answer:

Addendum to Parcly Taxel's answer. Observe, in the figures below, what happens when we try to compute $(1+1/n)^n$ when the base, $1+1/n$, is rounded. The black line shows the true value of $(1+1/n)^n$, while the orange, green and red curves show what happens when $1+1/n$ is truncated to $1$, $2$ and $3$ decimals respectively. For small $n$ the curves are all approximately equal, but as $n$ grows the errors accrue, until each rounded curve becomes identically $1$. For truncation to $a$ decimals this occurs at $n=\underbrace{100\ldots01}_{(a-1)\ \text{zeros}}$, since from that point onwards the granularity of the rounding leaves no distinction between $1+1/n$ and $1$.

[Figure: $(1+1/n)^n$ (black) versus the same expression with the base truncated to 1, 2 and 3 decimals]

The red curve also eventually becomes constant.

[Figure: the same curves over a larger range of $n$]

We can find the curves by expressing the decimal expansion of any number $x$, truncated to $a$ decimals, with the floor function as $\dfrac{\left\lfloor 10^{a}x\right\rfloor}{10^{a}}$. The temporary upward 'steps' that we see in the rounded curves are a tangential observation; they occur because, as $n$ increases and $1+1/n$ decreases, it runs past the rounded decimals, e.g. $\left(\ldots,\ 1.05,\ 1.04,\ 1.03,\ 1.02,\ \ldots\right)$. As $1+1/n$ approaches one of these, its rounded approximation improves, then suddenly worsens once it has passed the decimal and is approaching the next one. However, once $1+1/n$ passes $1.01$, there is no further improvement, as at that granularity there is no representable value between $1.01$ and $1$.
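The truncation formula above is a one-liner to implement; a small Python sketch (using $n=8$ so that $1+1/n=1.125$ is exact in binary, avoiding an unrelated rounding issue):

```python
import math

def trunc(x, a):
    # floor(10^a * x) / 10^a : keep a decimal digits, as in the text
    return math.floor(10.0 ** a * x) / 10.0 ** a

n = 8                              # 1 + 1/8 = 1.125 is exact in binary
for a in (1, 2, 3):
    base = trunc(1.0 + 1.0 / n, a)
    print(a, base, base ** n)      # bases 1.1, 1.12, 1.125 give quite different powers

# Beyond n = 10^a + 1 the truncated base collapses to 1 (here a = 2, n = 101):
print(trunc(1.0 + 1.0 / 101, 2) ** 101)   # 1.0
```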

Another answer:

Correct. It comes from a lack of precision in the machine.

In particular, when you go to evaluate this expression on a calculator, what you'll end up doing, almost surely, is first computing the inside part, i.e.

$$\left(1 + \frac{1}{n}\right)$$

and then raising it to the $n$th power. And that's where this goes bad. The machine has a certain fixed number of digits (binary bits, in fact) allotted, simplifying a bit, to describe the fractional part of a real number. In particular, if it has $b$ fractional bits, then it cannot distinguish real numbers that are closer than $2^{-b}$ to each other. Hence if $\frac{1}{n} < 2^{-b}$, i.e. $n > 2^b$, the calculator will not be able to see the difference between

$$\left(1 + \frac{1}{n}\right)$$

and

$$1$$

and so it will "think" you are exponentiating $1$. And $1^n = 1$, no matter what $n$ is.
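For IEEE-754 doubles the significand holds 53 bits, so the crossover happens at $n = 2^{53}$, which is easy to verify (a Python sketch):

```python
b = 53                         # bits in an IEEE-754 double significand
n = 2.0 ** b
print(1.0 + 1.0 / n == 1.0)    # True: 1/n is exactly half an ulp of 1.0, and ties round to even, i.e. to 1.0
print((1.0 + 1.0 / n) ** n)    # 1.0, i.e. "1 to the n"
print(1.0 + 2.0 / n == 1.0)    # False: 2/n = 2^-52 is a full ulp, so the sum is representable
```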