Intuitive explanation for why $\left(1-\frac1n\right)^n \to \frac1e$


I am aware that $e$, the base of natural logarithms, can be defined as:

$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$

Recently, I found out that

$$\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^n = e^{-1}$$

How does that work? Surely the minus sign makes no difference, as when $n$ is large, $\frac{1}{n}$ is very small?

I'm not asking for just any rigorous method of proving this. I've been told one: as $n$ goes to infinity, $\left(1+\frac{1}{n}\right)^n\left(1-\frac{1}{n}\right)^n \to 1$, so the latter limit must be the reciprocal of $e$. However, I still don't understand why changing such a tiny component of the limit changes the output so drastically. Does anyone have a remotely intuitive explanation of this concept?
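Before the answers, a quick numerical check (a Python sketch of my own, not part of the original question) makes the claim concrete: the two sequences really do settle near $e$ and $1/e$ rather than meeting in the middle.

```python
import math

# Compare (1 + 1/n)^n and (1 - 1/n)^n for growing n.
for n in (10, 100, 10_000, 1_000_000):
    plus = (1 + 1 / n) ** n
    minus = (1 - 1 / n) ** n
    print(f"n={n:>9}: (1+1/n)^n = {plus:.6f}   (1-1/n)^n = {minus:.6f}")

# Reference values for comparison.
print(f"e = {math.e:.6f}, 1/e = {1 / math.e:.6f}")
```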


---

Accepted answer:

Perhaps think about the binomial expansions of $\left(1 + \frac{1}{n}\right)^n$ and $\left(1 - \frac{1}{n}\right)^n$. The first two terms are $1 + n \frac{1}{n}$ and $1 - n \frac{1}{n}$ respectively. And after that the terms in $\left(1 + \frac{1}{n}\right)^n$ are all positive, whereas the terms in $\left(1 - \frac{1}{n}\right)^n$ alternate. So the difference between the two limits is going to be at least 2.
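A small Python check of these expansions (my addition, not part of the answer): term $k$ of $(1\pm\frac1n)^n$ is $\binom{n}{k}(\pm\frac1n)^k$, so the magnitudes agree while the signs alternate in the minus case, and the full sums indeed differ by more than $2$.

```python
from math import comb

n = 1000

def term(k, s):
    """k-th binomial term of (1 + s/n)^n, with s = +1 or -1."""
    return comb(n, k) * (s / n) ** k

plus_terms = [term(k, +1) for k in range(6)]   # all positive: 1, 1, ~1/2!, ~1/3!, ...
minus_terms = [term(k, -1) for k in range(6)]  # same sizes, alternating signs
print(plus_terms)
print(minus_terms)

# The full sums differ by more than 2, as the answer predicts.
print(sum(term(k, +1) for k in range(n + 1)) - sum(term(k, -1) for k in range(n + 1)))
```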

---

The point is that $1-\frac{1}{n}$ is less than $1$, so raising it to a large power will make it even less-er than $1$. On the other hand, $1+\frac{1}{n}$ is bigger than $1$, so raising it to a large power will make it even bigger than $1$.


There's been some brouhaha in the comments about this answer. I should probably add that $(1-\epsilon(n))^n$ could go to any value less than or equal to $1$, and in particular it could go to $1$, as $n$ increases. It so happens that in this example, it goes to something less than $1$. The reason it goes to something less than $1$ is because we end up raising something sufficiently less than $1$ to a sufficiently high power.

---

Let me try. Consider $$A=\left(1+\frac{a}{n}\right)^n$$ Take logarithms: $$\log(A)=n\log\left(1+\frac a n\right)$$ Now, when $x$ is small, by Taylor, $$\log(1+x)=x-\frac{x^2}{2}+O\left(x^3\right)$$ Replace $x$ by $\frac{a}{n}$. This makes $$\log(A)=n \left(\frac{a}{n}-\frac{a^2}{2 n^2}+O\left(\frac{1}{n^3}\right)\right)=a-\frac{a^2}{2 n}+O\left(\frac{1}{n^2}\right)$$ Now, $$A=e^{\log(A)}=e^a-\frac{a^2 e^a}{2 n}+O\left(\frac{1}{n^2}\right)$$ Now, play with $a$.

Hoping that this makes things clearer to you.
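To see this expansion numerically (a sketch I'm adding, not part of the answer), compare $(1+a/n)^n$ with the two-term approximation $e^a-\frac{a^2 e^a}{2n}$ at $a=-1$:

```python
import math

def A(a, n):
    """The finite-n quantity (1 + a/n)^n from the answer above."""
    return (1 + a / n) ** n

a, n = -1.0, 10_000                 # a = -1 recovers (1 - 1/n)^n
exact = A(a, n)
approx = math.exp(a) - a**2 * math.exp(a) / (2 * n)

print(exact)                        # close to 1/e, but visibly below it
print(approx)                       # two-term expansion matches to O(1/n^2)
print(abs(exact - approx))          # far smaller than the 1/(2n) correction itself
```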

---

Intuitively,

$$1-\frac1n\approx\frac1{1+\dfrac1n}.$$

For example,

$$0.99999=\frac1{1.000010000100001\cdots}\approx \frac1{1.00001}$$ so that

$$0.99999^{100000}=0.36787760177\dots=\frac1{2.7182954100\cdots}\\ \approx \frac1{1.00001^{100000}}=\frac1{2.7182682371\cdots}$$


More rigorously,

$$\left(1+\frac1n\right)^n\left(1-\frac1n\right)^n=\left(1-\frac1{n^2}\right)^n=\sqrt[n]{\left(1-\frac1{n^2}\right)^{n^2}}.$$

As the expression under the radical goes to a finite value, the $n^{th}$ root goes to one.

You can also use the binomial formula,

$$\left(1-\frac1{n^2}\right)^n=1-\frac n{n^2}+\frac{(n)_2}{2n^4}-\frac{(n)_3}{3!n^6}\cdots\to1$$ ($(n)_k$ is the falling factorial).
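Numerically (my addition), $\left(1-\frac1{n^2}\right)^n$ does crawl back to $1$ even though $\left(1-\frac1{n^2}\right)^{n^2}$ heads to $e^{-1}$:

```python
import math

for n in (10, 100, 1000):
    inner = (1 - 1 / n**2) ** (n**2)   # tends to 1/e
    outer = (1 - 1 / n**2) ** n        # its n-th root, tends to 1
    print(n, inner, outer)

print(1 / math.e)  # reference value for the inner limit
```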

---

Let me offer you the synopsis of a proof: (you can find the full proof in Apostol)

First of all you need to show that for any $a \in \mathbb{R}$, the sequence of the form $\left(1+\frac{a}{n}\right)^n$ converges to a number, say $G(a)$.

Next you show that the function $G(a)$ is of the form $p^a$ where $p$ is some fixed number.

To find $p$ all you have to do is find $G(1)$ which is the limit of the sequence $(1+\frac{1}{n})^n$.

---

The true issue is not why changing the sign has such an impact, it is why adding such a small quantity as $\dfrac1n$ drastically changes the result.

$$1^n\to1\text{ vs. }\left(1+\frac1n\right)^n\to e$$

(and very similarly $\left(1-\frac1n\right)^n\to e^{-1}$.)

The reason is that the tiny quantity gets multiplied over and over so that it becomes a finite quantity,

$$\left(1+\frac1n\right)\left(1+\frac1n\right)\cdots\left(1+\frac1n\right)=1+\frac1n+\frac1n+\cdots+\frac1n+\cdots>2$$ as there are $n$ terms $\dfrac1n$ (plus yet other, positive terms). The "tininess" of the terms is well compensated by their number.

Also notice that the "asymmetry" shown by $e-1\ne 1-e^{-1}$ is just due to the non-linearity of the exponential.

---

Actually you have the stronger true statement that $$ \lim_{x\to0}(1+x)^{1/x}=e, $$ of which the initial limit you stated is a special case, approaching $0$ through the sequence of values $x=\frac1n$ for $n\in\Bbb N_{>0}$. But if you approach $0$ through the sequence of values $x=-\frac1n$ for $n\in\Bbb N_{>1}$, the same limit gives you $$ \lim_{n\to\infty}\left(1-\frac1n\right)^{-n}=e. $$ Now it is a simple matter to see that the sequence of inverses $\left(1-\frac1n\right)^n$ tends to the inverse value $e^{-1}$.

It should be noted that while the first limit above is more general than the limits for $n\to\infty$, it is also less elementary to define, since it involves powers of positive real numbers with arbitrary real exponents. Introducing such powers requires studying exponential functions in the first place, which is why the limit statement with integer exponents is often preferred. But the more general limit statement is true, and can serve to give intuition for the relation between the two limits in your question.
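A numeric illustration (mine, not part of this answer) of $(1+x)^{1/x}\to e$ from both sides of $0$:

```python
import math

# Approach 0 from above (x = 1/n) and from below (x = -1/n).
for x in (0.1, 0.001, -0.001, -0.1):
    print(f"x={x:+.3f}: (1+x)^(1/x) = {(1 + x) ** (1 / x):.6f}")

print(f"e = {math.e:.6f}")
# With x = -1/n this is (1 - 1/n)^(-n), whose reciprocal (1 - 1/n)^n tends to 1/e.
```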

---

Logarithms were invented (discovered?) by John Napier before there was calculus and before a generalized theory of exponents. It was found that you can compute approximate logs to a base very close to $1$, for example by repeated squaring and other short-cuts. For example, if $b=1.000001$ then $b^x$ is about $2$ when $x=693147$, so $\log_{1.000001} 2$ is about $693147$. The motivation for logs was calculation: replacing $\times$ with $+$ by using tables of logs and anti-logs.

Logs to base $1+1/n$ could be "normalized" by dividing them by $n.$ (So the normalized $\log 2$ is always about $0.693147$.) The number $e=2.71828\ldots$ kept showing up as the approximate "normalized" anti-log of $1$ in base $1+1/n$ for any large $n$, which is because $2.71828\ldots=\lim_{n\to \infty}(1+1/n)^n.$
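A quick Python rendering of this "normalization" (my sketch, with illustrative variable names): take logs in base $b=1+1/n$ and divide by $n$.

```python
import math

n = 1_000_000
b = 1 + 1 / n                 # a base very close to 1, Napier-style
x = math.log(2, b)            # exponent with b**x = 2; about 693147 here
print(x)
print(x / n)                  # "normalized" log of 2, close to ln 2
print(math.log(2))            # ln 2 = 0.693147... for comparison
```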

It was found that if $f(x)=\int_1^x (1/t)\;dt$ for $x>0,$ then $f(a b)=f(a)+f(b),$ that is, $f$ is a logarithm. And that its base $b$, which satisfies $1=\log_b b=\int_1^b (1/t)\;dt$ is that same number, so we could take the def'n of e as the solution $x$ to $f(x)=1.$

We can take any other equation or formula that has $e$ for its unique solution as the def'n of $e$.

(But defining it as the unique $x>1$ such that $\int_{-\infty}^{\infty} x^{-t^2}\;dt=\sqrt \pi$ is not advisable even though it's a true equation.)

---

Here's a useful generalization of the limit definition of $e$ from the OP:

Given

$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$

Raise both sides to the power of $x$:

$$e^x = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}$$

This is trivially true when $x = 0$, as both sides evaluate to $1$.

Assume $x \ne 0$ and let $m = nx$, i.e., $n = \frac{m}{x}$

As $n\to\infty, \, m\to\infty$

$$e^x = \lim_{m\to\infty}\left(1+\frac{x}{m}\right)^{m}$$

[Note the similarity between this and the first limit in Marc van Leeuwen's answer].

In particular, for $x = -1$

$$e^{-1} = \lim_{m\to\infty}\left(1+\frac{-1}{m}\right)^{m}$$

or

$$e^{-1} = \lim_{m\to\infty}\left(1-\frac{1}{m}\right)^{m}$$


As mathmandan notes in the comments, my derivation is flawed when $x < 0$, since then $n\to\infty \implies m\to -\infty$ :oops:

I'll try to justify my result for negative $x$ without relying on the fact that $e^x$ is an entire function and that there is only a single infinity in the (extended) complex plane.

For any finite $u, v \ge 0$, we have

$$e^u = \lim_{n\to\infty}\left(1+\frac{u}{n}\right)^{n}$$

and

$$e^v = \lim_{n\to\infty}\left(1+\frac{v}{n}\right)^{n}$$

Therefore,

$$e^{u-v} = \lim_{n\to\infty}\left(\frac{1+\frac{u}{n}}{1+\frac{v}{n}}\right)^{n}$$

Let $m = n + v$. For any (finite) $v$ as $n\to\infty, \, m\to\infty$.

$$\begin{align}\\ \frac{1+\frac{u}{n}}{1+\frac{v}{n}} & = \frac{n + u}{n + v}\\ & = \frac{m + u - v}{m}\\ & = 1 + \frac{u - v}{m}\\ \end{align}$$

Thus $$\begin{align}\\ e^{u-v} & = \lim_{n\to\infty}\left(1+\frac{u - v}{m}\right)^{n}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{m-v}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{-v}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m\\ \end{align}$$

since

$$\lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{-v} = 1$$

In other words,

$$e^{u-v} = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m$$

is valid for any finite $u, v \ge 0$. And since we can write any finite $x$ as $u-v$ with $u, v \ge 0$, we have shown that

$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{n}$$

is valid for any finite $x$, so

$$e^{-x} = \lim_{n\to\infty}\left(1+\frac{-x}{n}\right)^{n}$$
And hence $$e^{-x} = \lim_{n\to\infty}\left(1-\frac{x}{n}\right)^{n}$$
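The end result, $e^x=\lim_{n\to\infty}(1+x/n)^n$ for any finite $x$, can be spot-checked numerically (a sketch I'm adding):

```python
import math

def approx_exp(x, n=1_000_000):
    """Finite-n version of the limit e**x = lim (1 + x/n)**n."""
    return (1 + x / n) ** n

# Positive and negative x alike agree with exp(x) up to O(1/n).
for x in (1.0, -1.0, 2.5, -2.5):
    print(f"x={x:+.1f}: (1+x/n)^n = {approx_exp(x):.8f}   e^x = {math.exp(x):.8f}")
```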

---

If you take $(1-1/n)^n$, the result is obviously less than $1$ for every $n$, so the limit certainly cannot exceed $1$.

If you take $(1+1/n)^n$, your argument "1/n gets smaller and smaller" still applies. So if that sequence has a limit of e ≈ 2.718 which is greater than 1, then it is a priori unreasonable to argue "-1/n gets smaller and smaller" as evidence that this sequence cannot have a limit significantly less than 1.

---

Consider that $e \times \frac{1}{e} = 1$. In our case, the $\frac{1}{n^2}$ term is too small to matter:

$$ \lim_{n \to \infty} \left( 1 - \frac{1}{n} \right)^n \cdot \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n = \lim_{n \to \infty} \left( 1 - \frac{1}{n^2} \right)^n = 1$$

---

Going off from cactus314's answer,

$$\lim_{n\to\infty}\left(1+\frac1n\right)^n\left(1-\frac1n\right)^n=\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n=1$$

So we really only have to prove the right side:

$$\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n=\lim_{n\to\infty}\left(\left(1-\frac1{n^2}\right)^{n^2}\right)^{1/n}$$

Since $\left(1-\frac1{n^2}\right)^{n^2}\to e^{-1}$, this behaves like

$$\lim_{n\to\infty}e^{-1/n}=e^0=1$$

---

The first definition of $e$ is $$ \lim_{n \to \infty} \left(1 + \frac{1}{n} \right)^{n} = e^1 $$ which is basically just answering the question of what happens when you pass from discrete compounded growth by 100% to continuous growth. Note that $e>2$, i.e. this limit of discrete compounded growth asymptotically approaches a value greater than the one we would have arrived at with the initial rate of growth. This means that even though we're chipping away at the rate we grow by with every step of compounding, the aggregate effect is more growth. Also, I just want to note that $e$ is the universal constant of continuous growth at a given rate: evaluating $e^{rt}$ gives the effect of continuously growing at a rate $r$ for $t$ units of time.

If we instead take the limit of discrete compounded decay rather than growth, $$ \lim_{n \to \infty} \left( 1- \frac{1}{n} \right)^n = e^{-1} $$ we see the opposite. Going from discrete compounded decay to instantaneous decay lessens the amount we decay by at every step of compounding, and the result asymptotically approaches a value $1/e > 0$, well above the $0$ we would have reached with a single step of decay at a rate of 100%.
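In compound-interest terms (a sketch I'm adding, with a hypothetical `compound` helper), this is compounding a 100% rate in $n$ steps, with the rate positive for growth and negative for decay:

```python
import math

def compound(rate, n):
    """Grow (or decay) by `rate` total, compounded in n equal steps."""
    return (1 + rate / n) ** n

for n in (1, 12, 365, 100_000):
    print(f"n={n:>6}: growth {compound(1.0, n):.5f}   decay {compound(-1.0, n):.5f}")

# Growth climbs from 2 toward e; decay rises from 0 toward 1/e, not toward 0.
print(math.e, 1 / math.e)
```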

Don't know if this helps at all.

---

If you know that $$\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^{n} = e\tag{1}$$ (and some books / authors prefer to define symbol $e$ via above equation) then it is a matter of simple algebra of limits to show that $$\lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n} = \frac{1}{e}\tag{2}$$ Clearly we have \begin{align} L &= \lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n}\notag\\ &= \lim_{n \to \infty}\left(\frac{n - 1}{n}\right)^{n}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(\dfrac{n}{n - 1}\right)^{n}}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(1 + \dfrac{1}{n - 1}\right)^{n}}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(1 + \dfrac{1}{n - 1}\right)^{n - 1}\cdot\dfrac{n}{n - 1}}\notag\\ &= \frac{1}{e\cdot 1}\notag\\ &= \frac{1}{e} \end{align} Using similar algebraic simplification it is possible to prove that $$\lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n} = e^{x}\tag{3}$$ where $x$ is a rational number. For irrational/complex values of $x$ the relation $(3)$ holds, but it is not possible to establish it just using algebra of limits and equation $(1)$.

Regarding the intuition about "changing a tiny component in a limit expression changes the output", I think it is better to visualize this simple example. We have $$\lim_{n \to \infty}n^{2}\cdot\frac{1}{n^{2}} = 1$$ and if we change the second factor $1/n^{2}$ to $(1/n^{2} + 1/n)$ then we have $$\lim_{n \to \infty}n^{2}\left(\frac{1}{n^{2}} + \frac{1}{n}\right) = \lim_{n \to \infty} (1 + n) = \infty$$ The reason is very simple. The change of $1/n$ which you see here is small, but due to the multiplication with the other factor $n^{2}$ its impact is magnified significantly, resulting in an infinite limit. You always calculate the limit of the full expression (and only when you are lucky can you evaluate the limit of a complicated expression in terms of limits of its sub-expressions via the algebra of limits), and any change in a sub-expression may or may not impact the whole expression significantly, depending on the other parts of the expression.
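That magnification effect is easy to tabulate (my sketch, not part of the answer):

```python
for n in (10, 100, 1000):
    exact = n**2 * (1 / n**2)                 # the unchanged product: stays at 1
    perturbed = n**2 * (1 / n**2 + 1 / n)     # equals 1 + n, so it diverges
    print(n, exact, perturbed)
```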

---

$$(1-\frac{1}{n})^n=(\frac{n-1}{n})^n=(\frac{n}{n-1})^{-n}=(\frac{n-1+1}{n-1})^{-n}$$ $$=(1+\frac{1}{n-1})^{-n}=\frac{1}{(1+\frac{1}{n-1})^{n}}=\frac{1}{(1+\frac{1}{n-1})^{n-1}\cdot(1+\frac{1}{n-1})}$$

Now take the limit

$$\lim_{n\to \infty}(1-\frac{1}{n})^n=\lim_{n\to \infty}\frac{1}{(1+\frac{1}{n-1})^{n-1}\cdot(1+\frac{1}{n-1})}=\lim_{n\to \infty}\frac{1}{(1+\frac{1}{n-1})^{n-1}}\cdot\lim_{n\to \infty}\frac{1}{1+\frac{1}{n-1}}=\frac{1}{e}$$