Difference between approaching and being exactly a number


When we take a limit, we say that the quantity never equals that number but approaches it, as in $$\lim_{n\to\infty}\frac{1}{n} = 0.$$ It never reaches $0$ but becomes closer and closer to $0$.

In this case, isn't it wrong to say things like:

$$2 = 1 + \frac{1}{2} + \frac{1}{4}+\frac{1}{8} + \frac{1}{16}+\cdots $$ Or that the derivative of $\sin(x)$ is $\cos(x)$, since the limit of $$\frac{\sin(x+\Delta x)-\sin(x)}{\Delta x}$$ as $\Delta x$ approaches $0$ is never equal to $\cos(x)$ but only ever closer to it?

Is there a good article about this that I can read to understand it better?

6 Answers

---

The point is that the limit is exactly the operation that takes a sequence $x_n$ approaching $x$ and gives you $x$. That is, when we say $$\lim_{n \to \infty} x_n = x,$$ we mean that $x_n$ converges to $x$ as $n\to \infty$; this does not claim that $x_n = x$ for any $n$. In your sum, the ellipsis $\cdots$ implies a limiting process: this equation could be more formally written

$$ 2 = \lim_{N \to \infty} \sum_{n=0}^N \frac{1}{2^n}.$$

Note that none of these finite sums is equal to $2$, but they approach $2$, so we say their limit is $2$.
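To make "approach but never reach" concrete, here is a quick numeric sketch (my illustration, not part of the answer) of the partial sums of the question's series:

```python
# Partial sums of 1 + 1/2 + 1/4 + ...: each one falls short of 2,
# and the gap to 2 halves with every extra term.
partial = 0.0
for n in range(10):
    partial += 1 / 2**n              # add the term 1/2^n
    print(n, partial, 2 - partial)   # the gap is exactly 1/2^n
```

No finite partial sum equals $2$, yet the gap can be made as small as you like; the limit operation extracts the value $2$ exactly.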

---

No, the limit of $$\frac{\sin(x+\Delta x)-\sin(x)}{\Delta x}$$ as $\Delta x$ approaches $0$ is equal to $\cos(x)$; it is the difference quotient $\frac{\sin(x+\Delta x)-\sin(x)}{\Delta x}$ itself that is never equal to $\cos(x)$. Likewise, $\lim_{n\rightarrow\infty}\sum_{k=0}^n\frac{1}{2^k}=1+\frac{1}{2} + \frac{1}{4}+\frac{1}{8} +\cdots=2$, but the partial sum $s_n=\sum_{k=0}^n\frac{1}{2^k}$ is never equal to $2$.
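A short numeric sketch (an illustration I am adding, not the answerer's) shows the difference quotient closing in on $\cos(x)$ without ever equalling it:

```python
import math

x = 1.0
# For nonzero Delta x the quotient is never exactly cos(x),
# but the error shrinks as Delta x does.
for dx in (0.1, 0.01, 0.001):
    quotient = (math.sin(x + dx) - math.sin(x)) / dx
    print(dx, quotient, abs(quotient - math.cos(x)))
```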

---

Let's look at your sentence

It never reaches $0$, but becomes closer and closer to $0$.

This is imprecise, and this is at the center of your confusion. What is "it"? There is an important distinction to be made:

  • The sequence $1,\frac{1}{2},\frac{1}{3},\ldots$ never reaches $0$, but becomes closer and closer to $0$.

  • Let $L$ be the limit of the sequence $1,\frac{1}{2},\frac{1}{3},\ldots$. Then $L$ is exactly equal to $0$.
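The second bullet is exactly what the formal definition of a limit captures: for every $\epsilon>0$, the terms $1/n$ eventually stay within $\epsilon$ of $0$. A small sketch of that definition (my own illustration, with a hypothetical helper `N_for`):

```python
def N_for(eps):
    """Smallest N such that 1/n < eps for every n >= N."""
    n = 1
    while 1 / n >= eps:
        n += 1
    return n

# However small eps is, some tail of 1, 1/2, 1/3, ... lies entirely
# within eps of 0 -- this is what "the limit L equals 0" asserts.
for eps in (0.1, 0.01, 0.001):
    print(eps, N_for(eps))
```

The sequence never produces the value $0$, but the limit $L$, defined by this $\epsilon$-$N$ condition, is exactly $0$.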

---

Suppose we know that $|x-y|<\epsilon$ for every $\epsilon>0$. I claim that this forces $|x-y|$ to be zero.

Suppose $|x-y|$ is not zero but is very, very small. Then there exists a natural number $n$ such that

$$\frac{1}{10^{\,n+1}} <|x-y|< \frac{1}{10^{\,n}}.$$

Now we can choose $\epsilon \le \frac{1}{10^{\,n+1}}$, so that $\epsilon < |x-y|$, contradicting $|x-y|<\epsilon$. So, because $|x-y|<\epsilon$ holds for every $\epsilon>0$, no matter how small, we conclude $|x-y|=0 \implies x-y=0 \implies x=y$. The underlying reason is that there is no smallest positive real number: since $|x-y|$ is non-negative, assuming it is not zero leads to a contradiction, so it must be zero.
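To see the contradiction with a concrete number (a sketch of my own, using an example difference $d$):

```python
import math

d = 3e-7   # pretend |x - y| = d, a tiny but nonzero difference
# Locate d between consecutive powers of ten:
# 1/10^(n+1) < d < 1/10^n.
n = math.floor(-math.log10(d))
assert 10.0 ** -(n + 1) < d < 10.0 ** -n
# eps = 1/10^(n+1) is then a positive number smaller than d,
# contradicting "d < eps for every eps > 0".
eps = 10.0 ** -(n + 1)
print(n, eps, eps < d)
```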

This is true for limits too. When you say $\displaystyle 2=1+\frac{1}{2}+\frac{1}{4}+\cdots$, this is an exact equality. If you keep only finitely many terms of the series, you get only an approximation of $2$.

---

When $a$ approaches a number, say $a\to 5$, it means $a$ is hypothetically so close to the number that it is practically possible to treat it as $5$. The derivative measures a small change in $y$ with respect to a small change in $x$.

Here $y=\sin(x)$, and since the slope (the derivative is nothing but the slope) is $\frac{y_2-y_1}{x_2-x_1}$, we get $$\frac{dy}{dx}=\frac{\sin(x+\Delta x)-\sin(x)}{\Delta x}.$$

---

Sometime early on, we learn about binary sums: things like 3+7.

Soon after, we learn ternary sums: things like 7+13+8. What could this possibly mean? Well, we might say it means we first compute 7+13 to get 20, then 20+8 to get 28. Or maybe we'll mean we should add 13+8 to get 21, then 7+21 to get 28. Of course, we quickly learn that we get the same answer either way, no matter what the numbers are.

But one day, we face a quaternary sum: something like 1+2+3+4. What could this mean? Well, we can give the same solution: we keep applying binary sums to pairs of numbers until we get one left.

This method will continue to work with 5-ary sums, 6-ary sums, or even $n$-ary sums for any natural number $n \geq 1$. (and it is fruitful to define a 0-ary sum too, but that's another topic)
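The "apply binary sums until one number is left" recipe is just a fold; a small sketch in Python (my illustration):

```python
from functools import reduce

def binary_add(a, b):
    """The only primitive we assume: a two-argument sum."""
    return a + b

# An n-ary sum is repeated binary addition, and (as noted above)
# the grouping does not matter:
left = reduce(binary_add, [7, 13, 8])     # (7 + 13) + 8
right = binary_add(7, binary_add(13, 8))  # 7 + (13 + 8)
print(left, right)   # both are 28
```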

Then one fateful day, we see a sum like $$ 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots $$ What could this possibly mean? We might try to adopt the familiar convention that we add numbers in pairs until we have one left, but that doesn't work here. While most approaches to the problem would say that this sum has the same value as $$ \frac{3}{2} + \frac{1}{4} + \frac{1}{8} + \cdots $$ we still have infinitely many terms: we haven't gotten any closer to getting a single number.

So we need to understand an infinite sum in some other fashion. We might even be interested in there being multiple different ways to understand it, depending upon just what we are trying to use them for.

Introductory calculus gives the first rigorous definition most people encounter. We define a sequence of partial sums:

  • 1
  • $1 + \frac{1}{2}$
  • $1 + \frac{1}{2} + \frac{1}{4}$
  • $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8}$
  • $\vdots$

and then we declare that "the value of a calculus-style infinite sum" means the same thing as "the limit of this sequence".

Other summation methods are possible. Even different approaches to the calculus-style sum are possible.

For example, if we have some multi-set of positive numbers $X$, we might want to define their sum. Since we would insist that the sum of any subset of $X$ be no larger than the sum of $X$ itself, we could make the following definition:

  • $\sum X$ is defined to be the smallest real number $L$ with the property that $\sum S \leq L$ for every finite subset $S \subseteq X$.

where $\sum S$ for a finite set $S$ is defined in the usual way (repeatedly add numbers pairwise until you have one left).

It turns out that $\sum X$ is the same value as the calculus-style infinite sum I described above.
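For the geometric multiset $X = \{1, \frac12, \frac14, \ldots\}$, a numeric sketch (my own illustration, truncating the multiset) shows the finite-subset sums climbing toward $2$ without passing it:

```python
# Finite-subset sums of X = {1, 1/2, 1/4, ...}: since every term is
# positive, the largest sum over a k-element subset uses the k biggest
# elements, i.e. the first k terms.  These sums increase toward 2 but
# never exceed it, so the least upper bound -- the proposed definition
# of sum(X) -- is 2, agreeing with the calculus-style value.
subset_sums = [sum(1 / 2**i for i in range(k)) for k in range(1, 12)]
print(subset_sums[-1], max(subset_sums) <= 2)
```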


Another approach is somewhat more algebraic in nature. We don't care what an infinite sum is, so long as it satisfies some 'obvious' properties: e.g.

$$ 1 + \frac{1}{2} + \frac{1}{4} + \cdots = 1 + \left( \frac{1}{2} + \frac{1}{4} + \cdots \right) $$ $$ 2 \cdot \left(\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \right) = \left(2 \cdot \frac{1}{2} + 2 \cdot \frac{1}{4} + 2 \cdot \frac{1}{8} + \cdots \right) $$

For the infinite geometric sum, we can then solve these equations to get

$$ 1 + \frac{1}{2} + \frac{1}{4} + \cdots = 2$$

without any notion of limits even being involved!
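Writing $S$ for the full sum and $T$ for the tail $\frac12+\frac14+\cdots$, the two displayed properties become the linear equations $S = 1 + T$ and $2T = S$; a small sketch checking the solution (my illustration):

```python
from fractions import Fraction

# Let S = 1 + 1/2 + 1/4 + ...  and  T = 1/2 + 1/4 + 1/8 + ...
# The two displayed properties read:
#   (1)  S = 1 + T      (split off the leading term)
#   (2)  2*T = S        (double the tail term by term)
# Substituting (1) into (2): 2*T = 1 + T, hence T = 1 and S = 2.
T = Fraction(1)
S = 1 + T
assert 2 * T == S   # property (2) is satisfied
print(S)            # 2
```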