Understanding The Rigorous Definition Of Continuity


I have an intuitive understanding of continuity; however, I’m struggling to understand the rigorous definition.

I understand that a function is continuous when you can get infinitely close to each point of it, until you arrive at it. Basically, you can draw the whole thing without lifting your pencil.

I’m struggling to see, however, how this is reflected in the rigorous definition $\lim_{x\to c} f(x) = f(c)$. It seems to me that the rigorous definition is just saying that as $x$ gets closer to $c$, the limit of $f(x)$ will be $f(c)$. Or rather, as we sub in values of $x$ that are infinitely close to $c$, the value of the function becomes infinitely close to $f(c)$. But this definition is true even when the function isn’t continuous! For instance, this definition is true for a function with a point discontinuity.

So can someone explain to me, what the formal definition means, how it is only true for continuous functions, and how it is compatible with the 2 intuitive definitions I wrote above? Thank you.

Can you also not make the explanation too rigorous? I’m just learning calculus on Khan Academy, and still haven’t touched on things like epsilon-delta proofs yet.


There are 2 best solutions below

BEST ANSWER

"For instance, this definition is true for a function with a point discontinuity."

No, it is not. Try a function with what you are describing as a point discontinuity: $$ f(x)=\begin{cases}0,&x<0\\ 1,&x=0\\ 0,&x>0 \end{cases} $$ What happens at the origin? As we "sub in values" on the right or left, which are all $0$, are they actually all that close to $f(0)=1$?
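A quick numerical sketch of this (in Python; the function name is my own) makes the point concrete: sampling $x$ closer and closer to $0$ from either side, $f(x)$ stays at $0$ the whole way, and never gets anywhere near $f(0)=1$.

```python
def f(x):
    # the piecewise function above: 0 everywhere except f(0) = 1
    return 1 if x == 0 else 0

# sample points approaching 0 from the right and from the left
for n in range(1, 6):
    x = 10 ** -n
    print(x, f(x), f(-x))   # f is 0 on both sides, no matter how close x gets

print(f(0))                 # but f(0) = 1, so lim_{x->0} f(x) = 0, not f(0)
```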

To be fully rigorous, one would need to introduce $\delta$'s and $\epsilon$'s; after all, that's how we really make sense of the symbols $$ \lim_{x\to c}f(x). $$ My opinion: the symbols and logical statements may seem daunting, but learning them can actually head off some of the early confusion around the notation, which makes them well worth the effort, not to mention essential for higher math.

As an illustration, it would be easy to see that the above function, and any function with a jump discontinuity like the above, is not continuous using the following definition: $f$ is continuous at $c$ if for any level of precision $\epsilon>0$ we can find a corresponding $\delta>0$, with $|x-c|<\delta$ ensuring that $|f(x)-f(c)|<\epsilon$.

Note that the $\delta$ neighborhood is encapsulating the sampling you are talking about, and the $\epsilon>0$ is encoding how close we need the values of $f$ at all these sample points to be.

Well, let's pick a level of precision $\epsilon=1/2$ for our example. Is there any positive $\delta$ we can pick to make sure that $|x|<\delta$ implies $|f(x)-f(0)|<1/2$, or equivalently, $1/2<f(x)<3/2$?
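No: whatever $\delta > 0$ we try, the point $x = \delta/2$ satisfies $|x| < \delta$ but $|f(x) - f(0)| = 1 \geq 1/2$. A small sketch of that witness argument (Python; function names are my own, $f$ is the jump function from above):

```python
def f(x):
    # 0 everywhere except f(0) = 1
    return 1 if x == 0 else 0

def delta_fails(delta, eps=0.5, c=0):
    # exhibit a witness x with |x - c| < delta but |f(x) - f(c)| >= eps
    x = c + delta / 2            # inside the delta-neighborhood, and x != c
    return abs(f(x) - f(c)) >= eps

# every candidate delta fails for eps = 1/2, so f is not continuous at 0
for delta in (1.0, 0.1, 0.001, 1e-9):
    print(delta, delta_fails(delta))   # True every time
```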


The problem you are having most likely comes from your background, and hence from the functions you imagine when you say "discontinuous."

The easiest examples of discontinuous functions are indeed those that can be drawn with only one jump, e.g. $f(x)=\operatorname{sign}(x)$, i.e. $f(x) = -1$ when $x<0$ and $f(x) = 1$ when $x\geq 0$ (yes, for simplicity I define $\operatorname{sign}(0)=1$ here). For this function you can find points infinitely close to $0$ that are mapped infinitely close to $f(0)$. However, all these points are positive, so you can't do this from both sides.

Now with the following (rather useless) function this is not possible:

$f(x)=x$ for $x\neq0$, and $f(0)=10^{10}$
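To see this numerically (a Python sketch; the constant $10^{10}$ is taken from the definition above): values of $f$ near $0$ approach $0$, yet $f(0)$ is enormous, so the limit and the function value disagree at that one point.

```python
def f(x):
    # f(x) = x everywhere except at 0, where f(0) = 10**10
    return 10 ** 10 if x == 0 else x

# near 0, f(x) is just x, so the values shrink toward 0 ...
for x in (0.1, 0.01, -0.001):
    print(x, f(x))

# ... hence lim_{x->0} f(x) = 0, but f(0) = 10**10: not continuous at 0
print(f(0))
```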

I hope this example solves your problem.

Note that there are very weird functions out there, some of them even being nowhere continuous (i.e. you really can't draw them), and some even having a graph that is dense in the plane; drawn faithfully, such a graph would essentially be a black page.