Can continuity always be shown by using ε-δ?


When we learn calculus we usually:
1. Prove that polynomials, exponential functions, logarithmic functions, trigonometric functions, and inverse trigonometric functions are continuous on their domains.
2. Prove that continuity of two functions at a point is preserved, in the resulting function, under finitely many applications of the four arithmetic operations and composition.
∴ Elementary functions are continuous on their domains.

Question:
For every elementary function $f$, and every point $p$ in the domain of $f$, and every $\epsilon > 0$,
can we always find a $\delta > 0$ explicitly as a closed-form expression (including "the maximum function") in terms of $f$, $p$ and $\epsilon$ such that $| x - p | < \delta \implies | f(x) - f(p) | < \epsilon$?
(In other words, can we always construct at least one valid $\epsilon$-$\delta$ style argument showing $f$ satisfies the definition of being continuous at $p$?)

After all, there are infinitely many elementary functions, but human beings have only a finite amount of time.

At the moment the definition of "elementary functions" follows this webpage:
https://www.encyclopediaofmath.org/index.php/Elementary_functions

There are 3 answers below.

Answer 1 (score 9):

The $\epsilon$-$\delta$ type argument you describe is not just an argument; it is the definition of a limit. If this argument does not work, then you cannot meaningfully assign any value to $\lim_{x \to p} f(x)$.

EDIT: Your question was edited, and I will add to my answer to reflect this. You have already accepted an answer that gave a sketch of a possible solution, so I will try to help formalize it.

We let $E$ denote the set of elementary functions, as per the definition in your link. We want some number to represent the complexity of an elementary function, which will allow us to conclude by induction. First, say an elementary function is atomic if it is one of the basic ones described in the link in your post, such as polynomials, exponentials, etc.

We assign this number by noting that any function $f \in E$ has a corresponding string of symbols of finite length, like $f_1 \circ f_2 + f_3$, where each $f_i$ denotes some atomic function. For ease of our induction, note that we do not need to include subtraction as an operation, since we can replace any instance of subtraction with addition and multiplication by $-1$. Indeed, if we also extend the definition of an atomic function to include $x \mapsto x^{-1}$, we do not need to consider division a separate operation either. The string above then contains two operations, $\circ$ and $+$.

We then define $\operatorname{rank} f$ to be the minimal number of operations used in any such string representation of $f$. The discussion above shows that such a string representation exists for every $f \in E$, so this minimum exists because the natural numbers are well ordered.

Now use the idea of the other user's answer to perform an induction on $\operatorname{rank} f$. For the base case, the elementary functions of rank 0 are precisely the atomic ones, for which there exist standard arguments producing such a $\delta$. In the inductive step, note that any function of rank $n + 1$ can be written as $f + g$, $f \cdot g$, or $f \circ g$ where $\operatorname{rank} f + \operatorname{rank} g = n$. From here, apply the standard arguments showing that the continuous functions are closed under addition, multiplication, and composition; each such argument produces an explicit $\delta$ from the ones supplied by the inductive hypothesis.

There are some tricky details to check if you'd like to be more fully rigorous. To handle these, you may consider using ideas from mathematical logic, in which talking about strings of function symbols is standard.
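As a sanity check, the string-of-symbols bookkeeping above can be sketched in a few lines of Python. This is only an illustration: the names `Atom`, `Op`, and `ops_used` are mine, not the answer's, and `ops_used` counts the operations in one particular representation, whereas $\operatorname{rank} f$ is the minimum of this count over all representations of $f$.

```python
from dataclasses import dataclass

# Minimal model of the "strings of symbols" described above.
# Atoms are the basic functions; Op nodes are +, * (subtraction and
# division having been eliminated as described), and composition.

@dataclass
class Atom:
    name: str                # e.g. "exp", "sin", "x^-1"

@dataclass
class Op:
    kind: str                # "+", "*", or "compose"
    left: object
    right: object

def ops_used(f):
    """Operations in this particular representation of f.
    rank f is the minimum of this over all representations."""
    if isinstance(f, Atom):
        return 0
    return 1 + ops_used(f.left) + ops_used(f.right)

# (exp o sin) + x^{-1}: two operations in this representation
expr = Op("+", Op("compose", Atom("exp"), Atom("sin")), Atom("x^-1"))
```

Induction on rank then becomes structural recursion over such trees, with the atomic case as the base.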

Answer 2 (score 2):

Take a look at a proof for the theorem "If $f$ and $g$ are continuous at $x_0$ then $f+g$ is continuous at $x_0$". It will almost surely take an arbitrary $\epsilon \gt 0$, get a $\delta _1$ from $f$ and a $\delta_2$ from $g$, and then say how to combine $\delta_1$ and $\delta_2$ to get a $\delta_0$ that will work for $f+g$. There are similar "combining $\delta$" rules for the three other arithmetical operations and for composing functions too, though they are more complicated than the rule for dealing with $f+g$.

This means that if you look at how your elementary function is built out of the basic functions, you can take the $\delta$s for the basic functions, apply the rules from the above-mentioned theorems, and get a valid $\delta$ for the original function.

This is obviously a very rough outline. If I were going to formalize it, I'd expect to create a top-down parse tree showing how any elementary function is composed out of the basic functions, and then to apply the $\delta$-combining rules from the bottom up. I think this would show how you could always come up with an $\epsilon$-$\delta$ continuity proof for any elementary function.
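One of the "combining $\delta$" rules this answer describes can be made concrete for $f+g$. In the sketch below, `delta_f` and `delta_g` are assumed callables mapping $\epsilon \mapsto \delta$ for $f$ and $g$ at a fixed point $p$ (hypothetical names, for illustration only):

```python
# Combining rule for f+g: if |f(x)-f(p)| < eps/2 and
# |g(x)-g(p)| < eps/2, then |(f+g)(x)-(f+g)(p)| < eps,
# so taking the min of the two deltas at eps/2 works.

def combine_sum(delta_f, delta_g):
    def delta_sum(eps):
        return min(delta_f(eps / 2), delta_g(eps / 2))
    return delta_sum

# Example: f(x) = x at p = 3 (delta = eps works), and
# g(x) = x^2 at p = 3, where delta = min(1, eps/7) works because
# |x^2 - 9| = |x-3| * |x+3| <= 7 * |x-3| whenever |x-3| <= 1.
delta_f = lambda eps: eps
delta_g = lambda eps: min(1.0, eps / 7)
delta_fg = combine_sum(delta_f, delta_g)

p, eps = 3.0, 0.01
d = delta_fg(eps)
```

A numeric spot-check at $x = p + 0.99\,\delta$ confirms that $|(f+g)(x)-(f+g)(p)| < \epsilon$ for this instance.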

Answer 3 (score 1):

Let me use $\delta(f,p,\epsilon)$ to denote the desired closed form expression.

Explicit formulas for $\delta(f,p,\epsilon)$ arise by use of induction. The basis step uses the very simplest functions $f$. The inductive step has several cases, one for each limit theorem that tells how to get new continuous functions from old ones; the formula can be extracted from the proof of the theorem.

It's not quite as simple as that, though, because as one goes along one discovers that one needs other kinds of inductive formulas, for example one needs an inductive formula for bounds on absolute values.

For the basis step, I'll just use two functions:

  • For the constant function $f(x)=c$, let $\delta(f,p,\epsilon)=1$.
  • For the identity function $f(x)=x$, let $\delta(f,p,\epsilon)=\epsilon$.

For the inductive step, I'll start with the easy one, the formula for $\delta$ of a sum of two functions:

  • $\delta(f+g,p,\epsilon)=\min\{\delta(f,p,\epsilon/2),\delta(g,p,\epsilon/2)\}$

Next I'll do one hard one, namely the formula for $\delta$ of a product of two functions. If you look at the proof that the limit of a product is the product of the limits, the key step is this inequality: $$|f(x) \cdot g(x) - f(p) \cdot g(p)| \le |f(x)| \cdot |g(x)-g(p)| + |g(p)| \cdot |f(x)-f(p)|$$ From this inequality, one can see the need for an upper bound on $|f(x)|$.

For this purpose, we also need formulas for the domain intervals $\operatorname{domain}(f)$ and $\operatorname{domain}(g)$, and from those one needs to derive a formula for the endpoints of a finite open interval $(c,d)=(c(f,g),d(f,g))$ which is contained in $\operatorname{domain}(f) \cap \operatorname{domain}(g)$. I'll assume that these formulas for $c$ and $d$ are available. Now let $$\delta_1 = \delta_1(f,g,p) = \min\left\{\frac{|p-c(f,g)|}{2},\frac{|p-d(f,g)|}{2}\right\}$$ which guarantees that $[p-\delta_1,p+\delta_1] \subset (c,d)$.

Next, one needs an explicit formula for an upper bound $B(f,p,\delta_1)$ so that $$|f(x)| \le B(f,p,\delta_1) \quad\text{for all $x \in [p-\delta_1,p+\delta_1]$} $$ Finally we have:

  • $$\delta(f \cdot g,p,\epsilon) = \min\{\delta_1(f,g,p),\delta(g,p,\frac{\epsilon}{2B(f,p,\delta_1(f,g,p))}), \delta(f,p,\frac{\epsilon}{2|g(p)|})\} $$
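The basis-step and inductive formulas so far can be run as code for the small fragment {constants, identity, $+$, $\cdot$}. Everything below is my own concretization for illustration: the answer leaves the bound $B$ abstract, so I use one inductive choice of $B$; the domain interval $(c,d)$ is fixed by assumption; the sketch assumes $B > 0$; and when $g(p)=0$ the last term of the min is vacuous (its factor in the key inequality is zero) and is dropped.

```python
# Inductive delta-formulas for constants, identity, sums, products,
# on an assumed common domain interval (C, D).  Each node supplies
# value(p), delta(p, eps), and an inductive upper bound
# bound(p, d1) for |f(x)| on [p - d1, p + d1].

C, D = 0.0, 10.0                 # assumed domain interval (c, d)

class Const:
    def __init__(self, c): self.c = c
    def value(self, p): return self.c
    def delta(self, p, eps): return 1.0          # basis step
    def bound(self, p, d1): return abs(self.c)

class X:
    def value(self, p): return p
    def delta(self, p, eps): return eps          # basis step
    def bound(self, p, d1): return abs(p) + d1

class Sum:
    def __init__(self, f, g): self.f, self.g = f, g
    def value(self, p): return self.f.value(p) + self.g.value(p)
    def delta(self, p, eps):
        return min(self.f.delta(p, eps / 2), self.g.delta(p, eps / 2))
    def bound(self, p, d1):
        return self.f.bound(p, d1) + self.g.bound(p, d1)

class Prod:
    def __init__(self, f, g): self.f, self.g = f, g
    def value(self, p): return self.f.value(p) * self.g.value(p)
    def delta(self, p, eps):
        d1 = min(abs(p - C) / 2, abs(p - D) / 2)   # delta_1(f,g,p)
        B = self.f.bound(p, d1)                    # B(f,p,delta_1), assumed > 0
        terms = [d1, self.g.delta(p, eps / (2 * B))]
        gp = abs(self.g.value(p))
        if gp > 0:                                 # g(p)=0 makes this term vacuous
            terms.append(self.f.delta(p, eps / (2 * gp)))
        return min(terms)
    def bound(self, p, d1):
        return self.f.bound(p, d1) * self.g.bound(p, d1)

# f(x) = x*x + 3 at p = 2, eps = 0.1
f = Sum(Prod(X(), X()), Const(3.0))
p, eps = 2.0, 0.1
d = f.delta(p, eps)
```

For this example the recursion yields $\delta = \epsilon/12$ (from $\delta_1 = 1$, $B = |p| + \delta_1 = 3$), and a numeric check at $x = p \pm 0.99\,\delta$ confirms $|f(x)-f(p)| < \epsilon$.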

Let me now summarize how this project could be finished.

As you go through the list of limit theorems, besides inductive formulas for $\delta(f,p,\epsilon)$ one needs also inductive formulas for endpoints of domains, inductive formulas for upper bounds of absolute values on closed intervals, and other such inductive formulas (for example, to get the formula for $\delta$ of a multiplicative inverse, you'll need formulas for positive lower bounds of the absolute value of a positive function). Each time, you look at the proof of a limit theorem, and you translate that proof into the desired formula for $\delta$, just as I have done for the limit of a product.

What kinds of limit theorems does one need to get all the elementary functions that you linked in your question? You can get the complete list of elementary functions (and a lot, lot more) starting from the constant function and the identity function, by using inductive formulas for the following limit theorems:

  • limit of a sum and product (already done above);
  • limit of a multiplicative inverse (then you get the limit of a quotient by combining this with limit of a product);
  • limit of an inverse function (for square roots and other uses);
  • limit of an indefinite integral (for inverse trig functions and logarithms).

You can then get the formulas for trig functions by applying inverse function formulas to inverse trig functions, and you can get the formulas for exponential functions by applying inverse function formulas to logarithms.