Proof of limit using change of variables, help with a minor detail


So I'm having a problem trying to make sense of a small detail in proving the following theorem:

To prove that if $\displaystyle\lim_{x\to a}f(x)=L$ and $\displaystyle\lim_{x\to b}g(x)=a$, then $\displaystyle\lim_{x\to b}f(g(x))=\lim_{y\to a}f(y)=L$, we need to find, for every $\epsilon>0$, a $\delta>0$ such that $$|f(g(x))-L|<\epsilon\hspace{0.25in}\text{whenever}\hspace{0.25in}0<|x-b|<\delta$$ Since $\displaystyle\lim_{x\to a}f(x)=L$, there exists a $\delta_1>0$ such that $$|f(x)-L|<\epsilon\hspace{0.25in}\text{whenever}\hspace{0.25in}0<|x-a|<\delta_1$$ And, since $\displaystyle\lim_{x\to b}g(x)=a$, we can choose $\delta>0$ such that $$|g(x)-a|<\delta_1\hspace{0.25in}\text{whenever}\hspace{0.25in}0<|x-b|<\delta$$ The proof goes like this: $0<|x-a|<\delta_1\Rightarrow|f(x)-L|<\epsilon$ just says that if $x$ is within a distance $\delta_1$ of $a$ (but not equal to $a$), then $f(x)$ is within a distance $\epsilon$ of $L$. So, assuming $0<|x-b|<\delta$, it's true that $|g(x)-a|<\delta_1$, which supposedly means you can plug $g(x)$ into $f$ and get a number within a distance $\epsilon$ of $L$. Therefore, $|f(g(x))-L|<\epsilon$.

However, this proof overlooks that $g(x)$ can equal $a$ even when $x\ne b$. When that happens, $|g(x)-a|<\delta_1$ holds but $0<|g(x)-a|$ fails, so the implication from the first limit tells us nothing: since we have not assumed $f$ to be continuous (or even defined) at $x=a$, $f(g(x))=f(a)$ need not be within $\epsilon$ of $L$. Therefore, following this proof, $0<|x-b|<\delta$ doesn't always give $|f(g(x))-L|<\epsilon$, as long as there exists an $x\ne b$ within $\delta$ of $b$ such that $g(x)=a$. How can I make it work for all $x$ within $\delta$ of $b$?
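To see concretely why I'm worried, here is a minimal numerical sketch (a made-up pair $f$, $g$ of my own, not the functions of the theorem) in which $g$ takes the value $a=0$ at points arbitrarily close to $b=0$:

```python
def f(x):
    # lim_{x -> 0} f(x) = 0, but f(0) = 1: f is not continuous at 0
    return 1 if x == 0 else 0

def g(x):
    # constant function: lim_{x -> 0} g(x) = 0, yet g(x) = 0 even when x != 0
    return 0

# For x with 0 < |x - 0| < delta we still get g(x) = 0, so the condition
# 0 < |g(x) - 0| fails and f(g(x)) = f(0) = 1, which is not close to L = 0.
vals = [f(g(x)) for x in (0.1, 0.01, -0.005)]
print(vals)  # [1, 1, 1]
```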


You can't make it work. The statement is false.

Take, for instance, $f,g\colon\mathbb{R}\longrightarrow\mathbb{R}$ defined by$$f(x)=\begin{cases}1&\text{ if }x=0\\0&\text{ otherwise}\end{cases}\text{ and }g(x)=\begin{cases}0&\text{ if }x=\frac1n\text{ for some }n\in\mathbb N\\x&\text{ otherwise.}\end{cases}$$Then:

  • $\displaystyle \lim_{x\to0}f(x)=0$;
  • $\displaystyle \lim_{x\to0}g(x)=0$;
  • since $\displaystyle f\bigl(g(x)\bigr)=\begin{cases}1&\text{ if }x=\frac1n\text{ for some }n\in\mathbb N\\0&\text{ otherwise,}\end{cases}$ the limit $\displaystyle\lim_{x\to0}f\bigl(g(x)\bigr)$ doesn't exist.
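This failure can be checked numerically. The sketch below uses exact rationals (`fractions.Fraction`) so the test "is $x$ of the form $1/n$?" is reliable, avoiding floating-point round-off:

```python
from fractions import Fraction

def f(x):
    # f(x) = 1 if x = 0, else 0; so lim_{x -> 0} f(x) = 0 but f(0) = 1
    return 1 if x == 0 else 0

def g(x):
    # g(x) = 0 if x = 1/n for some positive integer n, else x
    if x != 0:
        recip = 1 / x
        if recip > 0 and recip == int(recip):
            return 0
    return x

# Along x = 1/n the composition is constantly 1 ...
ones = [f(g(Fraction(1, n))) for n in range(1, 6)]
# ... while along x = 2/(2n+1), which is never of the form 1/m, it is constantly 0.
zeros = [f(g(Fraction(2, 2 * n + 1))) for n in range(1, 6)]
print(ones, zeros)  # [1, 1, 1, 1, 1] [0, 0, 0, 0, 0]
```

Since both sequences of inputs tend to $0$ but the composition takes the values $1$ and $0$ along them, $\lim_{x\to0}f(g(x))$ cannot exist.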

This would not happen if the definition of $\displaystyle\lim_{x\to a}f(x)=l$ were$$(\forall\varepsilon>0)(\exists\delta>0):\lvert x-a\rvert<\delta\implies\bigl\lvert f(x)-l\bigr\rvert<\varepsilon.$$(Note that with this definition, taking $x=a$ gives $\lvert f(a)-l\rvert<\varepsilon$ for every $\varepsilon>0$, so $f(a)=l$; that is, this definition forces $f$ to be continuous at $a$.)