After reading through the section of my Calculus textbook about the epsilon delta definition of a limit, I thought I had a pretty good grasp of what it meant... until I started looking into the more complex epsilon delta proofs. Here's an example of such a proof that stumped me (I put my thoughts in between the proof in bold):
Prove that if $\lim\limits_{x\to c}f(x)=\infty$ and $\lim\limits_{x\to c}g(x)=L$, then $\lim\limits_{x\to c}[f(x)+g(x)]=\infty$
To show that the limit of $f(x)+g(x)$ is infinite, choose $M>0$. You then need to find $\delta>0$ such that $$[f(x)+g(x)]>M$$whenever $0<|x-c|<\delta$.
I'm pretty much already lost starting here. I do not understand what $M$ is doing or how it relates to the limit. The book never really deals with infinite limits or how to represent them with the $\varepsilon$-$\delta$ definition until this point, so if someone could elaborate a little more on this, that would be great.
For simplicity's sake, you can assume $L$ is positive and let $M_1=M+1$. Because the limit of $f(x)$ is infinite, there exists $\delta_1$ such that $f(x)>M_1$ whenever $0<|x-c|<\delta_1$.
Could someone also explain this part? What I'm really not understanding is: How does the $\varepsilon$-$\delta$ definition apply to an infinite limit? What is the proof trying to achieve with $M$ and $M_1=M+1$?
Also because the limit of $g(x)$ is $L$, there exists $\delta_2$ such that $|g(x)-L|<1$ whenever $0<|x-c|<\delta_2$.
This mostly makes sense to me, except isn't it supposed to be $|g(x)-L|<\color{red}{\varepsilon}$, not $|g(x)-L|<\color{red}1$?
By letting $\delta$ be the smaller of $\delta_1$ and $\delta_2$, you can conclude that $0<|x-c|<\delta$ implies $f(x)>M+1$ and $|g(x)-L|<1$.
I'm confused about why they chose $\delta$ to be the minimum of $\delta_1$ and $\delta_2$. What is the logic behind doing that? Also, I do not understand how they derived the following statement from choosing $\delta$ to be that particular value.
The second of these two inequalities implies that $g(x)>L-1$, and, adding this to the first inequality, you can write $$f(x)+g(x)>(M+1)+(L-1)=M+L>M.$$ Thus, you can conclude that $$\lim\limits_{x\to c}[f(x)+g(x)]=\infty.$$ I actually understand how they got the last inequality by combining the previous inequalities. My main problem is that I don't understand how they got those previous inequalities.
Overall, if someone could provide clarification on the comments I put in the proof, and maybe also provide an intuitive explanation of what the proof is doing (especially the $M$ and $M_1$ part), that would be greatly appreciated.
For $c\in \Bbb R,$ the exact meaning of $\lim_{x\to c}f(x)=\infty$ is $$\forall M'\in \Bbb R\,\exists \delta'\in\Bbb R^+ \,(0<|x-c|<\delta'\implies f(x)>M').$$ And $\lim_{x\to c}g(x)=L\in\Bbb R$ means $$\forall \epsilon\in\Bbb R^+\,\exists \delta''\in\Bbb R^+ \,(0<|x-c|<\delta''\implies |g(x)-L|<\epsilon).$$ We will only need the case $\epsilon=1.$
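To make the infinite-limit definition concrete, here is an illustrative instance of my own (not part of the proof above): for $f(x)=\frac{1}{(x-c)^2}$ you can write down a $\delta'$ explicitly for any given $M'$.

```latex
% Example: f(x) = 1/(x-c)^2 satisfies lim_{x->c} f(x) = infinity.
% Given any M', set M'' = max(M', 1) > 0 and delta' = 1/sqrt(M''). Then:
\[
0<|x-c|<\delta'=\frac{1}{\sqrt{M''}}
\implies (x-c)^2<\frac{1}{M''}
\implies f(x)=\frac{1}{(x-c)^2}>M''\ge M'.
\]
```

The point is that "the limit is $\infty$" never mentions $\varepsilon$; instead of trapping $f(x)$ within $\varepsilon$ of a number, you push $f(x)$ above an arbitrary threshold $M'$.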
For any $M\in\Bbb R,$ let $M'=M+|L|+1.$ Take $\delta'\in\Bbb R^+$ such that $0<|x-c|<\delta'\implies f(x)>M'.$
Take $\delta''\in\Bbb R^+$ such that $0<|x-c|<\delta''\implies |g(x)-L|<1.$
Let $\delta=\min(\delta',\delta'').$ Then $0<|x-c|<\delta$ forces both $0<|x-c|<\delta'$ and $0<|x-c|<\delta'',$ so both implications above apply simultaneously.
Now $0<|x-c|<\delta$ implies $|g(x)|\le|g(x)-L|+|L|<|L|+1$ by the triangle inequality, i.e. $-|g(x)|>-|L|-1.$ Since $g(x)\ge-|g(x)|,$ we get $$0<|x-c|<\delta\implies f(x)+g(x)\ge f(x)-|g(x)|>f(x)-|L|-1>M'-|L|-1=(M+|L|+1)-|L|-1=M.$$
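If it helps to see the machinery with numbers, here is a small numeric sanity check (my own illustration, with hypothetical choices $c=0$, $f(x)=1/x^2$, $g(x)=x+2$, so $L=2$): it picks $M'=M+|L|+1$ and $\delta=\min(\delta',\delta'')$ exactly as in the proof, then samples points in the punctured $\delta$-neighbourhood of $c$.

```python
import math

# Illustrative check, not part of the original proof:
# c = 0, f(x) = 1/x^2 (so lim f = +infinity), g(x) = x + 2 (so L = 2).
c, L = 0.0, 2.0
f = lambda x: 1.0 / x**2
g = lambda x: x + 2.0

M = 100.0
M_prime = M + abs(L) + 1          # M' = M + |L| + 1, as in the proof

# delta' for f: 1/x^2 > M' whenever 0 < |x| < 1/sqrt(M')
delta1 = 1.0 / math.sqrt(M_prime)
# delta'' for g: |g(x) - L| = |x| < 1 whenever 0 < |x| < 1
delta2 = 1.0
delta = min(delta1, delta2)

# Sample points inside the punctured delta-neighbourhood of c
samples = [c + t * delta for t in (0.5, 0.9, -0.5, -0.9, 0.1, -0.1)]
assert all(f(x) + g(x) > M for x in samples)
print("f(x) + g(x) > M holds at all sampled points")
```

Of course, a finite sample is not a proof; the point is only to watch how $M'$ and $\delta$ are manufactured from $M$, $L$, and the two hypotheses.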