I am trying to prove the exercise on page 3 of http://depts.washington.edu/bdecon/workshop2012/g_stability.pdf. This question was already asked here: Linear instability implies nonlinear instability. In the proof given in the answer there, we let $Lv=\lambda v$ with $\operatorname{Re}\lambda > 0$ and take $\|\cdot\| = \|\cdot\|_{L^{\infty}([0,t);(X,\|\cdot\|_X))}$, i.e. the supremum over $[0,t)$ of the $X$-norm. The following estimate is then made:
$$\|u(t)-e^{Lt}\delta v\|\leq\|u(t)\|^2\int_0^te^{(\operatorname{Re}\lambda+\epsilon)(t-s)}\,ds\leq\frac{2}{\operatorname{Re}\lambda}\|u(t)\|^2$$ for $\epsilon$ sufficiently small. My question is: how is this second inequality true? It seems to require that $\int_0^te^{(\operatorname{Re}\lambda+\epsilon)(t-s)}\,ds\leq\frac{2}{\operatorname{Re}\lambda}$, but one can evaluate the integral directly: $\int_0^te^{(\operatorname{Re}\lambda+\epsilon)(t-s)}\,ds=\frac{e^{(\operatorname{Re}\lambda+\epsilon)t}-1}{\operatorname{Re}\lambda+\epsilon}$, which grows exponentially in $t$ for any $\epsilon>0$ (since $\operatorname{Re}\lambda>0$). The inequality is claimed to hold for all $t\geq 0$, but that clearly cannot be the case if the left-hand side grows exponentially while the right-hand side is constant. If someone could explain which step I am missing here, or provide their own proof of this claim, I would greatly appreciate it.
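For completeness, here is the computation behind that claim spelled out (the substitution $\tau=t-s$ and the shorthand $\mu:=\operatorname{Re}\lambda+\epsilon>0$ are mine): $$\int_0^te^{\mu(t-s)}\,ds=\int_0^te^{\mu\tau}\,d\tau=\frac{e^{\mu t}-1}{\mu}\longrightarrow\infty\quad\text{as } t\to\infty,$$ so the integral cannot be bounded by any constant uniformly in $t\geq 0$.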
Edit: Regarding these inequalities, the author of the answer to the referenced question writes:
"Note that we have extra subscripts next to the norm symbols. This is to indicate that these inequalities hold only for some initial amount of time (i.e. they should be interpreted as holding for sufficiently small $t$)."
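If I try to make that concrete (my own computation, not taken from the answer): the bound $\int_0^te^{(\operatorname{Re}\lambda+\epsilon)(t-s)}\,ds\leq\frac{2}{\operatorname{Re}\lambda}$ does hold on an initial time interval, since $$\frac{e^{(\operatorname{Re}\lambda+\epsilon)t}-1}{\operatorname{Re}\lambda+\epsilon}\leq\frac{2}{\operatorname{Re}\lambda}\iff t\leq\frac{1}{\operatorname{Re}\lambda+\epsilon}\log\!\left(1+\frac{2(\operatorname{Re}\lambda+\epsilon)}{\operatorname{Re}\lambda}\right),$$ which would be consistent with reading the inequalities as valid only for sufficiently small $t$.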