I'm studying the Laplace transform right now and I'm trying to prove the following theorem:
Let $k\in\mathbb{N}$, let $f(x)$ have $k$ continuous derivatives, each of them locally integrable, and suppose each has its own constant $c_n$ such that $f^{(n)}(x)e^{-c_nx}$ is integrable over $(0,\infty)$. Then $$ \mathcal{L}\{f^{(k)}\}(p) = p^k\,\mathcal{L}\{f\}(p)\,-\,\sum_{j=0}^{k-1}p^jf^{(k-1-j)}_+(0) $$ for $\operatorname{Re}\{p\}>\max_n\{c_n\}$.
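Before worrying about the proof, here is a quick numeric sanity check of the $k=1$ case, $\mathcal{L}\{f'\}(p)=p\,\mathcal{L}\{f\}(p)-f_+(0)$. The trapezoidal approximation and the test function $f(t)=e^{-t}$ are my own choices for illustration, not part of the theorem:

```python
import math

def laplace(g, p, T=50.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform: integral of g(t)e^{-pt} over [0, T]."""
    h = T / n
    s = 0.5 * (g(0.0) + g(T) * math.exp(-p * T))
    for k in range(1, n):
        t = k * h
        s += g(t) * math.exp(-p * t)
    return h * s

f  = lambda t: math.exp(-t)    # f(t)  = e^{-t},  so L{f}(p)  =  1/(p+1)
fp = lambda t: -math.exp(-t)   # f'(t) = -e^{-t}, so L{f'}(p) = -1/(p+1)

p = 2.0
lhs = laplace(fp, p)                # L{f'}(p)
rhs = p * laplace(f, p) - f(0.0)    # p * L{f}(p) - f_+(0)
print(lhs, rhs)                     # both approximately -1/3
```

For this exponentially decaying $f$ the truncation at $T=50$ is negligible, so the two sides agree to many digits.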
The proof is a straightforward induction, starting with $f'$, where via integration by parts you arrive at $$ \mathcal{L}\{f'\}(p) = \int_0^\infty f'(t)e^{-pt}\,dt = \lim_{t\to\infty}e^{-pt}f(t) - f_+(0) + p\,\mathcal{L}\{f\}(p). $$
Now it would seem logical to me that the leftmost term, $\lim_{t\to\infty}e^{-pt}f(t)$, must be zero: for $\operatorname{Re}\{p\}>\max_n\{c_n\}$ the function $e^{-pt}f(t)$ is integrable, and no integrable function can tend to a nonzero value at $\infty$ (otherwise the integral would be infinite). But in my materials this isn't the end of the proof at all! The author goes on to show that
$$
\psi_p := e^{-pt}f(t)
$$
has, for a fixed $p$ large enough, an integrable derivative, and then invokes the Newton–Leibniz formula
$$
\psi_p(y) - \psi_p(x) = \int_x^y \psi_p'(t)\,dt
$$
to show that the Bolzano–Cauchy condition holds, so that $\lim_{t\to\infty}\psi_p(t)$ exists. Why do I need that? How is it not always fulfilled for integrable functions?
It turns out there can be a function that is nonzero on a countably infinite union of sets that get smaller as $x \to \infty$, similar to this:
This function surely doesn't have limit zero at infinity, but as $x$ gets larger, only smaller and smaller intervals contribute to the integral. The integral then turns into basically an infinite sum, and you can surely construct the function so that it satisfies the theorem's requirements and the individual bump integrals behave like, for instance, $\sum \frac{1}{n^2}$.
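A concrete sketch of such a counterexample (my own construction, not necessarily the one pictured in the materials): a triangular bump of height $1$ at each integer $n$, with base $1/n^2$. Then $f(n)=1$ for every $n$, so $f$ has no limit at infinity, yet the total integral is $\sum_n \frac{1}{2n^2} = \frac{\pi^2}{12} < \infty$:

```python
import math

def bump(x, n):
    """Triangular bump centered at integer n: height 1, half-width 1/(2 n^2)."""
    w = 1.0 / (2 * n**2)
    return max(0.0, 1.0 - abs(x - n) / w)

def f(x, n_max=10_000):
    """Sum of shrinking bumps at the integers 1..n_max (bumps never overlap)."""
    n = round(x)
    if 1 <= n <= n_max:
        return bump(x, n)
    return 0.0

# f equals 1 at every integer, so f(x) does not tend to 0 as x -> infinity...
print(f(10.0), f(1000.0))     # both 1.0

# ...yet the integral of f is the sum of the triangle areas,
# area_n = (1/2) * base * height = (1/2) * (1/n^2) * 1:
total = sum(0.5 / n**2 for n in range(1, 10_001))
print(total)                  # close to pi^2 / 12
```

Multiplying such an $f$ by suitable exponentials keeps the construction within the theorem's hypotheses, which is exactly why the author cannot skip the Bolzano–Cauchy step.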