Why do I need Bolzano-Cauchy in proving the Laplace transform of a derivative?


I'm studying the Laplace transform right now and I'm trying to prove the following theorem:

Let $k \in \mathbb{N}$, let $f(x)$ have $k$ continuous derivatives, and let each derivative $f^{(n)}$ be locally integrable and have its own constant $c_n$ such that $f^{(n)}(x)e^{-c_n x}$ is integrable over $(0, \infty)$. Then $$ \mathcal{L}[f^{(k)}](p) = p^k\,\mathcal{L}[f](p) - \sum_{j=0}^{k-1} p^j f^{(k-1-j)}_+(0) $$ for $\operatorname{Re} p > \max_n c_n$.

The proof is a straightforward induction, starting with $f'$, where via integration by parts you arrive at the $k=1$ case [screenshot from my materials; different symbol for the transform, but I hope you get it].
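For concreteness, here is how I understand the $k=1$ step (my own reconstruction of the screenshot, writing $\mathcal{L}$ for the transform):

```latex
\mathcal{L}[f'](p)
  = \int_0^\infty f'(t)\,e^{-pt}\,dt
  = \Bigl[\,f(t)\,e^{-pt}\,\Bigr]_{t=0}^{t\to\infty}
    + p \int_0^\infty f(t)\,e^{-pt}\,dt
  = \lim_{t\to\infty} f(t)e^{-pt} \;-\; f_+(0) \;+\; p\,\mathcal{L}[f](p).
```

So $\mathcal{L}[f'](p) = p\,\mathcal{L}[f](p) - f_+(0)$ follows exactly when the boundary term $\lim_{t\to\infty} f(t)e^{-pt}$ vanishes, which is the term discussed below.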

Now it would seem logical to me that the leftmost term must be zero: for $\operatorname{Re} p > \max_n c_n$ the function is integrable, and no integrable function can tend to a nonzero value at $\infty$ (otherwise the integral would be infinite). But in my materials this isn't the end of the proof at all! The author goes on to show that $$ \psi_p(t) := e^{-pt}f(t) $$ has, for a fixed $p$ with real part large enough, an integrable derivative, and then invokes an inequality [screenshot in my materials]

to show that the Bolzano-Cauchy condition holds. Why do I need that? How is it not always fulfilled for integrable functions?
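For reference, here is how I read the author's argument (my own reconstruction, not a quote from the materials): since $\psi_p'$ is integrable on $(0, \infty)$, its tail integrals are small, and

```latex
\bigl|\psi_p(t_2) - \psi_p(t_1)\bigr|
  = \left| \int_{t_1}^{t_2} \psi_p'(s)\,ds \right|
  \le \int_{t_1}^{t_2} \bigl|\psi_p'(s)\bigr|\,ds
  \xrightarrow[\;t_1,\,t_2 \to \infty\;]{} 0,
```

so $\psi_p$ satisfies the Bolzano-Cauchy condition, hence $\lim_{t\to\infty}\psi_p(t)$ exists; and since $\psi_p$ is integrable, that limit can only be $0$.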

Best answer:

Turns out there can be a function that is nonzero on a countably infinite union of sets that get smaller as $x \to \infty$, similar to this:

[figure: a train of bumps of fixed height sitting on ever narrower intervals]

This function surely doesn't have limit zero at infinity, but as $x$ gets larger, only smaller and smaller intervals contribute to the integral. The integral then turns into essentially an infinite sum, and you can certainly arrange the function so that it satisfies the theorem's requirements and the individual bump integrals behave like, for instance, $\sum \frac{1}{n^2}$.
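A minimal concrete instance of this idea (my own construction, not taken from the answer): let $f(x) = 1$ on each interval $[n, n + 1/n^2]$ for $n = 1, 2, \ldots$ and $f(x) = 0$ elsewhere. Then $\int_0^\infty f = \sum 1/n^2 = \pi^2/6 < \infty$, yet $f(n) = 1$ for every integer $n$, so $f$ has no limit at infinity:

```python
import math

def f(x):
    """Indicator of the union of intervals [n, n + 1/n^2], n = 1, 2, ..."""
    n = math.floor(x)
    return 1.0 if n >= 1 and x <= n + 1.0 / n**2 else 0.0

# The integral of f over (0, inf) is the total length of the bumps,
# a convergent series -- approximate it by a large partial sum:
total = sum(1.0 / n**2 for n in range(1, 100_001))
print(total)        # close to pi^2/6 ~ 1.6449

# Yet f keeps returning to 1 arbitrarily far out:
print(f(1000.0))    # 1.0 (inside the bump at n = 1000)
print(f(1000.5))    # 0.0 (the bump at n = 1000 has width only 1e-6)
```

This is why integrability alone doesn't force $\psi_p(t) \to 0$: one also needs the Bolzano-Cauchy argument (via the integrable derivative) to rule out such oscillating tails.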