To invert $\frac{s}{(s - 2)(s + 3)}$ one might split it into partial fractions: $$ \frac{s}{(s - 2)(s + 3)} = \frac{A}{s - 2} + \frac{B}{s + 3}, $$ solve for $A$ and $B$, and then invert each fraction on the RHS.
Now, since $$ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots, $$ the transform $\displaystyle \frac{e^{-s}}{s}$ might be expressed as $$ \frac{e^{-s}}{s} = \frac{1}{s} - 1 + \frac{s}{2!} - \frac{s^2}{3!} + \frac{s^3}{4!} - \cdots \tag{1} $$
if I'm not mistaken.
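(As a sanity check, the expansion can be verified symbolically, e.g. with SymPy:)

```python
import sympy as sp

s = sp.symbols('s')
# Laurent expansion of e^{-s}/s about s = 0, up to the s^3 term
expansion = sp.series(sp.exp(-s)/s, s, 0, 4).removeO()
print(expansion)
```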
One might try to obtain an approximation to the inverse Laplace transform of $e^{-s}/s$ at some time $t$ by inverting the terms on the RHS one by one, evaluating them at $t$, and summing until convergence is achieved (after a few tens or hundreds of terms, say). Yet this is apparently impossible, since most terms in Eq. (1) don't seem to be invertible. What's wrong with the approach described above?
I know that the inverse of $e^{-s}/s$ is well-known and can be found in any table of inverse Laplace transforms. Thus, my question is driven by pure curiosity. Thanks to anyone who could shed some light on this matter.
The signal we are working with is of exponential order $\alpha > 0$. The Laplace transform maps such a one-sided signal $f$ to a function $F: \mathbb{C}_{\geq\alpha} \to \mathbb{C}$ which is analytic on a half-plane contained in the open right half-plane and satisfies
$$\lim_{s \to \infty} F(s) = 0.$$
For rational transforms, this is precisely the condition of being strictly proper. Technically we need something slightly stronger than that, but we can ignore that detail for this discussion. Likewise, when computing an inverse transform, one must make sure the transform satisfies these technical requirements. Your original transform
$$F(s) = \frac{e^{-s}}{s}$$
does satisfy this requirement and so has a well-defined inverse transform. The problem is that no truncation of your series expansion does: any (non-trivial) truncation of the series leaves polynomial terms in $s$, yielding an improper transform that grows without bound, rather than vanishing, as $s \to \infty$. Without this decay, the contour integral which formally defines the inverse Laplace transform simply doesn't converge.
Instead, one has to approximate $e^{-s}$ by a sequence of proper rational functions which do satisfy the technical requirements while still, in the limit, converging to $e^{-s}$. This is where the infamous Padé approximant comes into play: we attempt a rational approximation of the function (instead of a purely polynomial one), which we know is well-behaved enough to have a well-defined inverse transform. In our case, the approximation only needs to be proper (not strictly proper), since you will multiply by $1/s$ anyway. For instance, with the $(1,1)$ approximant for $e^{-s}$ we get,
$$F(s) \approx \frac{1}{s} \frac{ 1 - \frac{1}{2} s}{1 + \frac{1}{2} s}$$
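As a quick worked example (using only the standard pairs $1/s \mapsto 1$ and $1/(s+a) \mapsto e^{-at}$), this $(1,1)$ approximation can be inverted by partial fractions:

$$\frac{1}{s}\,\frac{1 - \frac{1}{2}s}{1 + \frac{1}{2}s} = \frac{2 - s}{s(s + 2)} = \frac{1}{s} - \frac{2}{s + 2} \;\longmapsto\; 1 - 2e^{-2t} \quad (t \geq 0),$$

which already tends to the correct value $1$ as $t \to \infty$, though it starts at $-1$ rather than $0$; higher-order approximants sharpen the transition toward the true delayed step $u(t-1)$.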
For any proper rational Padé approximation of $e^{-s}$, the resulting approximation of $F(s)$ is strictly proper and, as such, has a well-defined inverse transform. The approximation is a bit messy and takes quite a few terms (in this case) to converge. Instead of demonstrating this algebraically, here is a figure plotting the approximate signals (computed via the impulse response) corresponding to various orders of Padé approximants of $e^{-s}.$
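If you want to reproduce such a plot yourself, one possible sketch uses SciPy (the helper name `pade_delay_step` is mine; `scipy.interpolate.pade` builds the approximant from the Taylor coefficients of $e^{-s}$, and the step response of $p/q$ equals the inverse transform of $p/(s\,q)$):

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade
from scipy import signal

def pade_delay_step(order, t):
    """Step response of the (order, order) Pade approximant of e^{-s},
    i.e. an approximate inverse Laplace transform of e^{-s}/s."""
    # Taylor coefficients of e^{-s}: (-1)^k / k!
    an = [(-1) ** k / factorial(k) for k in range(2 * order + 1)]
    p, q = pade(an, order)  # numpy.poly1d pair with e^{-s} ~ p(s)/q(s)
    sys = signal.TransferFunction(p.coeffs, q.coeffs)
    _, y = signal.step(sys, T=t)
    return y

t = np.linspace(0, 3, 301)
y = pade_delay_step(3, t)
# y approximates the delayed unit step u(t - 1) and settles to 1 for large t
```

Plotting `y` against `t` for several orders shows the increasingly sharp (but oscillatory) transition near $t = 1$.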
It is partly for this reason that Padé Approximants are used when performing classical control design against delays in the feedback loop.