I have the following initial value problem
$$ \frac{d\theta}{dt} = A(p-\theta) + B(\omega-\theta) $$ subject to the initial condition
$$ \theta(0)=\theta_0 $$
and the constitutive set of equations $$ \omega(t)=\theta(t)-h\phi(t) $$
$$ \phi(t)=-\sum_{j=1}^N C_j\int_0^t e^{-\lambda_j^2(t-x)}\,\omega'(x)\,dx $$
$$\omega(0)=0$$ where the prime denotes a temporal derivative, and $\theta_0$, $A$, $B$, $p$, and $h$ above are all real constants, as are the Fourier-Bessel coefficients $C_j$ and the eigenvalues $\lambda_j$.
With the use of the Convolution Theorem, and skipping details, the system above can be Laplace-transformed to
$$ \Theta(s)=\frac{s\theta_0+Ap}{sF(s)} $$
where
$$ F(s)=\Big(1+hs\sum_{j=1}^N \frac{C_j}{s+\lambda_j^2}\Big)(s+A+B)-B $$
The denominator has a pole at $s=0$, and the $N$ other poles are the zeroes of $F(s)$, all of which lie on the negative real axis between $0$ and $-\lambda_N^2$. They are easily determined by a bisection algorithm, since each lies between consecutive asymptotes to $\pm\infty$ at $s=-\lambda_j^2$, and the crossing of the real axis is guaranteed by the property $C_jC_{j-1}<0$ $\forall j$. Accordingly, the solution in physical space can be written using the Cauchy Residue Theorem as
$$ \theta(t)=p+\sum_{j=1}^N \Big(\frac{\bar z_j \theta_0 +Ap}{\bar z_j F'(\bar z_j)}\Big)e^{\bar z_j t} $$
where the prime denotes a derivative w.r.t. $s$ so that
$$ F'(s)=\Big(1+hs\sum_{j=1}^N \frac{C_j}{s+\lambda_j^2}\Big)+h(s+A+B)\Big[\sum_{j=1}^N \frac{C_j}{s+\lambda_j^2}-s\sum_{j=1}^N \frac{C_j}{(s+\lambda_j^2)^2}\Big] $$
and $\bar z_j$, $j=1,\dots,N$, are the zeroes of $F(s)$. So far, so good.
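In case it helps future readers, here is a minimal sketch of that bisection step in Python, with made-up stand-in values for $A$, $B$, $h$, the $C_j$ and the $\lambda_j^2$ (the real ones come from the Fourier-Bessel expansion). Rather than relying on the bracketing analytically, it scans a fine grid over $(-\lambda_N^2,0)$ for sign changes, skips brackets that straddle an asymptote, and bisects the rest:

```python
import numpy as np
from scipy.optimize import bisect

# Stand-in parameters (hypothetical; the real C_j and lambda_j^2 come from
# the Fourier-Bessel expansion). Note the alternating signs of C_j.
A, B, h = 1.0, 0.5, 0.2
lam2 = np.array([1.0, 4.0, 9.0])        # lambda_j^2, increasing
C = np.array([0.6, -0.3, 0.15])

def F(s):
    return (1.0 + h*s*np.sum(C/(s + lam2)))*(s + A + B) - B

# Scan (-lambda_N^2, 0) on a fine grid; a sign change between two grid
# points that does not straddle a pole of F at s = -lambda_j^2 brackets
# a genuine zero, which bisection then refines.
grid = np.linspace(-lam2[-1] + 1e-6, -1e-6, 20001)
roots = []
for a, b in zip(grid[:-1], grid[1:]):
    if any(a < -l2 < b for l2 in lam2):
        continue                         # asymptote, not a zero
    if F(a)*F(b) < 0:
        roots.append(bisect(F, a, b))
print(roots)
```

With the real Fourier-Bessel data, one would simply substitute the actual coefficients; the grid scan also generalises to a wider interval if needed.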
The Issue with the Solution:
The solution above does not satisfy the initial condition $\theta(0)=\theta_0$, despite this having been taken into account via the Laplace transform of the derivative in the original ODE. The very-short-time solution is therefore inaccurate, although it does tend to the correct long-term behaviour.
Interestingly, as the number of terms in the Fourier-Bessel series for the flux increases, the initial condition does tend to be satisfied. But shouldn't it be satisfied regardless? Am I missing some crucial detail?
If I use a function-sampling method like Gaver-Stehfest, the inverse transform yields the correct short-time solution even at very short elapsed times from $t=0$. See the graphic below.
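For completeness, the Gaver-Stehfest step is easy to sketch. This is a generic implementation of the standard Stehfest weights (not my production code), sanity-checked here against the known pair $1/(s+1)\leftrightarrow e^{-t}$ rather than against $\Theta(s)$:

```python
import math

def stehfest_weights(N):
    """Standard Stehfest weights V_i for even N (N ~ 10-16 in double precision)."""
    M = N // 2
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, M) + 1):
            s += (k**M * math.factorial(2*k) /
                  (math.factorial(M - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2*k - i)))
        V.append((-1)**(M + i) * s)
    return V

def gaver_stehfest(Fhat, t, N=12):
    """Approximate the inverse Laplace transform of Fhat at time t > 0."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[i-1] * Fhat(i*a) for i in range(1, N + 1))

# sanity check on a transform with a known inverse: 1/(s+1) <-> exp(-t)
approx = gaver_stehfest(lambda s: 1.0/(s + 1.0), 0.5)
```

Applying it to the problem above only requires passing $\Theta(s)$ as `Fhat`; since the method samples $\Theta$ only at real $s>0$, it never needs the poles at all.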
Any suggestions would be deeply appreciated. Thanks in advance.

DAMN! Fifteen minutes after posting a $50$-point bounty, I discovered the answer to my problem. I had assumed that there were only $N$ zeroes of $F(s)$. It turns out that there is an additional isolated pole far beyond the asymptote to $+\infty$ at $s=-\lambda_N^2$, as indicated in the graphic below (note that the abscissa is $-s$). Once I incorporated this additional pole, the solution worked out perfectly.
I still have a question however:
At the initial condition, the solution is
$$ \theta(0)=p+\sum_{j=1}^{N+1} \Big(\frac{\bar z_j \theta_0 +Ap}{\bar z_j F'(\bar z_j)}\Big) =p\Big(1+A\sum_{j=1}^{N+1}\frac{1}{\bar z_j F'(\bar z_j)}\Big) +\theta_0\sum_{j=1}^{N+1}\frac{1}{F'(\bar z_j)} $$
which would imply that the following conditions have to hold: $$ 1+A\sum_{j=1}^{N+1}\frac{1}{\bar z_j F'(\bar z_j)}=0 $$
and $$ \sum_{j=1}^{N+1}\frac{1 }{F'(\bar z_j)}=1 $$
and while I can verify numerically that they do, I'm unable to prove analytically that this is invariably the case. Any pointers?
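For what it's worth, the numerical check can be automated. Below is a sketch in Python with hypothetical stand-in coefficients (alternating signs, chosen here so that $\sum_j C_j=0$; the real values come from the Fourier-Bessel expansion). Clearing the denominators $\prod_j(s+\lambda_j^2)$ turns $F(s)=0$ into a polynomial of degree $N+1$, which delivers all $N+1$ zeroes at once and also makes their count explicit:

```python
import numpy as np

# Hypothetical stand-in parameters (alternating-sign C_j, chosen here with
# sum(C) = 0); the real values come from the Fourier-Bessel expansion.
A, B, h = 1.0, 0.5, 0.2
lam2 = np.array([1.0, 4.0, 9.0])
C = np.array([0.6, -0.8, 0.2])

# Write F(s) = P(s)/Q(s) with Q(s) = prod_j (s + lambda_j^2); the zeroes
# of F are then the roots of the degree-(N+1) polynomial P.
Q = np.poly1d(np.poly(-lam2))
S = sum(C[j] * np.poly1d(np.poly(np.delete(-lam2, j))) for j in range(len(C)))
P = (Q + np.poly1d([h, 0.0]) * S) * np.poly1d([1.0, A + B]) - B * Q
z = P.roots                                      # all N+1 zeroes of F(s)

# At a simple root z_j of P, F'(z_j) = P'(z_j)/Q(z_j)
Fp = np.polyval(P.deriv(), z) / np.polyval(Q, z)
id1 = 1.0 + A * np.sum(1.0/(z * Fp))             # should vanish
id2 = np.sum(1.0/Fp)                             # should equal 1
print(id1, id2)
```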
Thanks.