I'm working on a homemade code that needs its own implementation of the spherical Bessel functions of the first kind. The order is restricted to non-negative integers, and only real arguments are required. I did a little research and found that a combination of a power series and forward recurrence might do the job:
$$j_{l}(x) = \frac{2l-1}{x}j_{l-1}(x) - j_{l-2}(x)$$ $$j_l(x) = x^l\sum_{k=0}^{+\infty} \frac{(-x^2/2)^k}{k!(2l+2k+1)!!}$$
Roughly speaking, the recurrence relation has poor numerical stability for $x \ll l$, which is exactly where the power series is very efficient. I ran some numerical benchmarks and found that switching between the two methods at $x \sim 0.75\,l$ works, but I'm not able to give a proof or analysis of this finding.
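For reference, here is a minimal Python sketch of the scheme I have in mind (the function names, the series cutoff tolerance, and the structure are arbitrary choices of mine; the recurrence branch assumes $x > 0$ so the seed values are well-defined):

```python
import math

def j_series(l, x, tol=1e-17):
    """Power series: j_l(x) = x^l * sum_k (-x^2/2)^k / (k! (2l+2k+1)!!)."""
    # (2l+1)!! for the k = 0 term of the sum
    dfact = 1.0
    for m in range(3, 2 * l + 2, 2):
        dfact *= m
    term = 1.0 / dfact          # k = 0 term (the factor x^l is applied at the end)
    total = term
    k = 0
    while abs(term) > tol * abs(total):
        k += 1
        # a_k / a_{k-1} = (-x^2/2) / (k * (2l+2k+1))
        term *= (-x * x / 2.0) / (k * (2 * l + 2 * k + 1))
        total += term
    return total * x**l

def j_recur(l, x):
    """Upward recurrence j_n = (2n-1)/x * j_{n-1} - j_{n-2}, seeded with j_0, j_1."""
    jm2 = math.sin(x) / x                             # j_0(x) = sin(x)/x
    if l == 0:
        return jm2
    jm1 = math.sin(x) / (x * x) - math.cos(x) / x     # j_1(x)
    for n in range(2, l + 1):
        jm2, jm1 = jm1, (2 * n - 1) / x * jm1 - jm2
    return jm1

def j_hybrid(l, x):
    """Empirical switch at x ~ 0.75 l, the threshold found by benchmarking."""
    return j_series(l, x) if x < 0.75 * l else j_recur(l, x)
```

The only tuning knob is the $0.75\,l$ transition point, which is exactly the part I can't justify analytically.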
I'm wondering, for the recurrence relation, can we know at what $x$ the numerical instability sets in?
The recurrence relation is the same for the regular and irregular solutions $j_n$ and $y_n$. Which one you get, or which linear combination of them, is determined by the two "seed" values (presumably for $n=0$ and $n=1$) that start your recursion. In your case you will of course want to use $j_0(x)$ and $j_1(x)$ as the seed values. Finite precision means that you always have a small admixture of the unwanted solution, but initially this is a small error. Numerical instability is the situation where the unwanted solution grows faster than the wanted solution, thereby amplifying the error as the recursion continues to higher $n$. Along the way, each step of the finite-precision arithmetic introduces further small errors, and we have to look at the growth of all of them.
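For completeness, the closed forms usually used for those two seed values (standard identities; this trivial sketch assumes $x \neq 0$):

```python
import math

def j0(x):
    # j_0(x) = sin(x) / x   (the x -> 0 limit is 1, not handled here)
    return math.sin(x) / x

def j1(x):
    # j_1(x) = sin(x) / x^2 - cos(x) / x
    return math.sin(x) / (x * x) - math.cos(x) / x
```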
We can do this by observing that at each point in the recursion, the last two values are the only information we keep. We can write them as a vector: $$ (f_{n-1}, f_n) = \alpha (j_{n-1}, j_n) + \beta (y_{n-1}, y_n), $$ where we want $\alpha$ to be 1 and $\beta$ to be 0. Now consider a single error introduced somewhere along the way. It changes $\alpha$ and $\beta$, so they will no longer be exactly 1 and 0, but with only that one error introduced they then stay fixed until the end of the recursion. The growth of this error is therefore determined by the growth of the ratio between $y_n$ and $j_n$, which indeed grows quite fast for large $n$. Expansions similar to the ones you already showed give, for $n \rightarrow \infty$: $$ \left|\frac{y_n}{j_n}\right| \rightarrow \frac{(2n+1)!! \ (2n-1)!!}{x^{2n+1}} $$ Each step of the recursion multiplies this ratio by $(2n+1)(2n-1)/x^2$, which exceeds 1 once $n > x/2$ (up to a term of order one), so that is where the error starts being amplified. A small admixture of $y_n$ at some early stage can thus be a large error in the end, even though it is still the same amount of admixture of $y_n$, simply because $y_n$ becomes so big for large $n$. Note that for $n < x/2$ the errors are attenuated, so there the recursion is well-behaved.
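You can watch this happen numerically: run the forward recurrence at, say, $x = 10$ and compare against the power series, which for $n$ well above $x$ converges quickly and without cancellation, so it serves as a reference there. A sketch (the cutoff $10^{-18}$ and the probe orders are arbitrary choices of mine):

```python
import math

def j_series(l, x):
    # power series j_l(x) = x^l * sum_k (-x^2/2)^k / (k! (2l+2k+1)!!);
    # for l well above x the terms decrease monotonically: no cancellation
    dfact = 1.0
    for m in range(3, 2 * l + 2, 2):
        dfact *= m
    term = total = 1.0 / dfact
    k = 0
    while abs(term) > 1e-18 * abs(total):
        k += 1
        term *= (-x * x / 2.0) / (k * (2 * l + 2 * k + 1))
        total += term
    return total * x**l

def j_upward(nmax, x):
    # forward recurrence seeded with j_0, j_1; returns the list j_0 .. j_nmax
    vals = [math.sin(x) / x, math.sin(x) / x**2 - math.cos(x) / x]
    for n in range(2, nmax + 1):
        vals.append((2 * n - 1) / x * vals[-1] - vals[-2])
    return vals

x = 10.0
up = j_upward(30, x)
for n in (3, 5, 15, 30):
    ref = j_series(n, x)
    print(n, abs(up[n] - ref) / abs(ref))
# the relative error stays tiny up to n ~ x/2 = 5, then is multiplied by
# roughly (2n+1)(2n-1)/x^2 per step and is catastrophic well before n = 30
```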
PS: this doesn't mean you have to abandon the recursion exactly when $n > x/2$; you can admit some growth of the errors past that point, especially because the recursion is so well-behaved below that value. Besides, the alternative (the power series) is also not error-free, so the choice based on your benchmark may be fine, if the errors are still acceptable.