I am trying to find the zeros of the Bessel polynomials numerically.
I found an article by Shafique Ahmed on the zeros of Bessel polynomials, where the Bessel polynomials (related to, but not the same as, Bessel functions) are given by
$$y_n(x) = \sum\limits_{k=0}^n\frac{(n+k)!}{(n-k)!k!}\left(\frac{x}{2}\right)^k$$
and (with coefficients in the opposite order)
$$\theta_n(x) = x^ny_n(1/x)=\sum\limits_{k=0}^n\frac{(n+k)!}{(n-k)!k!}\frac{x^{n-k}}{2^k}$$
The article by Ahmed explores recurrence relations involving sums of reciprocal differences of the zeros $y_j$ of $y_n(y_j) = 0$, and $z_j$ of $\theta_n(z_j)=0$, including:
$$\frac{1}{y_j}+\frac{1}{y_j^2} = \sum\limits_{k\neq j} \frac{1}{y_k - y_j}$$
$$-1-\frac{n}{z_j} = \sum\limits_{k\neq j} \frac{1}{z_k - z_j}$$
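As a quick sanity check (my own, not from the article), the $\theta_n$ relation can be verified numerically for a small $n$, where numpy.roots is still reliable:

```python
import numpy as np
from math import factorial

def theta_coeffs(n):
    """Coefficients of theta_n(x), highest power first (numpy.roots order):
    the coefficient of x^(n-k) is (n+k)!/((n-k)! k! 2^k)."""
    return [factorial(n + k) / (factorial(n - k) * factorial(k) * 2**k)
            for k in range(n + 1)]

n = 5
z = np.roots(theta_coeffs(n))
for j, zj in enumerate(z):
    lhs = -1 - n/zj
    rhs = sum(1/(z[m] - zj) for m in range(n) if m != j)
    assert abs(lhs - rhs) < 1e-8   # the recurrence relation holds at each zero
```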
I found another article by Campos and Calderón on modified Bessel polynomials of similar form:
$$y_n(z;a) = \sum\limits_{k=0}^n\frac{n!(n+a-1)_k}{(n-k)!k!}\left(\frac{z}{2}\right)^k$$
with $(x)_k$ being the Pochhammer symbol; Campos and Calderón don't say whether they mean the rising or falling version, but setting $a=2$ must recover the ordinary $y_n(z)$ above, which requires the rising factorial $(x)_k = x(x+1)\cdots(x+k-1)$. They give a similar recurrence relation for the zeros $z_j$ of these polynomials:
$$\sum\limits_{k\neq j}\frac{1}{z_j - z_k} + \frac{az_j + 2}{2z_j^2} = 0$$
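The same kind of numerical check works for the generalized polynomials at small $n$. Here is a sketch assuming the rising Pochhammer symbol (the check passing is consistent with that reading):

```python
import numpy as np
from math import factorial

def y_coeffs(n, a):
    """Coefficients of y_n(z; a), highest power first, using the *rising*
    Pochhammer (x)_k = x (x+1) ... (x+k-1)."""
    def poch(x, k):
        p = 1.0
        for i in range(k):
            p *= x + i
        return p
    c = [factorial(n) * poch(n + a - 1, k) / (factorial(n - k) * factorial(k) * 2**k)
         for k in range(n + 1)]
    return c[::-1]   # numpy.roots wants descending powers

n, a = 5, 3
z = np.roots(y_coeffs(n, a))
for j, zj in enumerate(z):
    resid = (sum(1/(zj - z[m]) for m in range(n) if m != j)
             + (a*zj + 2) / (2*zj**2))
    assert abs(resid) < 1e-8   # Campos and Calderon's relation holds
```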
and they go on to say:
> This set of nonlinear equations can be solved by standard methods. We have used a Newton method to solve them up to $n=500$ and $a=100$.
What does this mean, and how can I reproduce it on my own? I know about Newton's method, but I don't see how it applies here. I'm intrigued because the standard numpy.roots() function starts to fail for the zeros of $\theta_n(x)$ at around $n=18$, placing some of them in the right half-plane even though they should all lie in the left half-plane; finding the roots of a high-degree polynomial with large coefficients is an ill-conditioned problem.
Okay, well, I muddled my way to a solution using the multidimensional Newton's method for $\theta_n(z)=0$, but I'm not sure why it converges so well: I tried it for $n=1000$ and it converged in fewer than 10 iterations; it just took a while to do the $1000 \times 1000$ matrix math.
If $\mathbf z$ is a vector of $n$ complex numbers close to the roots, then we can compute $\mathbf F(\mathbf z)$, where the $j$th component of $\mathbf F$ is
$$F_j(\mathbf z) = 1+\frac{n}{z_j} + \sum\limits_{k\neq j} \frac{1}{z_k - z_j}$$
Then we are looking for one of the $n!$ solutions to $\mathbf F(\mathbf z) = \mathbf 0$ (one for each ordering of the $n$ distinct roots).
I found an empirical approximation of $z_k \approx \frac{2n}{3}(u_k^2-1) + jnu_k$, where $u_k = \frac{2k - n + 1}{n+1}$ for $k \in \{0, 1, \ldots, n-1\}$. (I'm an electrical engineer, so I use $j=\sqrt{-1}$.)
If I set $\mathbf z^{(0)}$ to this initial guess, then one step of Newton's method is
$$\mathbf z^{(i+1)} = \mathbf z^{(i)} - J^{-1}\!\left(\mathbf z^{(i)}\right)\mathbf F\!\left(\mathbf z^{(i)}\right)$$
where $J$ is the Jacobian matrix with entries $J_{jk} = \partial F_j/\partial z_k$, namely
$$J_{jj} = -\frac{n}{z_j^2} + \sum\limits_{k\neq j}\frac{1}{(z_k-z_j)^2}, \qquad J_{jk} = -\frac{1}{(z_k-z_j)^2} \quad (k \neq j)$$
(In practice one solves the linear system $J\,\Delta\mathbf z = \mathbf F$ rather than inverting $J$.)
In Python with numpy, I can implement this with
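Something like the following minimal sketch (the function name bessel_theta_zeros and the iteration limits are my own choices):

```python
import numpy as np

def bessel_theta_zeros(n, niter=25, tol=1e-12):
    """Find the zeros of theta_n by Newton's method on
    F_j(z) = 1 + n/z_j + sum_{k != j} 1/(z_k - z_j)."""
    # empirical initial guess described above
    k = np.arange(n)
    u = (2*k - n + 1) / (n + 1)
    z = 2*n/3 * (u**2 - 1) + 1j*n*u
    for _ in range(niter):
        F = np.zeros(n, dtype=complex)
        J = np.zeros((n, n), dtype=complex)
        for j in range(n):
            F[j] = 1 + n/z[j]
            J[j, j] = -n/z[j]**2
            for m in range(n):
                if m != j:
                    F[j] += 1/(z[m] - z[j])
                    J[j, m] = -1/(z[m] - z[j])**2
                    J[j, j] += 1/(z[m] - z[j])**2
        # Newton step: solve J * step = F instead of inverting J
        step = np.linalg.solve(J, F)
        z = z - step
        if np.max(np.abs(step)) < tol:
            break
    return z
```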
There's probably a faster way to implement this with vectorization rather than using explicit for-loops, but it works rather well; here's an example for $n=200$:
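In fact, the vectorized version is straightforward with numpy broadcasting; here is a self-contained sketch of the $n=200$ run (matplotlib assumed for the plot):

```python
import numpy as np
import matplotlib.pyplot as plt

n = 200

# empirical initial guess
k = np.arange(n)
u = (2*k - n + 1) / (n + 1)
z = 2*n/3 * (u**2 - 1) + 1j*n*u

for it in range(25):
    d = z[np.newaxis, :] - z[:, np.newaxis]   # d[j, m] = z_m - z_j
    np.fill_diagonal(d, 1.0)                  # placeholder, masked below
    r = 1.0 / d
    np.fill_diagonal(r, 0.0)                  # exclude the m == j terms
    F = 1 + n/z + r.sum(axis=1)
    J = -r**2                                 # off-diagonal Jacobian entries
    np.fill_diagonal(J, -n/z**2 + (r**2).sum(axis=1))
    step = np.linalg.solve(J, F)
    z = z - step
    if np.max(np.abs(step)) < 1e-12:
        break

print("iterations:", it + 1)
print("max |F|:", np.max(np.abs(F)))

plt.plot(z.real, z.imag, '.')
plt.xlabel('Re z')
plt.ylabel('Im z')
plt.title(r'Zeros of $\theta_{200}(z)$')
plt.show()
```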
which prints and plots: