I am trying to evaluate the Hammersley–Chapman–Robbins bound for the variance of an unbiased estimate $\hat{\alpha}$ of $\alpha$ (for a given $\sigma$) for the Rice distribution:
$$p(x|\alpha,\sigma) = \frac{x}{\sigma^2}\exp\left(-\frac{x^2+\alpha^2}{2\sigma^2}\right)I_0\left(\frac{x\alpha}{\sigma^2}\right).$$
I know that:
$$\mathrm{Var}(\hat{\alpha}) \ge \sup_\Delta \frac{\Delta^2}{E_{\alpha}\!\left[\left(\frac{p(x|\alpha+\Delta)}{p(x|\alpha)} - 1\right)^2\right]} = \sup_\Delta
\frac{\Delta^2}{\int_0^\infty\frac{p(x|\alpha+\Delta)^2}{p(x|\alpha)}\,\mathrm{d}x-1}.$$
I need to evaluate this bound and find the $\Delta$ at which the supremum is attained.
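For a concrete numerical check, here is a sketch in Python/SciPy (an alternative to Mathematica; the function names are my own) that evaluates the denominator and the bound for given $\alpha$, $\sigma$, $\Delta$. Working with the log-density and the exponentially scaled Bessel function `ive` avoids overflow in $I_0$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive

def log_rice_pdf(x, alpha, sigma):
    # log of the Rice pdf; ive(0, z) = exp(-|z|) * I0(z), so
    # log I0(z) = log(ive(0, z)) + |z| stays finite for large z
    z = x * alpha / sigma**2
    return (np.log(x) - 2.0 * np.log(sigma)
            - (x**2 + alpha**2) / (2.0 * sigma**2)
            + np.log(ive(0, z)) + np.abs(z))

def hcr_denominator(delta, alpha, sigma):
    # chi-square-type term: int_0^inf p(x|a+d)^2 / p(x|a) dx - 1,
    # evaluated in log space so the ratio never over/underflows
    integrand = lambda x: np.exp(2.0 * log_rice_pdf(x, alpha + delta, sigma)
                                 - log_rice_pdf(x, alpha, sigma))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val - 1.0

def hcr_bound(delta, alpha, sigma):
    # Hammersley-Chapman-Robbins lower bound for one fixed offset delta
    return delta**2 / hcr_denominator(delta, alpha, sigma)
```

For small $|\Delta|$ the value should approach the Cramér–Rao bound; at high SNR (e.g. $\alpha = 5$, $\sigma = 1$) that is roughly $\sigma^2$.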
I suppose that in my case there is no analytic solution, so I perform a numerical evaluation (maybe I am wrong, but I see no way to solve it in closed form).
It seems clear that the extremum should be somewhere around (or equal to) $\Delta=-\alpha$. But the computation (I am using Wolfram Mathematica) blows up because the denominator becomes very small. I have tried the various methods Mathematica offers, with different accuracy and precision settings, but I still get poor results (low accuracy, many error messages) at a painfully large cost in time and effort. So my questions are:
- Is there a simpler way to evaluate this bound, or a fast and efficient method for the extremum search? I have tried FindMaximum and NMaximize after performing the integration numerically.
- The parameter $\alpha\geq 0$. When searching for the extremum, should I use unconstrained optimization over $-\infty < \Delta < \infty$, or restrict to $\Delta\geq 0$? I suppose the former is correct, but then the trivial answer is again $\Delta=-\alpha$, which yields the Cramér–Rao bound (and is not interesting to me).
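On the search range: the shifted parameter must itself be admissible, which suggests the constraint $\alpha + \Delta \geq 0$, i.e. $\Delta \geq -\alpha$ (with $\Delta \neq 0$). A self-contained sketch in Python/SciPy (again an alternative to Mathematica; the grid limits and function names are my own choices) that scans a grid on both sides of zero while staying away from the troublesome points $\Delta = -\alpha$ and $\Delta = 0$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive

def log_rice_pdf(x, alpha, sigma):
    # log Rice pdf via the scaled Bessel function to avoid overflow in I0
    z = x * alpha / sigma**2
    return (np.log(x) - 2.0 * np.log(sigma)
            - (x**2 + alpha**2) / (2.0 * sigma**2)
            + np.log(ive(0, z)) + np.abs(z))

def hcr_ratio(delta, alpha, sigma):
    # Delta^2 / (int p(x|a+d)^2 / p(x|a) dx - 1), the quantity to maximize
    integrand = lambda x: np.exp(2.0 * log_rice_pdf(x, alpha + delta, sigma)
                                 - log_rice_pdf(x, alpha, sigma))
    chi2, _ = quad(integrand, 0.0, np.inf, limit=200)
    return delta**2 / (chi2 - 1.0)

alpha, sigma = 5.0, 1.0
# admissible offsets: Delta > -alpha; exclude Delta = 0 (a 0/0 limit) and
# the endpoint Delta = -alpha, where the denominator behaves badly
deltas = np.concatenate([np.linspace(-alpha + 0.05, -0.05, 60),
                         np.linspace(0.05, 3.0 * sigma, 40)])
vals = np.array([hcr_ratio(d, alpha, sigma) for d in deltas])
best = np.argmax(vals)
print(f"Delta* ~ {deltas[best]:.3f}, HCR bound ~ {vals[best]:.4f}")
```

The grid maximum can then be refined locally, e.g. with FindMaximum in Mathematica or `scipy.optimize.minimize_scalar` bracketed around `deltas[best]`; the grid keeps the refinement step away from the degenerate points.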