Cramér-Rao bound for a parameter that can take only a finite set of values


My question is about the bound on the variance of an estimator of a parameter that can take only a finite set of values.

I quote what is written on Wikipedia to establish a common starting point, and then give a first example with a parameter belonging to a continuous set.

Suppose $\theta$ is an unknown deterministic parameter which is to be estimated from measurements $x$, distributed according to some probability density function $f(x;\theta)$.

The variance of any unbiased estimator $\hat{\theta}$ of $\theta$ is then bounded by the reciprocal of the Fisher information $I(\theta)$:

$\mathrm{var}(\hat{\theta}) \geq \frac{1}{I(\theta)}$

$I(\theta) = \mathrm{E} \left[ \left( \frac{\partial \ell(x;\theta)}{\partial\theta} \right)^2 \right] = -\mathrm{E}\left[ \frac{\partial^2 \ell(x;\theta)}{\partial\theta^2} \right]$

where $\ell(x;\theta)=\log f(x;\theta)$ is the natural logarithm of the likelihood function and $\mathrm{E}$ denotes the expected value (over $x$).
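As a quick sanity check of this definition, here is a minimal Monte Carlo sketch (the parameter values are arbitrary, chosen only for illustration) that estimates $I(\theta)$ for a Gaussian model $x \sim \mathcal{N}(\theta,\sigma^2)$, where the score is $(x-\theta)/\sigma^2$ and the Fisher information is known in closed form to be $1/\sigma^2$:

```python
import numpy as np

# Numerically check I(theta) = E[(d ell/d theta)^2] for a Gaussian model
# x ~ N(theta, sigma^2). Here theta and sigma are arbitrary illustrative values.
rng = np.random.default_rng(0)
theta, sigma = 2.0, 0.5
x = rng.normal(theta, sigma, size=1_000_000)
score = (x - theta) / sigma**2          # d/dtheta of log f(x; theta)
fisher_mc = np.mean(score**2)           # Monte Carlo estimate of I(theta)
fisher_exact = 1.0 / sigma**2           # closed-form Fisher information
print(fisher_mc, fisher_exact)          # the two values should nearly agree
```

The Monte Carlo average converges to the closed-form value $1/\sigma^2 = 4$ as the sample size grows.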

As a simple case we can consider the estimation of a constant $\theta \in \mathbf{R}$ in an AWGN system (Additive White Gaussian Noise).

We measure $x=\theta+ \nu$, where $\theta$ is our constant and $\nu$ is a Gaussian random variable with zero mean and variance $\sigma^2_{\nu}$.

It is a known result that, given a set of $N$ measurements $x[n]$, $n=0,\dots,N-1$, in a situation that can be modelled as

$x[n]=s[n;\theta]+\nu[n]$

where the $\nu[n]$ are i.i.d. Gaussian random variables $\sim \mathcal{N}(0,\sigma^2_{\nu})$, the variance of any unbiased estimator satisfies

$\mathrm{var}(\hat{\theta}) \geq \sigma^2_{\nu}\left(\sum_{n=0}^{N-1}\left(\frac{\partial s[n;\theta]}{\partial \theta}\right)^2\right)^{-1}$

In this simple case the output of our ML estimator is simply $\hat{\theta}=x$, and the variance of the estimate is $\sigma^2_{\nu}$.

This is a simple case: the derivative is easy to calculate, and we have only one measurement ($N=1$):

$s[n;\theta]=\theta \rightarrow \frac{\partial s[n;\theta]}{\partial \theta}=1$
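The claim that $\hat{\theta}=x$ attains the bound can be verified numerically. The sketch below (with arbitrary illustrative values of $\theta$ and $\sigma_\nu$) draws many realizations of $x=\theta+\nu$ and compares the empirical variance of $\hat{\theta}$ with the CRLB:

```python
import numpy as np

# Monte Carlo check that the ML estimator theta_hat = x for x = theta + nu,
# nu ~ N(0, sigma_nu^2), with N = 1 attains the CRLB
# var(theta_hat) >= sigma_nu^2 / sum_n (ds/dtheta)^2 = sigma_nu^2.
# theta and sigma_nu are arbitrary illustrative values.
rng = np.random.default_rng(1)
theta, sigma_nu = 1.5, 0.8
trials = 500_000
theta_hat = theta + rng.normal(0.0, sigma_nu, size=trials)  # theta_hat = x
crlb = sigma_nu**2                       # ds/dtheta = 1, N = 1
print(theta_hat.var(), crlb)             # empirical variance matches the bound
```

Here the estimator is efficient: its variance equals the Cramér-Rao lower bound exactly.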

That concludes my introduction; sorry if it was boring.

Now I come to my question. Suppose you have a parameter that can take only a finite set of values, for example only two values:

$\theta \in \{-1,+1\}$

If I asked you to create an estimator for this parameter, it would simply be a threshold detector:

$x>0 \rightarrow \hat{\theta}=+1$

$x<0 \rightarrow \hat{\theta}=-1$

We can also evaluate the variance of this estimator.
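For concreteness, here is a short simulation (a sketch with an arbitrary noise level $\sigma_\nu$) that measures the error probability and empirical variance of this threshold detector. Note that $\hat{\theta}$ takes values in $\{-1,+1\}$ and is biased in general ($\mathrm{E}[\hat{\theta}] \neq \theta$), so the classical unbiased-estimator CRLB does not directly apply:

```python
import numpy as np

# Simulate the threshold detector for theta in {-1, +1} with x = theta + nu,
# nu ~ N(0, sigma_nu^2), and measure its error probability and variance.
# sigma_nu is an arbitrary illustrative value.
rng = np.random.default_rng(2)
theta, sigma_nu = +1, 0.8
trials = 500_000
x = theta + rng.normal(0.0, sigma_nu, size=trials)
theta_hat = np.where(x > 0, 1, -1)        # threshold detector at zero
p_err = np.mean(theta_hat != theta)       # approximately Q(1/sigma_nu)
print(p_err, theta_hat.var())
```

With $p = P(\hat{\theta} \neq \theta) = Q(1/\sigma_\nu)$, the detector's variance works out to $4p(1-p)$, which the simulation reproduces.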

But what about a general theoretical minimum variance for an estimator of a parameter that can take only a finite set of values?

In this case the concept of a derivative no longer applies: $\frac{\partial s[n;\theta]}{\partial \theta}$ cannot be calculated as the limit of the variation of $s[n;\theta]$ as the increment of $\theta$ goes to zero, because $\theta$ cannot vary continuously.

Is this something known in the literature? Do I have to switch to a different kind of calculus, such as quantum calculus?

Thanks in advance to everybody for your help.