Estimating $\alpha$ in the model $x_i = \xi_i \cdot \alpha$


We have $n$ sensors $X_i$ which estimate the scalar value $\alpha$ with different relative accuracies $\delta_i \ll 1$: $$ x_i = X_i(\alpha) = \xi_i \cdot \alpha, \ \ \ \xi_i \sim N(1, \delta_i) $$

How to find the best estimate of $\alpha$?

BEST ANSWER

We first rewrite the model in a more standard form. Take the noise $\epsilon_i=(\xi_i-1)/\delta_i$ as a function of $x_i,\alpha,\delta_i$: $$ \epsilon_i=\frac{x_i/\alpha-1}{\delta_i} $$ Clearly $\epsilon_i\sim N(0,1)$. Now we can apply least squares (treated as maximum likelihood here; the update below revisits this point): $$ \min_\alpha F(\alpha),\quad\text{where } F(\alpha)=\sum_i \epsilon_i^2 $$ The problem is simpler in terms of $a=1/\alpha$: $$ F(a)=\sum_i \frac{(x_i a-1)^2}{\delta_i^2} $$ Analytically, setting the derivative to zero: $$ F^\prime(a)=\sum_i \frac{2(x_i a-1)\,x_i}{\delta_i^2}=0 $$ So $$ a=\frac{\sum_i x_i/\delta_i^2}{\sum_i x_i^2/\delta_i^2} $$ and finally $$ \alpha=1/a=\frac{\sum_i x_i^2/\delta^2_i}{\sum_i x_i/\delta_i^2} $$
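The closed form above can be checked numerically. The following is a minimal sketch under hypothetical values (a true $\alpha=5$ and four sensors with assumed accuracies $\delta_i$); it computes the variant-1 estimate and confirms it sits at a minimum of $F(a)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true alpha and per-sensor relative accuracies (assumed values).
alpha_true = 5.0
delta = np.array([0.01, 0.02, 0.05, 0.1])

# Simulate x_i = xi_i * alpha with xi_i ~ N(1, delta_i^2).
x = alpha_true * rng.normal(1.0, delta)

# Closed-form minimizer of F(a) = sum_i (x_i a - 1)^2 / delta_i^2, with a = 1/alpha.
a_hat = np.sum(x / delta**2) / np.sum(x**2 / delta**2)
alpha_hat = 1.0 / a_hat

def F(a):
    # Weighted sum of squared residuals in terms of a = 1/alpha.
    return np.sum((x * a - 1.0)**2 / delta**2)

# Sanity check: small perturbations of a_hat should not decrease F.
print(alpha_hat, F(a_hat))
```

The estimate should land close to the true $\alpha$ because the most accurate sensor ($\delta=0.01$) dominates the weights.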

UPDATE VARIANT 2

Due to the very intensive discussion of my solution, I decided to update and possibly fix it. So let's again rewrite the model in a more standard form: $$ x_i=\alpha+\delta_i \alpha\,\epsilon_i, \quad\text{where }\epsilon_i \sim N(0,1) $$ Let's write the log-likelihood function in terms of $x_i,\delta_i,\alpha$ (dropping the constant $-\tfrac n2\log 2\pi$): $$ L(x,\alpha,\delta)=-\frac12\sum_{i=1}^n \frac{(x_i-\alpha)^2}{\delta_i^2 \alpha^2}-\sum_{i=1}^n \log(\alpha \delta_i) $$ The main difference from the initial approach is the $\log(\alpha)$ term: in my earlier transformation I rescaled the original $x_i$ by the unknown $\alpha$, which deforms the likelihood, and this $\log$ term is lost. I am not sure yet which approach is more correct, but it gives a different result. Indeed, maximizing $L$ with respect to $\alpha$ (with $a=1/\alpha$) leads to the following quadratic equation: $$ \sum_{i=1}^n \frac{x_i^2 a^2 -x_i a}{\delta_i^2}-n=0 $$ Let's denote $$ \bar x=\sum_{i=1}^n \frac{x_i}{n\delta_i^2},\qquad \bar {x^2}=\sum_{i=1}^n \frac{x_i^2}{n\delta_i^2} $$

So $$ \bar {x^2}\, a^2-\bar x\, a -1=0. $$
The equation has two roots, but the root corresponding to the maximum is $$ a= \frac{\bar x+\sqrt{(\bar x)^2+4\,\bar {x^2}}}{2\,\bar{x^2}} $$ and finally $$ \alpha=1/a=\frac{2\,\bar{x^2}}{\bar x+\sqrt{(\bar x)^2+4\,\bar {x^2}}} $$ The result is even more complicated and less obvious than the original, which also makes sense. An analysis of bias, mean and variance will follow.
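The maximizer can be cross-checked without any hand-derived closed form by maximizing the exact Gaussian log-likelihood numerically. A sketch under the same hypothetical setup (assumed $\alpha=5$, four sensors), using a brute-force grid so no algebra can slip in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup (same model): x_i ~ N(alpha, (alpha * delta_i)^2).
alpha_true = 5.0
delta = np.array([0.01, 0.02, 0.05, 0.1])
x = alpha_true + alpha_true * delta * rng.standard_normal(delta.size)

# Exact per-observation Gaussian log-density, summed, as a function of candidate alpha.
# Written straight from the density: -0.5 * z^2 - log(sigma), up to a constant.
grid = np.linspace(0.5 * x.mean(), 1.5 * x.mean(), 20001)
sigma = grid[:, None] * delta[None, :]               # candidate sd per sensor
z = (x[None, :] - grid[:, None]) / sigma             # standardized residuals
loglik = np.sum(-0.5 * z**2 - np.log(sigma), axis=1)

alpha_mle = grid[np.argmax(loglik)]
print(alpha_mle)
```

Because the log term only shifts the optimum slightly when all $\delta_i\ll 1$, the numeric MLE stays close to the true value and to the variant-1 estimate.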

UPDATE VARIANT 3

Analysing the problem, I came to the conclusion that the error in the measurements cannot depend on the estimated $\alpha$; it depends on the "true $\alpha$" instead. This leads to the simpler equation $$ x_i=\alpha+\delta_i\alpha^*\epsilon_i $$ where $\alpha^*$ is the true $\alpha$. The solution then becomes very simple and natural: we have to minimize $$ \sum_i \frac{(x_i-\alpha)^2}{\delta_i^2} $$ which leads immediately to the nice solution pointed out by Did: $$ \alpha=\frac{\sum_i x_i /\delta^2_i}{\sum_i 1/\delta_i^2} $$
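Variant 3 is just the inverse-variance weighted mean, which is a one-liner. A minimal sketch, again under the hypothetical setup (assumed $\alpha^*=5$, four sensors):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: noise scale fixed by the *true* alpha, as in variant 3.
alpha_true = 5.0
delta = np.array([0.01, 0.02, 0.05, 0.1])
x = alpha_true + alpha_true * delta * rng.standard_normal(delta.size)

# Inverse-variance weighted mean: each sensor weighted by 1/delta_i^2.
w = 1.0 / delta**2
alpha_hat = np.sum(w * x) / np.sum(w)
print(alpha_hat)
```

Note that the weights only need the relative accuracies $\delta_i$, not $\alpha^*$ itself, since the common factor ${\alpha^*}^2$ cancels in the ratio.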

Means and Variances of the estimators

Since $x_i\sim N(\alpha^*,(\alpha^*\delta_i)^2)$, we have $E(x_i)=\alpha^*$ and $E(x_i^2)={\alpha^*}^2(1+\delta_i^2)$. Higher moments become more complicated, and the exact means and variances of the resulting ratios are a mess. I can only say that variants 1 and 2 will be biased, even asymptotically. Their variances will still converge to zero as $1/n$, simply because the variances of the ($1/n$-scaled) numerator and denominator converge to zero, so the variance of the ratio converges to zero too.

However, for variant 3 everything is easy and straightforward: $$ E(x_i)=\alpha^*, \qquad Var(x_i)={\alpha^*}^2\delta_i^2 $$

$$ E(\alpha)=\alpha^*\frac{\sum_i 1/\delta_i^2}{\sum_i 1/\delta_i^2}=\alpha^* $$ Since the variance of a sum of independent terms equals the sum of the variances, $$ Var(\alpha)={\alpha^*}^2\frac{\sum_i 1/\delta_i^2}{(\sum_i 1/\delta_i^2)^2}=\frac{{\alpha^*}^2}{\sum_i 1/\delta_i^2} $$
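These two formulas can be verified by Monte Carlo. A sketch under the same hypothetical setup: repeat the variant-3 estimator many times and compare the empirical mean and variance with $\alpha^*$ and ${\alpha^*}^2/\sum_i 1/\delta_i^2$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: true alpha and assumed sensor accuracies.
alpha_true = 5.0
delta = np.array([0.01, 0.02, 0.05, 0.1])
w = 1.0 / delta**2

# Many independent replications of the experiment, one row each.
n_rep = 20000
x = alpha_true + alpha_true * delta * rng.standard_normal((n_rep, delta.size))

# Variant-3 (inverse-variance weighted mean) estimate per replication.
alpha_hat = (x * w).sum(axis=1) / w.sum()

emp_mean = alpha_hat.mean()
emp_var = alpha_hat.var()
theory_var = alpha_true**2 / w.sum()
print(emp_mean, emp_var, theory_var)
```

The empirical mean should match $\alpha^*$ (unbiasedness) and the empirical variance should match the closed form up to Monte Carlo noise.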