I have multiple measurements of the same property, each with a different but known uncertainty (variance). I would like to combine these measurements in a way that gets me as close to the true value as possible (minimizing the variance). We can assume the errors are Gaussian distributed and unbiased.
Example situation: I've got 3 sensors of the same type. Each of them measures the same property (e.g. rotation about the same axis). This results in a vector x = { x1, x2, x3 }. But each sensor has a different known, static variance (e.g. v = { v1, v2, v3 }). Now I would like to combine the measurements in a way that minimizes the variance (mean squared error) of the result.
I assume that I can just multiply x by some matrix F to get the optimal estimate, i.e. the combination that minimizes the resulting variance. But I don't know how to calculate the matrix F. How do I do that?
Stefan
--- EDIT ---
The measurements are of course correlated (as they measure the same property)! And I can also calculate a covariance matrix...
Since you speak only of the three individual variances and not of the covariances of the measurements, I'll assume that you're implicitly assuming that the measurements are independent.
The variance of the combined estimator $\sum_iw_ix_i$ with $\sum_iw_i=1$ is $\sum_iw_i^2v_i$. You can optimize the weights $w_i$ using a Lagrange multiplier; the objective function is $\sum_iw_i^2v_i-\lambda\sum_iw_i$, and setting the derivative with respect to $w_j$ to zero yields $2w_jv_j=\lambda$. Thus, the optimal weights are inversely proportional to the variances, and normalization leads to
$$w_j=\frac{v_j^{-1}}{\sum_iv_i^{-1}}\;.$$
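The inverse-variance weighting above is easy to sketch numerically. The sensor readings and variances below are made-up example values, not from the question:

```python
import numpy as np

# Hypothetical example: three sensor readings of the same quantity,
# each with a known, static variance.
x = np.array([10.2, 9.8, 10.5])   # measurements x1, x2, x3
v = np.array([0.5, 0.1, 1.0])     # variances v1, v2, v3

# Optimal weights are proportional to the inverse variances,
# normalized so that they sum to 1.
w = (1.0 / v) / np.sum(1.0 / v)

estimate = w @ x                  # combined estimate
combined_var = np.sum(w**2 * v)   # variance of the combined estimate
```

Note that `combined_var` equals $1/\sum_i v_i^{-1}$, which is never larger than the smallest individual variance, so the fused estimate is at least as good as the best single sensor.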
[Edit in response to comment:]
If the covariances are known and non-zero, the variance of a linear combination $w^\top x$ is $w^\top\Sigma w$, with $\Sigma$ the covariance matrix. Then the same Lagrange multiplier approach of minimizing the objective function $w^\top\Sigma w-\lambda\sum_iw_i$ leads to $2\Sigma w=\lambda e$, where $e$ is the vector with all entries $1$. Generally $\Sigma$ will be invertible, so this yields $w=\lambda\Sigma^{-1}e/2$, and normalization yields
$$ w=\frac{\Sigma^{-1}e}{e^\top\Sigma^{-1}e}\;. $$
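A quick numerical sketch of the correlated case, with a made-up positive-definite covariance matrix (in practice, solving the linear system $\Sigma s = e$ is preferable to forming $\Sigma^{-1}$ explicitly):

```python
import numpy as np

# Hypothetical covariance matrix for three correlated sensors
# (symmetric positive definite), and example readings.
Sigma = np.array([[0.50, 0.10, 0.00],
                  [0.10, 0.10, 0.05],
                  [0.00, 0.05, 1.00]])
x = np.array([10.2, 9.8, 10.5])
e = np.ones(3)

# w = Sigma^{-1} e / (e^T Sigma^{-1} e), computed via a linear solve.
s = np.linalg.solve(Sigma, e)   # s = Sigma^{-1} e
w = s / (e @ s)                 # normalize so the weights sum to 1

estimate = w @ x
combined_var = w @ Sigma @ w    # equals 1 / (e^T Sigma^{-1} e)
```

With a diagonal $\Sigma$ this reduces exactly to the inverse-variance weights from the independent case.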