When can a multi-parameter estimation problem be treated as a single-parameter estimation problem?


Suppose we have a simple neural network described by a perceptron. The figure below can be seen as a schematic representation of this model.

[Figure: schematic of a perceptron]

Basically, each input $x_i$ is multiplied by a weight $w_i$, and the weighted sum of the inputs is passed to an activation function $f$, which produces the output $y$. I want to determine the estimation errors of the network's weights using the probability distribution of the output of the activation function. Naturally, I have to use the multi-parameter estimation framework and the Fisher information matrix (FIM), which bounds the errors of the estimators through the Cramér–Rao bound.
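For concreteness, here is a small sketch of what I mean; the specific likelihood is my own assumption (Gaussian output noise with variance $\sigma^2$ and a tanh activation), under which the FIM per sample has the closed form $I_{ij} = \sigma^{-2} f'(w \cdot x)^2 x_i x_j$:

```python
import numpy as np

# Assumed model (not given above): y ~ N(f(w·x), sigma2) with f = tanh.
# Under this likelihood the per-sample Fisher information matrix is
#   I_ij = (1/sigma2) * f'(w·x)^2 * x_i * x_j,
# i.e. an outer product of a single "score" vector, hence rank 1.

def fisher_information_matrix(x, w, sigma2=1.0):
    """FIM for one input x under y ~ N(tanh(w·x), sigma2)."""
    z = np.dot(w, x)
    fprime = 1.0 - np.tanh(z) ** 2       # derivative of tanh
    g = (fprime / np.sqrt(sigma2)) * x   # score-gradient vector
    return np.outer(g, g)                # rank-1 FIM

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.2, 0.3])
I = fisher_information_matrix(x, w)
# The off-diagonal entries are generically non-zero, since I is an outer product.
```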

The first question is: if we assume that all the weights $w_i$ are equal, is it justified to use single-parameter estimation theory and say that the bound is given by the reciprocal of the Fisher information for that single parameter? Basically, if the $w_i$ are identical at all times, does that make them the same random variable even though they correspond to different nodes of the network with different inputs?
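My current understanding (which I would like confirmed) is that tying all weights to one scalar $w$ is a reparameterization with Jacobian $J = (1, \dots, 1)^\top$, so the scalar information would be $I(w) = J^\top I_W J = \sum_{ij} [I_W]_{ij}$. A sketch with made-up numbers:

```python
import numpy as np

# Assumption to be verified: with w_i = w for all i, the Jacobian of the
# map w -> (w, ..., w) is the all-ones vector J, and the scalar Fisher
# information is I(w) = J^T I_W J, i.e. the sum of ALL entries of the
# full FIM (diagonal and off-diagonal alike).

def tied_weight_information(I_full):
    J = np.ones(I_full.shape[0])
    return J @ I_full @ J  # scalar information for the single tied weight

I_full = np.array([[2.0, 0.5, 0.1],   # hypothetical full FIM
                   [0.5, 1.5, 0.3],
                   [0.1, 0.3, 1.0]])
I_w = tied_weight_information(I_full)  # 4.5 (diagonal) + 1.8 (off-diagonal) = 6.3
# The single-parameter Cramér–Rao bound would then be 1 / I_w.
```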

The second question is: is there a general condition under which a multi-parameter estimation problem may be reduced to a single-parameter one (not only for this model)?
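The candidate condition I have in mind (again, my own guess, using the standard reparameterization rule for Fisher information) is that all parameters be a smooth function of one scalar $\eta$:

```latex
% If \theta = g(\eta) with scalar \eta and Jacobian J = dg/d\eta \in \mathbb{R}^n,
% the Fisher information transforms as
\[
  I(\eta) = J^{\top} I(\theta)\, J,
  \qquad
  \operatorname{Var}(\hat{\eta}) \ge \frac{1}{I(\eta)} .
\]
% The tied-weight case above is the special case g(\eta) = (\eta, \ldots, \eta),
% for which J is the all-ones vector.
```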

I must add that the off-diagonal entries of the FIM that I calculated when treating each $w_i$ as a distinct random variable are non-zero, and non-zero off-diagonals generally affect the error bounds in multi-parameter estimation. Is that calculation justified, or should I stick with the single-parameter Fisher information because I actually have only one random variable?
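To illustrate why I am worried about the off-diagonals (hypothetical numbers, just to show the effect): the multi-parameter Cramér–Rao bound for $w_i$ is $[I^{-1}]_{ii}$, which is never smaller than the naive per-parameter bound $1/I_{ii}$, so ignoring the correlations would make my error bars look optimistic.

```python
import numpy as np

# Hypothetical 2x2 FIM with non-zero off-diagonal coupling between weights.
I = np.array([[2.0, 0.8],
              [0.8, 1.5]])

full_bounds = np.diag(np.linalg.inv(I))  # multi-parameter CRB: [I^{-1}]_ii
naive_bounds = 1.0 / np.diag(I)          # single-parameter CRB: 1 / I_ii
# full_bounds[i] >= naive_bounds[i] for every i, with equality only
# when the off-diagonal entries vanish.
```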