Assume we have the following matrix equation:
$$ \hat{Y} - Y = \Delta Y = \frac{\partial Y}{\partial X}\Delta X + e $$
where

- $Y$ is our observed data and $\hat{Y}$ are the predictions of a given model (it is a very specific model, and I would like to avoid going into details), so $\Delta Y$ is an $n$-dimensional vector representing the errors of this model,
- $\frac{\partial Y}{\partial X}$ is an $(n \times m)$ matrix of partial derivatives (already obtained numerically, by linear shocks upward and downward),
- $\Delta X$ is an $m$-dimensional vector, and
- $e$ is an error term, omitted here, that captures the missing other partials, error from the fitting procedure, misspecification of parameters in the aforementioned model, etc.

Note that $\Delta Y$ and $\frac{\partial Y}{\partial X}$ are known, and we solve for $\Delta X$. The idea is to find a correction factor for our model.
The final goal is to construct $\hat{X} = (1+\Delta X)X$. One way to solve for $\Delta X$ is to compute the Moore-Penrose pseudo-inverse (using e.g. scipy.linalg.pinv2). The same solution can be obtained by solving in a least-squares fashion (e.g. statsmodels.regression.linear_model.OLS).
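To illustrate the equivalence of the two approaches, here is a minimal sketch with random stand-in data (the shapes $n=50$, $m=5$ are assumptions, not from my actual problem); it uses numpy.linalg.pinv and numpy.linalg.lstsq, which give the same minimum-norm least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the known quantities (assumed shapes: n=50, m=5)
n, m = 50, 5
J = rng.normal(size=(n, m))   # partial derivatives dY/dX, obtained numerically
dY = rng.normal(size=n)       # model errors  Y_hat - Y

# Solution via the Moore-Penrose pseudo-inverse
dX_pinv = np.linalg.pinv(J) @ dY

# Equivalent solution via least squares
dX_lstsq, *_ = np.linalg.lstsq(J, dY, rcond=None)

assert np.allclose(dX_pinv, dX_lstsq)
```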
My question: what other approaches are there for solving for $\Delta X$ in this setting, and is it possible to somehow put more weight on individual components $\Delta X_{i}$?
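To make the weighting part of the question concrete, here is a sketch of one possible interpretation (not necessarily what I need): a Tikhonov-style penalty $\lVert J\,\Delta X - \Delta Y\rVert^2 + \lVert \mathrm{diag}(w)\,\Delta X\rVert^2$, where a larger hypothetical weight $w_i$ shrinks the corresponding $\Delta X_i$ harder. The shapes and weight values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 5
J = rng.normal(size=(n, m))   # stand-in for the partials dY/dX
dY = rng.normal(size=n)       # stand-in for the model errors

# Hypothetical per-component weights: a larger w_i penalizes Delta X_i more
w = np.array([1.0, 1.0, 10.0, 1.0, 1.0])

# Solve the augmented least-squares system, which is equivalent to
# minimizing ||J dX - dY||^2 + ||diag(w) dX||^2
A = np.vstack([J, np.diag(w)])
b = np.concatenate([dY, np.zeros(m)])
dX_w, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Whether weights should enter as a penalty on $\Delta X$ (as above) or as weights on the residuals (as in weighted least squares) is part of what I am unsure about.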