This question is somewhat connected to a previous one I posted two months ago concerning solving linear, over-determined systems of equations of the form $Ax = b$, where $A$ is a matrix of coefficients of dimension $m \times k$, $(m > k)$, $x$ is the vector of unknowns of dimension $k \times 1$ and, consequently, $b$ is a vector of constants of dimension $m \times 1$.
If I now add a certain amount of noise $n$, modelled as a uniform random variable, to $b$, we obtain a different vector of constants $b' = b + n$. This transforms the previous system into a different one of the form $Ax' = b'$, with $x' \neq x$.
We can solve such a system by applying least-squares minimization to it. However, I am looking for a way to reduce the difference $\epsilon = x' - x$ between the two vectors. I wonder whether increasing the number of equations (i.e., the dimension $m$) would help reduce this quantity, at least up to a certain bound.
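To make the question concrete, here is a minimal sketch of what I mean (all names and the noise level are made up for illustration): solve the noisy system with least squares for increasing $m$ and record $\|x' - x\|$.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5                                   # number of unknowns
x_true = rng.standard_normal(k)         # the "clean" solution x

errs = []
for m in (10, 100, 1000, 10000):        # increasing number of equations
    A = rng.standard_normal((m, k))
    b = A @ x_true                      # exact right-hand side
    n = rng.uniform(-0.1, 0.1, size=m)  # uniform noise added to b
    x_noisy, *_ = np.linalg.lstsq(A, b + n, rcond=None)
    errs.append(np.linalg.norm(x_noisy - x_true))
    print(m, errs[-1])
```

In this sketch the error shrinks as $m$ grows (roughly like $1/\sqrt{m}$ for zero-mean noise), which is the kind of behaviour I am asking about.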
Another possible approach I came up with consists of extracting $k$ linearly independent rows from $A$ (hence transforming it into a square matrix $A'$) and solving the resulting system $A'x' = b'$ directly. I would then repeat this process a given number of times, say $M$, and average the results to obtain a reduced value for $\epsilon$. Should there be any advantage in increasing the number of equations to improve the results of least-squares minimization, could this method ever be better (even if we try it again, this time using $Nk$ rows with $N > 1$) than performing the minimization over the whole matrix $A$?
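The resampling idea above can be sketched as follows (again an assumed toy setup, not a vetted method: row subsets are drawn at random rather than checked for independence, relying on random Gaussian $A$ being almost surely full rank):

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, M = 200, 5, 50
x_true = rng.standard_normal(k)
A = rng.standard_normal((m, k))
b_noisy = A @ x_true + rng.uniform(-0.1, 0.1, size=m)

# Baseline: least squares over the whole m x k matrix A.
x_full, *_ = np.linalg.lstsq(A, b_noisy, rcond=None)

# Proposed method: solve M random k x k subsystems and average the solutions.
estimates = []
for _ in range(M):
    idx = rng.choice(m, size=k, replace=False)   # pick k rows at random
    estimates.append(np.linalg.solve(A[idx], b_noisy[idx]))
x_avg = np.mean(estimates, axis=0)

print(np.linalg.norm(x_full - x_true), np.linalg.norm(x_avg - x_true))
```

One caveat I can already see with this sketch: an individual $k \times k$ subsystem can be badly conditioned, so single subsystem solutions occasionally have very large errors, and I am unsure whether averaging $M$ of them can compete with the full least-squares solution.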
Many thanks in advance!