Let $\textbf{x} \in \mathbb{R}^n$ be the parameter vector we want to find and let $\textbf{b} \in \mathbb{R}^m$ be the observation vector:
$$\textbf{Ax=b}$$
Sometimes the observations (and $\textbf{A} \in \mathbb{R}^{m\times n}$) are not sufficient to estimate $\textbf{x}$, or the estimated parameters are less accurate. What metric defines the goodness of the observations?
Added:
Sorry for the missing details!
I am assuming the system is overdetermined ($m \gg n$).
$(\textbf{A}^T\textbf{A})^{-1}$ might exist, but I am assuming it could be nearly singular, in which case some of the estimated parameters in $\textbf{x}$ are unstable. I am trying to detect which components of the state $\textbf{x}$ are less observed.
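One common way to detect this is via the SVD of $\textbf{A}$: small singular values correspond to directions in parameter space that the observations barely constrain, and the associated right singular vectors indicate which components of $\textbf{x}$ are affected. A minimal NumPy sketch (the function name and threshold are my own choices):

```python
import numpy as np

def observability_report(A, tol=1e-8):
    """Flag which components of x are poorly observed.

    Uses the SVD A = U S V^T: singular values below tol * s_max mark
    nearly-singular directions V[i, :] in parameter space; the entries
    of those right singular vectors show which parameters participate.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    weak = s < tol * s[0]  # mask of nearly-singular directions
    if weak.any():
        # Per-parameter participation in the weak directions (in [0, 1]).
        weakness = np.sqrt((Vt[weak] ** 2).sum(axis=0))
    else:
        weakness = np.zeros(A.shape[1])
    return s, weakness

# Example: the 3rd column of A is (almost) a copy of the 1st, so the
# direction (1, 0, -1)/sqrt(2) is essentially unobservable.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
A[:, 2] = A[:, 0] + 1e-12 * rng.standard_normal(100)
s, weakness = observability_report(A)
```

Here `weakness` is large (near $1/\sqrt{2}$) for parameters 1 and 3 and near zero for parameter 2, matching the construction.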
To give you more details, I have different methods that are used to create $\textbf{A}, \textbf{b}$. Some of the methods do not give enough observations to fully estimate $\textbf{x}$. As I have ground truth, I can tell which one is best, but I would like to explain why some of them are not good by showing an additional metric. For example,
$$f(\textbf{A},\textbf{b},\text{method}_A)=\begin{bmatrix} 1\\ 100\\ 50 \end{bmatrix}\quad\text{which means}\quad\begin{bmatrix} \text{near singular}\\ \text{good}\\ \text{moderate} \end{bmatrix}$$
$$f(\textbf{A},\textbf{b},\text{method}_B)=\begin{bmatrix} 100\\ 100\\ 50 \end{bmatrix}\quad\text{which means}\quad\begin{bmatrix} \text{good}\\ \text{good}\\ \text{moderate} \end{bmatrix}$$ where $f(\cdot) \in \mathbb{R}^n$. In this example, method B is better.
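One candidate for such an $f$ (hypothetical; the 0–100 scaling is arbitrary) is the inverse standard deviation of each parameter, read off the diagonal of $(\textbf{A}^T\textbf{A})^\dagger$, which for unit-variance noise is proportional to the covariance of the estimate:

```python
import numpy as np

def parameter_scores(A, scale=100.0):
    """Hypothetical per-parameter 'goodness' score f(A) in R^n.

    diag((A^T A)^+) is proportional to the variance of each estimated
    parameter under unit-variance noise; a large variance means that
    parameter is poorly observed.  We report the inverse standard
    deviation, rescaled so the best-observed parameter scores `scale`.
    """
    cov = np.linalg.pinv(A.T @ A)      # (pseudo-)covariance of x
    std = np.sqrt(np.diag(cov))
    score = 1.0 / np.maximum(std, 1e-30)
    return scale * score / score.max()
```

A nearly unobserved parameter then gets a score close to 0, while well-observed ones stay near 100, matching the vectors above.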
I guess the title is confusing readers. Any recommendations for a better title?
Added 2:
\begin{equation} \begin{bmatrix} (\textbf{I}- {^L\textbf{R}_i})& \textbf{R} ^C\textbf{t}_i \end{bmatrix} \begin{bmatrix} \textbf{t} \\ \lambda \end{bmatrix} = {^L\textbf{t}}_i \end{equation}
Observations: $\textbf{R}, {^L\textbf{R}_i}\in SO(3)$, ${^L\textbf{t}_i},{^C\textbf{t}_i}\in \mathbb{R}^3$.
To be estimated: $\textbf{t}\in \mathbb{R}^3$, $\lambda \in \mathbb{R}$. It is a displacement and scale estimation problem.
The observations ${^L\textbf{R}_i},{^L\textbf{t}_i},{^C\textbf{t}_i}$ are all noise-corrupted. Thus, if the observations are not spread out enough, the estimate of $\textbf{t}\in \mathbb{R}^3$ will be inaccurate.
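For concreteness, each observation $i$ contributes one $3\times 4$ block row, so stacking $N$ observations gives $\textbf{A}\in\mathbb{R}^{3N\times 4}$, $\textbf{b}\in\mathbb{R}^{3N}$, to which the analysis above applies. A sketch of the assembly (function and variable names are my own):

```python
import numpy as np

def build_system(R, L_Rs, L_ts, C_ts):
    """Stack [(I - L_R_i), R @ C_t_i] [t; lambda] = L_t_i into A x = b.

    R        : 3x3 rotation (known)
    L_Rs     : list of 3x3 rotations  L_R_i (observations)
    L_ts     : list of 3-vectors      L_t_i (observations, RHS)
    C_ts     : list of 3-vectors      C_t_i (observations)
    Returns A (3N x 4) and b (3N,) for the unknowns x = [t; lambda].
    """
    rows, rhs = [], []
    for L_R, L_t, C_t in zip(L_Rs, L_ts, C_ts):
        block = np.hstack([np.eye(3) - L_R, (R @ C_t).reshape(3, 1)])
        rows.append(block)
        rhs.append(L_t)
    return np.vstack(rows), np.concatenate(rhs)
```

Note that a single block $\textbf{I}-{^L\textbf{R}_i}$ is rank-deficient (its rotation axis is in the null space), so rotations about diverse axes are needed before $\textbf{t}$ is fully observable; that is exactly the "spread" issue above.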
I assume you are referring to the least squares problem for $Ax=b$ when $A \in \mathbb{R}^{m \times n}$, $m>n$, and $A$ has rank $r<n$. In this case the least squares problem, i.e. $\arg \min_x \| Ax-b \|$, does not have a unique solution: given one solution $x_0$ and any $n \in \mathrm{Ker}(A)$, $x_0+n$ is another solution.
Most commonly, people define the solution in this case to be the one of minimal Euclidean norm. Conveniently, this is also the solution provided by $(A^T A)^{\dagger} A^T b$, where $\dagger$ denotes the Moore–Penrose pseudoinverse. In terms of the SVD $A = U \Sigma V^T$, this can be written as $V (\Sigma^T \Sigma)^\dagger \Sigma^T U^T b$, where the pseudoinverse of a diagonal matrix is obtained by taking the reciprocal of each nonzero diagonal entry and preserving all zeros. Thus $(\Sigma^T \Sigma)^\dagger \Sigma^T$ is an $n \times m$ matrix whose diagonal entries are the reciprocals of the nonzero singular values of $A$, with all other entries zero.
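A small NumPy check of this (the rank-deficient example matrix is my own construction): $A^\dagger b$ and $(A^T A)^\dagger A^T b$ coincide, and adding a null-space vector leaves the residual unchanged but increases the norm.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-deficient A: 6x3 with rank 2 (third column = sum of first two).
A = rng.standard_normal((6, 2)) @ np.array([[1.0, 0.0, 1.0],
                                            [0.0, 1.0, 1.0]])
b = rng.standard_normal(6)

x_min = np.linalg.pinv(A) @ b              # A^+ b
x_alt = np.linalg.pinv(A.T @ A) @ A.T @ b  # (A^T A)^+ A^T b

# Both formulas give the same minimal-norm least-squares solution:
assert np.allclose(x_min, x_alt, atol=1e-6)

# n is in Ker(A): adding it keeps the residual but grows the norm,
# because x_min lies in the row space, orthogonal to the kernel.
n = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ n, 0)
assert np.linalg.norm(x_min) < np.linalg.norm(x_min + 0.1 * n)
```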