Error propagation in a simple linear model (asked by a non-math-major researcher)

Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)

I have the following linear problem in matrix form $Ax = Y$, where $A$ is a coefficient matrix, $x$ is a vector of unknown parameters, and $Y$ is a vector of observed data. The matrix $A$ is usually over-determined and often rank deficient; it has thousands of rows and columns. I find a solution by applying the pseudo-inverse (computed via the SVD): $x = A^{+}Y$. I compute a condition number (about 5) to learn how the input errors propagate to the output errors. Can I determine, by looking at the matrix $A$, whether each unknown parameter in the vector $x$ is well resolved and whether there is an unwanted correlation between some $x_{i}$ and $x_{j}$? I have done tests using synthetic data and the results are good, but here I would like to know whether anything can be determined from the matrix $A$ alone.
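A minimal NumPy sketch of the setup described in the question (the dimensions, noise level, and rank cutoff below are illustrative assumptions, not taken from the question):

```python
import numpy as np

# Illustrative over-determined, rank-deficient system (all sizes are placeholders).
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 1500))
A[:, -1] = A[:, 0]                 # duplicate a column to force rank deficiency
x_true = rng.standard_normal(1500)
Y = A @ x_true + 1e-3 * rng.standard_normal(2000)

# Pseudo-inverse solution x = A^+ Y via the SVD, discarding tiny singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = s.max() * max(A.shape) * np.finfo(s.dtype).eps   # rank-cutoff heuristic
r = int(np.sum(s > tol))                               # numerical rank
x = Vt[:r].T @ ((U[:, :r].T @ Y) / s[:r])

# Effective condition number: ratio of largest to smallest retained singular value.
print(r, s[0] / s[r - 1])
```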
There is 1 answer below.
The covariance has the following transformation property:
\begin{equation}
Cov[A\vec{x}] = A\,Cov[\vec{x}]\,A^{\intercal},
\end{equation}
where $Cov[\cdot]$ is a covariance matrix. This property can be proved as follows:
\begin{equation}
Cov[A\vec{x}] = E[(A\vec{x} - E[A\vec{x}])(A\vec{x} - E[A\vec{x}])^{\intercal}] \\
= E[A(\vec{x} - E[\vec{x}])(\vec{x} - E[\vec{x}])^{\intercal}A^{\intercal}] \\
= A\,E[(\vec{x} - E[\vec{x}])(\vec{x} - E[\vec{x}])^{\intercal}]\,A^{\intercal} \\
= A\,Cov[\vec{x}]\,A^{\intercal},
\end{equation}
where $E[\cdot]$ is the expectation. In your case this gives
\begin{equation}
A\,Cov[\vec{x}]\,A^{\intercal} = Cov[\vec{y}],
\end{equation}
so the covariance matrix of $\vec{x}$ has the form
\begin{equation}
Cov[\vec{x}] = A^{+}\,Cov[\vec{y}]\,(A^{\intercal})^{+},
\end{equation}
where $A^{+}$ is the pseudoinverse of $A$. The diagonal entries of $Cov[\vec{x}]$ tell you how well each parameter $x_{i}$ is resolved, and the off-diagonal entries reveal the unwanted correlations between pairs $x_{i}$, $x_{j}$ that you ask about. Sometimes it is even more convenient to write the answer in terms of inverse covariance matrices (when the inverses exist, i.e. when $A$ has full column rank):
\begin{equation}
Cov^{-1}[\vec{x}] = A^{\intercal}\,Cov^{-1}[\vec{y}]\,A.
\end{equation}
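A short NumPy sketch of this propagation, assuming independent, identically distributed data errors, $Cov[\vec{y}] = \sigma_y^2 I$ (the matrix, `sigma_y`, and `rcond` below are illustrative assumptions). The diagonal of `cov_x` shows how well each parameter is resolved; the off-diagonal entries expose correlations:

```python
import numpy as np

def parameter_covariance(A, sigma_y, rcond=1e-12):
    """Cov[x] = A^+ Cov[y] (A^+)^T, with Cov[y] = sigma_y^2 * I."""
    A_pinv = np.linalg.pinv(A, rcond=rcond)
    return sigma_y**2 * (A_pinv @ A_pinv.T)

# Example: per-parameter uncertainties and pairwise correlations.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
cov_x = parameter_covariance(A, sigma_y=0.1)
std_x = np.sqrt(np.diag(cov_x))            # resolution of each x_i
corr = cov_x / np.outer(std_x, std_x)      # correlation matrix
print(std_x)
print(corr[0, 1])                          # correlation between x_0 and x_1
```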
There is also a well-known inequality that links relative perturbations of the solution $\vec{x}$ to relative perturbations of the right-hand side $\vec{y}$ and of the matrix $A$:
\begin{equation}
\frac{\|\Delta\vec{x}\|}{\|\vec{x}\|} \leq \frac{cond(A)}{1 - cond(A)\frac{\|\Delta A\|}{\|A\|}}\left(\frac{\|\Delta\vec{y}\|}{\|\vec{y}\|} + \frac{\|\Delta A\|}{\|A\|}\right),
\end{equation}
where $\Delta\vec{y}$ is the perturbation of the vector $\vec{y}$, $\Delta\vec{x}$ is the resulting perturbation of $\vec{x}$, and $\Delta A$ is the perturbation of the matrix $A$. If the matrix perturbation is zero, the inequality takes the simpler form
\begin{equation}
\frac{\|\Delta\vec{x}\|}{\|\vec{x}\|} \leq cond(A)\,\frac{\|\Delta\vec{y}\|}{\|\vec{y}\|}.
\end{equation}
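A quick numerical check of the simpler bound ($\Delta A = 0$), using the operator 2-norm; the matrix sizes and perturbation scale are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
y = A @ x_true                                # consistent right-hand side

A_pinv = np.linalg.pinv(A)
dy = 1e-4 * rng.standard_normal(100)          # perturbation of y
x = A_pinv @ y
dx = A_pinv @ (y + dy) - x                    # resulting perturbation of x

cond = np.linalg.norm(A, 2) * np.linalg.norm(A_pinv, 2)   # ||A|| * ||A^+||
lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = cond * np.linalg.norm(dy) / np.linalg.norm(y)
print(lhs <= rhs)                             # True: the bound holds
```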
Both of these inequalities can be obtained using the definition of the condition number, $cond(A) = \|A^{+}\|\,\|A\|$.
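For completeness, here is a sketch of how the simpler bound follows in the consistent case $\vec{y} = A\vec{x}$ (the full bound with $\Delta A \neq 0$ takes a little more work). From $\Delta\vec{x} = A^{+}\Delta\vec{y}$ and $\|\vec{y}\| = \|A\vec{x}\| \leq \|A\|\,\|\vec{x}\|$ we get
\begin{equation}
\frac{\|\Delta\vec{x}\|}{\|\vec{x}\|} \leq \frac{\|A^{+}\|\,\|\Delta\vec{y}\|\,\|A\|}{\|\vec{y}\|} = cond(A)\,\frac{\|\Delta\vec{y}\|}{\|\vec{y}\|}.
\end{equation}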