Why is $x = \mathbf A^{\dagger} b$ the solution that minimizes $\|x\|_2$ among all minimizers of $\|\mathbf A x - b\|_2$?

For an arbitrary matrix $\mathbf A \in \mathbb R^{m \times n}$ with $\operatorname{rank}(\mathbf A) = r$, consider the least-squares problem $$\min_x \|\mathbf A x - b\|_2.$$ From the SVD $\mathbf A = \mathbf U \mathbf \Sigma \mathbf V^T$, the pseudoinverse of $\mathbf A$ is $$\mathbf A^{\dagger} = \mathbf V \mathbf \Sigma^{\dagger} \mathbf U^T, \qquad \text{where} \qquad \mathbf \Sigma^{\dagger} = \begin{bmatrix} \mathbf \Sigma_{r \times r}^{-1} & \mathbf 0 \\ \mathbf 0 & \mathbf 0 \end{bmatrix}_{n \times m}.$$ I can verify that $x = \mathbf A^{\dagger} b$ reduces to the canonical linear least-squares solution $(\mathbf A^T \mathbf A)^{-1} \mathbf A^T b$ when $m = n = r$, but I don't see why it is the solution that minimizes $\|x\|_2$.
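For concreteness, here is a minimal NumPy sketch (not from the original post; the dimensions, seed, and random data are illustrative assumptions) that assembles $\mathbf A^{\dagger}$ from the SVD exactly as in the formula above and checks it against NumPy's own `pinv` and the minimum-norm solution returned by `lstsq`:

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed; illustrative data only
m, n, r = 8, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix
b = rng.standard_normal(m)

# Assemble A^dagger = V Sigma^dagger U^T from the SVD, as in the formula above.
U, s, Vt = np.linalg.svd(A)             # U: m x m, s: min(m, n) values, Vt: n x n
Sigma_pinv = np.zeros((n, m))
Sigma_pinv[:r, :r] = np.diag(1.0 / s[:r])   # invert only the r nonzero singular values
x = Vt.T @ Sigma_pinv @ (U.T @ b)

# Matches NumPy's pseudoinverse and the minimum-norm solution from lstsq.
assert np.allclose(x, np.linalg.pinv(A, rcond=1e-10) @ b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=1e-10)[0])
```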
The column space of $A$ is the span of the first $r$ columns of $U$ (since $A = U \Sigma V^\top$ and only the first $r$ singular values are nonzero); let $U_r$ be this $m \times r$ matrix. Because the columns of $U_r$ are orthonormal, the projection of $b$ onto the column space of $A$ is $\hat{b} := U_r (U_r^\top U_r)^{-1} U_r^\top b = U_r U_r^\top b$.
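As a quick numerical sanity check of this step (a sketch under the same illustrative random setup as above, not part of the original answer), the projection $U_r U_r^\top b$ agrees with $A$ applied to a least-squares minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)          # illustrative data, fixed seed
m, n, r = 8, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A)
U_r = U[:, :r]                          # orthonormal basis for col(A)
b_hat = U_r @ U_r.T @ b                 # projection of b onto col(A)

# b_hat equals A applied to any least-squares minimizer of ||Ax - b||_2.
x_ls = np.linalg.lstsq(A, b, rcond=1e-10)[0]
assert np.allclose(A @ x_ls, b_hat)
```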
If $x$ is a solution to the optimization problem, then $Ax = \hat{b}$, since $\hat{b}$ is the unique point in the column space of $A$ closest to $b$.
Thus, we can consider a second optimization problem: minimize $\|x\|_2$ subject to $Ax = \hat{b}$.
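One way to see this reduction concretely: the residual $b - \hat{b}$ is orthogonal to the column space of $A$, so every $x$ with $Ax = \hat{b}$ attains the same minimal residual. A small check (same illustrative setup as before; a sketch, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A)
U_r = U[:, :r]
b_hat = U_r @ U_r.T @ b                 # projection of b onto col(A)

# b - b_hat is orthogonal to col(A), so every x with A x = b_hat attains
# the same (minimal) value of ||A x - b||_2.
assert np.allclose(A.T @ (b - b_hat), 0)
```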
First, we check that $x := A^\dagger b$ is feasible, i.e. that $A A^\dagger b = \hat{b}$. Using $V^\top V = I$, $$A A^\dagger b = U \Sigma V^\top V \Sigma^\dagger U^\top b = U \begin{bmatrix} I_{r \times r} & \mathbf 0 \\ \mathbf 0 & \mathbf 0_{(m-r) \times (m-r)} \end{bmatrix} U^\top b = U_r U_r^\top b = \hat{b}.$$
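The same feasibility identity can be checked numerically (again a sketch with the illustrative data from above, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A)
U_r = U[:, :r]
b_hat = U_r @ U_r.T @ b

# A A^dagger b collapses to U_r U_r^T b = b_hat, so A^dagger b is feasible.
assert np.allclose(A @ (np.linalg.pinv(A, rcond=1e-10) @ b), b_hat)
```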
Next we justify that it has minimum norm. Every feasible $x$ can be written as $x = A^\dagger b + z$ for some $z$ in the nullspace of $A$ (since $A(x - A^\dagger b) = \hat{b} - \hat{b} = 0$), and the nullspace of $A$ is exactly the span of the last $n - r$ columns of $V$. On the other hand, $A^\dagger b = V \Sigma^\dagger U^\top b$ lies in the span of the first $r$ columns of $V$. Since the columns of $V$ are orthonormal, $A^\dagger b$ and $z$ are orthogonal, so $$\|x\|_2^2 = \|A^\dagger b\|_2^2 + \|z\|_2^2.$$ Choosing $z = 0$ therefore minimizes $\|x\|_2$, so $A^\dagger b$ is the minimum-norm solution.
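Finally, the orthogonality-and-Pythagoras argument can be verified numerically by perturbing $A^\dagger b$ with a random nullspace vector (an illustrative sketch; the dimensions and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A)
x_min = np.linalg.pinv(A, rcond=1e-10) @ b   # candidate minimum-norm solution
N = Vt[r:, :].T                              # last n - r columns of V span null(A)

z = N @ rng.standard_normal(n - r)           # a random nullspace element
x = x_min + z                                # another feasible solution
assert np.allclose(A @ z, 0)                 # z really is in the nullspace
assert abs(x_min @ z) < 1e-10                # A^dagger b is orthogonal to z
assert np.isclose(x @ x, x_min @ x_min + z @ z)  # ||x||^2 = ||x_min||^2 + ||z||^2
```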