From the paper on Generalized Low Rank Models by Stephen Boyd and coauthors, the Frobenius-norm loss function is handled using the SVD. Can someone explain the following equation to me? Is $U^{-1}$ equal to $U^{T}$ here, given that $A = U \Sigma V^{T}$? And if we rearrange it as something like $\Sigma = U^{-1} A V^{-1}$, what is going on? Also, why is it equal to $\Sigma - U^{T} X Y V$? Please guide me. Thanks.
How has the SVD for the Frobenius norm been calculated?

Yes: $U$ and $V$ are orthogonal, so $U^{-1} = U^{T}$ and $V^{-1} = V^{T}$. Beyond that, there is a theorem called the Eckart-Young-Mirsky theorem which states the following.
$$ A = U \Sigma V^{T}, \qquad A_{k} = \sum_{i=1}^{k} \sigma_{i} u_{i} v_{i}^{T}, $$
$$ \| A - A_{k} \|_{2} = \Big\| \sum_{i=k+1}^{n} \sigma_{i} u_{i} v_{i}^{T} \Big\|_{2} = \sigma_{k+1}. $$
Note
$$ \| A - A_{k} \|_{F}^{2} = \| \sum_{i=k+1}^{n} \sigma_{i} u_{i} v_{i}^{T} \|_{F}^{2} = \sum_{i=k+1}^{n} \sigma_{i}^{2} $$
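As a quick sanity check (not from the paper), here is a minimal NumPy sketch of these two statements; the matrix size and the rank $k$ are arbitrary choices for illustration.

```python
# Minimal check of Eckart-Young-Mirsky: the truncated SVD is the best rank-k
# approximation, with spectral error sigma_{k+1} and squared Frobenius error
# equal to the sum of the discarded sigma_i^2. (Illustrative sketch only.)
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))   # arbitrary test matrix
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # rank-k truncation

print(np.linalg.norm(A - A_k, 2), s[k])                        # both equal sigma_{k+1}
print(np.linalg.norm(A - A_k, 'fro')**2, np.sum(s[k:]**2))     # both equal sum of trailing sigma_i^2
```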
Now, using the expression from your question:
$$ \| A - XY \|_{F}^{2} = \| \Sigma - U^{T} XY V \|_{F}^{2} $$
This follows from the unitary invariance of the Frobenius norm: since $U$ and $V$ are orthogonal, $\| A - XY \|_{F} = \| U^{T}(A - XY)V \|_{F} = \| U^{T}AV - U^{T}XYV \|_{F}$, and $U^{T}AV = \Sigma$. That is why $\Sigma$ appears there.
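Here is a small NumPy sketch of that invariance step, with arbitrary factors $X$ and $Y$ chosen only for illustration:

```python
# The Frobenius norm is unchanged by multiplication with orthogonal matrices,
# so ||A - XY||_F equals ||Sigma - U^T X Y V||_F. (Illustrative sketch only.)
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A)        # full SVD: U is 8x8, Vt is 6x6
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, s)         # rectangular Sigma with sigma_i on the diagonal

X = rng.standard_normal((8, 2))    # arbitrary low-rank factors, just for the identity
Y = rng.standard_normal((2, 6))

lhs = np.linalg.norm(A - X @ Y, 'fro')
rhs = np.linalg.norm(Sigma - U.T @ X @ Y @ Vt.T, 'fro')   # V = Vt.T
print(lhs, rhs)                    # equal up to floating-point error
```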
If you define $X, Y$ as in the Eckart-Young-Mirsky theorem, so that $XY$ is the rank-$k$ approximation, we end up with
$$ X = U_{k}\Sigma_{k}^{\frac{1}{2}}, \qquad Y = \Sigma_{k}^{\frac{1}{2}} V_{k}^{T}, \qquad XY = A_{k}, $$
$$ \| \Sigma - U^{T} A_{k} V \|_{F}^{2}. $$
But $U$ and $V$ are orthogonal, so $U^{T} A_{k} V = \Sigma_{k}$ and we end up with
$$ \| \Sigma - \Sigma_{k} \|_{F}^{2} = \sum_{i=k+1}^{n} \sigma_{i}^{2}. $$
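A sketch of this last construction in the same NumPy setting, showing that $XY$ recovers $A_{k}$ and that the squared Frobenius error is the sum of the discarded $\sigma_{i}^{2}$:

```python
# Split the truncated SVD into X = U_k Sigma_k^{1/2} and Y = Sigma_k^{1/2} V_k^T,
# so that XY = A_k and ||A - XY||_F^2 = sum_{i>k} sigma_i^2. (Illustrative sketch.)
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 6))
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
X = U[:, :k] @ np.diag(np.sqrt(s[:k]))     # X = U_k Sigma_k^{1/2}
Y = np.diag(np.sqrt(s[:k])) @ Vt[:k, :]    # Y = Sigma_k^{1/2} V_k^T
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.allclose(X @ Y, A_k))                                 # True: XY is the rank-k truncation
print(np.linalg.norm(A - X @ Y, 'fro')**2, np.sum(s[k:]**2))   # matches sum of trailing sigma_i^2
```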