Pseudoinverse with the minimum $\ell_2$ norm for each column

Consider a matrix $A \in \mathbb{R}^{m\times n}$ with $m < n$, and suppose there exists at least one matrix $B = [b_{ij}] \in \mathbb{R}^{n\times m}$ such that $AB = I$. How should $B$ be chosen so that $AB = I$ and $$\max_j \sum_i b_{ij}^2 $$ is minimised? From trial and error in MATLAB, it seems that the optimal $B^*$ is always the Moore–Penrose pseudoinverse of $A$. Would someone kindly tell me why? Thank you very much!
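To make the "trial and error" concrete, here is a minimal NumPy sketch of the kind of experiment described above (an illustration only, not the asker's actual MATLAB code; the matrix sizes, random seed, and the choice of a second right inverse are arbitrary assumptions):

```python
import numpy as np

# Illustrative sizes (an assumption, not from the question): a random 3 x 5
# matrix, which has full row rank with probability 1.
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))

# Candidate 1: the Moore-Penrose pseudoinverse; since A has rank m, A @ B = I.
B_pinv = np.linalg.pinv(A)

# Candidate 2: a different right inverse, built by inverting the first m
# columns of A and padding with zeros (valid whenever that m x m block is invertible).
B_block = np.zeros((n, m))
B_block[:m, :] = np.linalg.inv(A[:, :m])

for name, B in [("pinv", B_pinv), ("block", B_block)]:
    print(name,
          "A@B == I:", np.allclose(A @ B, np.eye(m)),
          "max_j sum_i b_ij^2 =", float((B ** 2).sum(axis=0).max()))
```

On examples like this, the pseudoinverse never produces a larger value of $\max_j \sum_i b_{ij}^2$ than the other right inverse, which is the observation the solution below explains.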
There is 1 best solution below.
Since $AB=I$, the range of $A$ is all of $\mathbb{R}^{m}$ (every $y\in\mathbb{R}^{m}$ can be written as $y=A(By)$), and thus the rank of $A$ is $m$.
We can use the compact form of the SVD to write $A$ as
$A=U\Sigma V^{T}$
where $U$ is $m$ by $m$ and orthogonal, $\Sigma$ is $m$ by $m$, diagonal, and non-singular, and $V$ is $n$ by $m$ with orthonormal columns.
The Moore-Penrose pseudoinverse of $A$ is
$A^{\dagger}=V\Sigma^{-1}U^{T}$.
It's easy to show that $AA^{\dagger}=U\Sigma V^{T}V\Sigma^{-1}U^{T}=UU^{T}=I$, using $V^{T}V=I$ and the orthogonality of $U$. Note that this is not true if $A$ has rank less than $m$.
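As a sanity check, here is a small NumPy sketch of this construction on a random full-row-rank example (the sizes are arbitrary; `A_dag` is the $A^{\dagger}$ defined above):

```python
import numpy as np

# A random full-row-rank example (sizes are just for illustration).
rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))

# Compact ("economy") SVD: U is m x m orthogonal, S holds the m singular
# values, Vt is m x n, so V = Vt.T is n x m with orthonormal columns.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

# A_dagger = V * Sigma^{-1} * U^T
A_dag = V @ np.diag(1.0 / S) @ U.T

print("matches np.linalg.pinv:", np.allclose(A_dag, np.linalg.pinv(A)))
print("A @ A_dagger == I:     ", np.allclose(A @ A_dag, np.eye(m)))
```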
Any matrix $B$ such that $AB=I$ can be written as
$B=A^{\dagger}+N$
where each column of $N=B-A^{\dagger}$ lies in the null space of $A$, since $AN=AB-AA^{\dagger}=I-I=0$.
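For illustration, a short NumPy sketch of this parametrization, using the last $n-m$ right singular vectors from the full SVD as a basis for the null space of $A$ (any other basis would do; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.standard_normal((m, n))
A_dag = np.linalg.pinv(A)

# An orthonormal basis for the null space of A: the last n - m right singular
# vectors from the full SVD (stacked as the columns of Z below).
_, _, Vt_full = np.linalg.svd(A, full_matrices=True)
Z = Vt_full[m:, :].T                      # n x (n - m), satisfies A @ Z ~ 0

# Any N whose columns lie in null(A) gives another right inverse A_dag + N.
N = Z @ rng.standard_normal((n - m, m))   # n x m, columns in null(A)
B = A_dag + N

print("A @ N ~ 0:", np.allclose(A @ N, 0))
print("A @ B == I:", np.allclose(A @ B, np.eye(m)))
```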
The columns of $A^{\dagger}=V\Sigma^{-1}U^{T}$ lie in the column space of $V$, while the columns of $N$ lie in the null space of $A$, which is the orthogonal complement of the column space of $V$. Thus each column $j$ of $A^{\dagger}$ is orthogonal to the corresponding column of $N$:
$A^{\dagger}_{j} \perp N_{j}.$
By the Pythagorean theorem,
$\| A^{\dagger}_{j}+N_{j} \|_{2}^{2}= \| A^{\dagger}_{j}\|_{2}^{2}+\|N_{j} \|_{2}^{2}.$
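A quick numerical check of this column-wise orthogonality and the resulting Pythagorean identity (again a sketch on a random example with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 5
A = rng.standard_normal((m, n))
A_dag = np.linalg.pinv(A)

# Another right inverse B = A_dag + N, with the columns of N in null(A).
_, _, Vt_full = np.linalg.svd(A, full_matrices=True)
N = Vt_full[m:, :].T @ rng.standard_normal((n - m, m))
B = A_dag + N

def col_sq(M):
    """Column-wise squared 2-norms, i.e. sum_i M[i, j]^2 for each j."""
    return (M ** 2).sum(axis=0)

# Each column of A_dag is orthogonal to the corresponding column of N ...
print("columns orthogonal:  ", np.allclose((A_dag * N).sum(axis=0), 0))
# ... so the squared norms add column by column.
print("Pythagorean identity:", np.allclose(col_sq(B), col_sq(A_dag) + col_sq(N)))
```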
Since the identity above holds for every column, $\|B_{j}\|_{2}^{2}\ge\|A^{\dagger}_{j}\|_{2}^{2}$ for each $j$, so
$\max_{j} \sum_{i=1}^{n}B_{i,j}^{2}$
is minimized by taking $B=A^{\dagger}$. Furthermore, summing over the columns gives
$\| B \|_{F}^{2}=\| A^{\dagger} \|_{F}^{2}+\| N \|_{F}^{2}.$
Clearly, $B=A^{\dagger}$ minimizes $\| B \|_{F}^{2}$.
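Finally, a brief numerical check of both conclusions (the max column norm and the Frobenius norm), on the same kind of random example as above:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 5
A = rng.standard_normal((m, n))
A_dag = np.linalg.pinv(A)

_, _, Vt_full = np.linalg.svd(A, full_matrices=True)
N = Vt_full[m:, :].T @ rng.standard_normal((n - m, m))  # columns in null(A)
B = A_dag + N                                           # another right inverse

col_sq = lambda M: (M ** 2).sum(axis=0)  # column-wise squared 2-norms
print("max_j ||B_j||^2     :", float(col_sq(B).max()))
print("max_j ||A_dag_j||^2 :", float(col_sq(A_dag).max()))
print("Frobenius identity  :",
      np.isclose(np.linalg.norm(B, "fro") ** 2,
                 np.linalg.norm(A_dag, "fro") ** 2 + np.linalg.norm(N, "fro") ** 2))
```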