Why use elementary matrices?

I just started learning linear algebra, and I'm having a hard time figuring out why creating an elementary matrix to perform row operations on another matrix is necessary when we could just perform the row operations on the matrix itself. Also, which of the two methods is more efficient?
It's obviously easier to just perform the elementary row operation on the matrix instead of creating a whole new matrix to represent the elementary row operation and performing a matrix-matrix multiplication. The reason linear algebra courses define elementary matrices is to help prove things about elementary row operations.
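To make the comparison concrete, here is a minimal NumPy sketch (the matrix and the factor $1.5$ are my own illustration, not from the question) showing that left-multiplying by an elementary matrix produces exactly the same result as performing the row operation in place:

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])

# Do the row operation directly: add 1.5 times row 0 to row 1, in place.
B_direct = A.copy()
B_direct[1] += 1.5 * B_direct[0]

# Do the same via an elementary matrix: the identity with one extra entry,
# followed by a full matrix-matrix product.
E = np.eye(3)
E[1, 0] = 1.5
B_elementary = E @ A

assert np.allclose(B_direct, B_elementary)  # identical results
```

The in-place update touches one row, which is $O(n)$ arithmetic; forming $E$ and taking a dense product costs $O(n^3)$ if done naively. So nobody row-reduces by actually multiplying matrices; the elementary matrix is a proof device.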
For instance, when solving a system of equations $Ax = b$ using Gauss-Jordan elimination, how do you know that performing elementary row operations won't change the set of solutions?
If we let $E$ be the elementary matrix corresponding to a row operation, then trivially, if $Ax = b$, then $EAx = Eb$. So any solution to the original system is still a solution after we've performed an elementary row operation. But how do we know that we haven't added any solutions? We know because the elementary matrix $E$ corresponding to each elementary row operation is invertible (check this), so $EAx = Eb$ implies that $E^{-1}EAx = E^{-1}Eb$, i.e. $Ax = b$. Hence, any solution to $EAx = Eb$ is also a solution to $Ax = b$.
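To spell out the "check this" for one type of operation: the elementary matrix that adds $c$ times row $1$ to row $2$ is undone by the one that subtracts $c$ times row $1$ from row $2$, because the inverse of a row operation is itself a row operation. In the $2 \times 2$ case:

$$E = \begin{pmatrix} 1 & 0 \\ c & 1 \end{pmatrix}, \qquad E^{-1} = \begin{pmatrix} 1 & 0 \\ -c & 1 \end{pmatrix}, \qquad E^{-1}E = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I.$$

The other two types work the same way: a row swap is its own inverse, and scaling a row by $c \neq 0$ is undone by scaling it by $1/c$.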
Also, suppose we want to calculate the determinant of a square matrix $A$. Let's suppose we perform elementary row operations $E_1,E_2,\ldots,E_k$ that transform the matrix into an upper-triangular matrix $U$, so $E_kE_{k-1}\cdots E_2E_1A = U$. Then, since the determinant of a product of square matrices is the product of determinants, we have $\det(E_k)\det(E_{k-1})\cdots\det(E_2)\det(E_1)\det(A) = \det(U)$. The determinants of elementary row matrices are easy to compute, and since $U$ is upper-triangular, $\det(U)$ is just the product of the diagonal entries. So we know we can find the determinant of $A$ easily by row reducing it into upper-triangular form.
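Here is a minimal sketch of that determinant recipe (my own code, not from the answer), reducing to upper-triangular form with row swaps and row additions while tracking only a sign, since a swap matrix has determinant $-1$ and a row-addition matrix has determinant $1$:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via row reduction to upper-triangular form.

    Row swaps multiply the determinant by -1; row additions leave it alone.
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        # Partial pivoting: swap the largest entry of the column into place,
        # recording the sign flip (the swap's elementary matrix has det -1).
        p = j + int(np.argmax(np.abs(U[j:, j])))
        if U[p, j] == 0.0:
            return 0.0  # the whole column is zero, so A is singular
        if p != j:
            U[[j, p]] = U[[p, j]]
            sign = -sign
        # Clear the entries below the pivot with row additions (det 1 each).
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    # det(E_k)...det(E_1)det(A) = det(U) gives det(A) = sign * prod(diag U).
    return sign * float(np.prod(np.diag(U)))

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
assert np.isclose(det_by_elimination(A), np.linalg.det(A))
```

If the reduction also scaled a row by some $c$, that elementary matrix would contribute a factor of $c$ to the product in exactly the same way; the sketch simply never needs to scale.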
EDIT: While I was typing this, Qiaochu Yuan posted a comment which is a good summary of this answer.