I have been taught that if there is a $0$ where a pivot should be then no LU decomposition exists, even if you row swap. Why is this true? If you can row swap to get all nonzero pivots, then why is there no LU decomposition? Here is an example where we need to row swap in order to have a nonzero pivot: $$ \begin{bmatrix} 0 & 1 \\ 2 & 3 \\ \end{bmatrix} $$
Why is row swapping not allowed when finding the LU decomposition?
9.9k Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 3 best solutions below.
Look at the original matrix
$$ \begin{bmatrix} 0 & 1 \\ 2 & 3 \\ \end{bmatrix} $$
When you want to calculate the multiplier $l_{21} = a_{21}/u_{11}$, you would have to divide by the pivot $u_{11} = 0$. If you first swap the rows to get
$$ \begin{bmatrix} 2 & 3 \\ 0 & 1 \\ \end{bmatrix} $$
the elimination goes through, but what you have factored is the swapped matrix, not the original one.
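A minimal sketch of this in plain Python (the names `doolittle_2x2`, `l21`, and `u22` are mine, not from the answer): on the question's matrix the very first multiplier divides by the zero pivot.

```python
# Doolittle-style elimination on a 2x2 matrix, with no pivoting.
# The multiplier l21 = a[1][0] / a[0][0] divides by the first pivot,
# so a zero in the (1,1) position makes the computation break down.
def doolittle_2x2(a):
    l21 = a[1][0] / a[0][0]          # multiplier for row 2
    u22 = a[1][1] - l21 * a[0][1]    # updated (2,2) entry after elimination
    L = [[1, 0], [l21, 1]]           # unit lower triangular
    U = [[a[0][0], a[0][1]], [0, u22]]  # upper triangular
    return L, U

try:
    doolittle_2x2([[0, 1], [2, 3]])  # pivot is 0
except ZeroDivisionError:
    print("zero pivot: elimination fails without a row swap")

# After swapping the rows it succeeds -- but for the swapped matrix.
L, U = doolittle_2x2([[2, 3], [0, 1]])
```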
You are possibly the victim of a small misunderstanding, which is the result of reading two different texts: one written by a pure mathematician, and one written by a mathematician who specializes in numerical analysis.
The pure mathematician will say that the matrix $A$ has an $LU$ factorization if it can be written in the form $A = LU$, where $L$ is a unit lower triangular matrix and $U$ is upper triangular. The textbook will typically show that the factorization exists and is unique if and only if $A$ and all its leading principal submatrices are nonsingular. Since the factorization is unique, it becomes appropriate to refer to the LU decomposition of $A$.
The numerical analyst, however, is interested in giving you a numerically reliable way of solving linear systems $Ax = b$. Towards that end, the textbook will prove that for any nonsingular matrix $A$ there exist a permutation matrix $P$, a unit lower triangular matrix $L$, and an upper triangular matrix $U$ such that $PA = LU$. The numerical analyst will refer to this as an $LU$ factorization of the matrix $A$, as there is some freedom in choosing the permutation.
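To make the $PA = LU$ statement concrete for the question's matrix (an illustration I am adding, using numpy): swapping the two rows gives a matrix that is already upper triangular, so $L$ is the identity.

```python
import numpy as np

# For the question's matrix, A itself has no LU factorization, but the
# permuted matrix PA does: PA = LU with L = I and U = PA.
A = np.array([[0., 1.],
              [2., 3.]])
P = np.array([[0., 1.],          # permutation that swaps the two rows
              [1., 0.]])

PA = P @ A                       # [[2, 3], [0, 1]] -- already upper triangular
L = np.eye(2)                    # unit lower triangular
U = PA                           # upper triangular

print(np.allclose(P @ A, L @ U))  # True: PA = LU holds
```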
It is entirely possible for the $LU$ decomposition to exist, yet be entirely useless for practical computations, for example when a pivot is nonzero but extremely small.
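A standard illustration of this (my example, not from the answer): the matrix below has nonzero leading principal minors, so its LU decomposition exists, but the tiny pivot produces a huge multiplier that swamps the other entries in floating-point arithmetic.

```python
import numpy as np

# All leading principal minors are nonzero, so A = LU exists exactly.
# In float64, however, the tiny pivot ruins the factorization.
eps = 1e-20
A = np.array([[eps, 1.],
              [1.,  1.]])

l21 = A[1, 0] / A[0, 0]          # huge multiplier, about 1e20
u22 = A[1, 1] - l21 * A[0, 1]    # 1 - 1e20 rounds to -1e20: the 1 is swamped
L = np.array([[1., 0.], [l21, 1.]])
U = np.array([[eps, 1.], [0., u22]])

print(np.allclose(L @ U, A))     # False: (L @ U)[1, 1] is 0, not 1
```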
One way of viewing the LU decomposition is that each of the row multiplication-and-addition operations is given by multiplication by an "elementary matrix", which has 1s on the diagonal and zeros in every off-diagonal entry except one. That special entry's position is determined by which row you are adding to which other row, and its value is the coefficient of the row being added. Because of the way the row operations work (a multiple of a row is only ever added to a row below it), each of these elementary matrices is lower triangular. This implies that their products and inverses are lower triangular as well. So the procedure for finding the LU decomposition formally looks like:
$$E_N E_{N-1} \dots E_1 A = U \Rightarrow E_1^{-1} E_2^{-1} \dots E_N^{-1} U = A.$$
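A small numerical illustration of this identity (the $2\times 2$ matrix here is my own choice): one elimination step, written as multiplication by an elementary matrix $E_1$, whose inverse is the lower triangular factor.

```python
import numpy as np

# One elimination step on a 2x2 matrix, expressed as E1 @ A = U.
A = np.array([[2., 3.],
              [4., 7.]])
E1 = np.array([[ 1., 0.],        # adds -2 times row 1 to row 2
               [-2., 1.]])

U = E1 @ A                       # [[2, 3], [0, 1]] -- upper triangular
L = np.linalg.inv(E1)            # [[1, 0], [2, 1]] -- lower triangular

print(np.allclose(L @ U, A))     # True: A = E1^{-1} U = L U
```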
That big aggregate of inverses is $L$. A row swap, however, is given by a permutation matrix, and once one appears in the sequence, the aggregate of inverses of the operations performed is no longer lower triangular. Accordingly, when you permute rows, you refer instead to the PLU decomposition, where $P$ is a permutation matrix.
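To see this failure concretely (an illustration I am adding): for the question's matrix the only operation needed is the swap itself, and its inverse is a permutation matrix, not a lower triangular one.

```python
import numpy as np

# For the question's matrix, elimination consists of a single row swap P.
# The aggregate of inverses is then P^{-1}, which is not lower triangular,
# so the factorization obtained is A = P^{-1} U: a PLU decomposition.
A = np.array([[0., 1.],
              [2., 3.]])
P = np.array([[0., 1.],
              [1., 0.]])

U = P @ A                        # upper triangular
M = np.linalg.inv(P)             # the would-be "L" is P^{-1} = P

print(np.allclose(M, np.tril(M)))  # False: P^{-1} is not lower triangular
print(np.allclose(M @ U, A))       # True: A = P^{-1} U
```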
For some context, in robust numerical implementations of Gaussian elimination, row interchanges are performed to make the pivot as large as possible within the available entries in the column in question, rather than only doing so when a pivot is zero. This technique, called partial pivoting, helps to reduce a phenomenon called swamping. It is called partial pivoting because this method permutes only rows rather than both rows and columns.
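A sketch of partial pivoting along these lines (the function name and structure are mine; this is an illustrative implementation, not a production routine): at each step the row holding the largest remaining entry in the current column is swapped up, even when the current pivot is nonzero.

```python
import numpy as np

# LU factorization with partial pivoting: returns P, L, U with P @ A = L @ U.
# At step k, the row with the largest |entry| in column k (among rows k..n-1)
# is swapped up, keeping every multiplier at most 1 in absolute value.
def lu_partial_pivot(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row of largest candidate pivot
        if p != k:                            # swap rows in A, P, and in L's
            A[[k, p], k:] = A[[p, k], k:]     # already-computed columns
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]       # |multiplier| <= 1 by choice of p
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, A

# The tiny-pivot matrix is handled gracefully: the swap avoids the 1e20
# multiplier that swamps the factorization without pivoting.
A = np.array([[1e-20, 1.],
              [1.,    1.]])
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))  # True
```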