Stuck with LDU-factorization of a matrix where D should contain zeros


I thought that the L-D-U factorization of a square matrix (L = lower triangular factor, D = diagonal factor, U = upper triangular factor) was always possible and meaningful, even when zeros occur in the diagonal factor D. But in such cases my algorithm is not correct: the resulting factors do not reproduce the source matrix.
Let $$ M = \begin{bmatrix} 1& 2& 3& 4& 5\\ 2& 4& 6& 8& 0\\ 3& 6& 9& 2& 5\\ 4& 8& 2& 6& 0\\ 5& 0& 5& 0& 5 \end{bmatrix}$$ which is just the top-left of the basic $10 \times 10$ multiplication-table (modulo $10$). It has full rank, but the naive LDU-algorithm assigns zeros to some entries of D because zeros occur in the leading (pivot) element of the intermediate matrices of the iteration. If I do that, I get LDU-components $$\small L=\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 \\ 3 & 0 & 1 & 0 & 0 \\ 4 & 0 & 0 & 1 & 0 \\ 5 & 0 & 0 & 2 & 1 \\ \end{bmatrix} D= \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -10 & 0 \\ 0 & 0 & 0 & 0 & 20 \end{bmatrix} U=\begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix}$$ and if I put them together they do not reproduce the source M: $$ \text{chk}=L \cdot D \cdot U = \small \begin{bmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 4 & 6 & 8 & 10 \\ 3 & 6 & 9 & 12 & 15 \\ 4 & 8 & 12 & 6 & 0 \\ 5 & 10 & 15 & 0 & 5 \end{bmatrix}$$ The problem is not that the rank of M is insufficient: Pari/GP computes the inverse and even the diagonalization.
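As a quick check, the factors above can be multiplied out in NumPy; a minimal sketch:

```python
import numpy as np

# The source matrix: top-left 5x5 of the multiplication table, mod 10
n = np.arange(1, 6)
M = (np.outer(n, n) % 10).astype(float)

# The factors produced by the naive algorithm (zeros kept in D)
L = np.array([[1, 0, 0, 0, 0],
              [2, 1, 0, 0, 0],
              [3, 0, 1, 0, 0],
              [4, 0, 0, 1, 0],
              [5, 0, 0, 2, 1]], dtype=float)
D = np.diag([1.0, 0, 0, -10, 20])
U = np.array([[1, 2, 3, 4, 5],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 2],
              [0, 0, 0, 0, 1]], dtype=float)

chk = L @ D @ U
print(np.array_equal(chk, M))   # False: rows 2 and 3 are not reproduced
print(M - chk)                  # the residual discarded by the zero pivots
```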

Q: Can this be repaired? Can a meaningful L-D-U-decomposition be given?

Of course, if there is a general argument for why and when invertible/diagonalizable matrices cannot be LU- or LDU-decomposed, I'd like to learn that, too.

3 Answers

BEST ANSWER

If you look at my comments, you can see a permuted solution and the details of an $A = LDU$ factorization for your matrix.

This nice write-up gives the necessary and sufficient conditions for existence of the LU factorization of an arbitrary matrix, and it answers your question.

Lastly, here is a nice Corollary in the Matrix Analysis book by Horn and Johnson. Worth perusing all of Section $3.5$ in addition to the corollary on the $LDU$ factorization.
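As a quick illustration of the condition discussed there (for an invertible matrix, an LU, and hence LDU, factorization without pivoting exists exactly when all leading principal minors are nonzero), one can check the minors of the questioner's matrix, e.g. in NumPy:

```python
import numpy as np

n = np.arange(1, 6)
M = (np.outer(n, n) % 10).astype(float)   # the matrix from the question

# Leading principal minors det(M[:k, :k]) for k = 1..5
minors = [np.linalg.det(M[:k, :k]) for k in range(1, 6)]
print([round(m, 6) for m in minors])
# the 2nd, 3rd and 4th minors vanish, so M has no LU/LDU factorization
# without pivoting, even though det(M) itself is nonzero
```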

ANSWER

You have to do this with pivoting. In the second step of the LU or LDU decomposition, your matrix becomes $$\begin{bmatrix} 1& 2& 3& 4& 5\\ 0& 0& 0& 0& -10\\ 0& 0& 0& -10& -10\\ 0& 0& -10& -10& -20\\ 0& -10& -10& -20& -20 \end{bmatrix}$$

You will then need to multiply it by a permutation matrix $$P=\begin{bmatrix} 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 1\\ 0& 0& 0& 1& 0\\ 0& 0& 1& 0& 0\\ 0& 1& 0& 0& 0 \end{bmatrix}$$ Then your lower triangular matrix $L_P$ becomes $PLP^{-1}$, and the $LDU$ decomposition formula is $PA=L_PDU$. Refer to this article about LU with pivoting.

Your $L$ looks almost correct, except that the $2$ in the 5th row, 4th column should not be there. The $U$ should be $$\begin{bmatrix} 1& 2& 3& 4& 5\\ 0& 1& 1& 2& 2\\ 0& 0& 1& 1& 2\\ 0& 0& 0& 1& 1\\ 0& 0& 0& 0& 1 \end{bmatrix}$$ and $$D=\begin{bmatrix} 1& 0& 0& 0& 0\\ 0& -10& 0& 0& 0\\ 0& 0& -10& 0& 0\\ 0& 0& 0& -10& 0\\ 0& 0& 0& 0& -10 \end{bmatrix}$$
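For comparison, the pivoted factorization can be sketched with SciPy (assuming `scipy.linalg.lu` is available). Note that SciPy uses partial (row) pivoting, so its permutation, and hence its $D$, need not match the hand computation above:

```python
import numpy as np
from scipy.linalg import lu

n = np.arange(1, 6)
M = (np.outer(n, n) % 10).astype(float)

p, l, u = lu(M)            # M = p @ l @ u, with partial (row) pivoting
d = np.diag(u).copy()      # the pivots become the diagonal of D
D = np.diag(d)
U1 = u / d[:, None]        # scale each row of u so its diagonal is 1

# p.T @ M = L D U: a valid LDU factorization of the row-permuted matrix
print(np.allclose(p.T @ M, l @ D @ U1))   # True
```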

ANSWER

What I got by some manual pivoting is the following half-baked result: $$ L = U^t = \begin{bmatrix} 1& 0& 0& 0& 0\\ 2& 1& 0& 0& \frac12\\ 3& 1& 1& 1& \frac12\\ 4& 2& 0& 1& 1\\ 5& 0& 0& 0& 1 \end{bmatrix} $$

$$ D = \operatorname{diag}(1,\; 5,\; 10,\; -10,\; -20) $$ and indeed $$ M = L \cdot D \cdot U. $$ However, $L$ and thus $U=L^t$ are not triangular, so this is not a true LDU-decomposition.
Of course I could permute $L$ and $D$, but then I would have to permute not only the columns but also the rows of $L$ and $D$. Denoting the permutations by $P$ and $Q$, I find $$ P^{-1}\cdot M \cdot P = (P^{-1} \cdot L \cdot Q ) \cdot ( Q^{-1}\cdot D \cdot Q ) \cdot (Q^{-1} \cdot U \cdot P ) \\ \text{ or } \\ M_p = L_p \cdot D_p \cdot U_p = L_p \cdot D_p \cdot L_p^t $$ where the parenthesized expressions are triangular (and incidentally $Q=P$). But this means, on the other hand, that it is not really $M$ which I am decomposing.

update
Here is the triangular solution for $M_p$, obtained by the permutations:
$$ L_p = \begin{bmatrix} 1& 0& 0& 0& 0\\ 5& 1& 0& 0& 0\\ 2& \frac12& 1& 0& 0\\ 4& 1& 2& 1& 0\\ 3& \frac12& 1& 1& 1 \end{bmatrix} \qquad D_p = \operatorname{diag}(1,\; -20,\; 5,\; -10,\; 10) $$ giving the permuted $M_p$ $$ M_p = L_p \cdot D_p \cdot L_p^t = \begin{bmatrix} 1& 5& 2& 4& 3\\ 5& 5& 0& 0& 5\\ 2& 0& 4& 8& 6\\ 4& 0& 8& 6& 2\\ 3& 5& 6& 2& 9 \end{bmatrix}$$ with the permutation $P$, built from the manually selected pivot elements, being $$ P = Q = \begin{bmatrix} 1& 0& 0& 0& 0\\ 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 1\\ 0& 0& 0& 1& 0\\ 0& 1& 0& 0& 0 \end{bmatrix} $$
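This can be verified numerically; a small NumPy check of the factors above:

```python
import numpy as np

n = np.arange(1, 6)
M = (np.outer(n, n) % 10).astype(float)

# permutation built from the manually chosen pivots 1, 5, 2, 4, 3
P = np.zeros((5, 5))
for row, col in [(0, 0), (4, 1), (1, 2), (3, 3), (2, 4)]:
    P[row, col] = 1.0

Lp = np.array([[1, 0,   0, 0, 0],
               [5, 1,   0, 0, 0],
               [2, 0.5, 1, 0, 0],
               [4, 1,   2, 1, 0],
               [3, 0.5, 1, 1, 1]])
Dp = np.diag([1.0, -20, 5, -10, 10])

Mp = P.T @ M @ P                          # similarity-permuted M
print(np.allclose(Lp @ Dp @ Lp.T, Mp))    # True: Mp = Lp Dp Lp^t
```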


Now question 2 from the OP remains: is this an example where we would say "M is not LDU-decomposable" (like we say that some matrices are "not diagonalizable" and admit only a Jordan decomposition)? Are there (obvious) criteria for this non-decomposability (before actually trying it)?

After a short look it seems the final answer (also to question 2) is in the article that @variable linked to.

update 2: Oops, the same problem of course occurs in the Cholesky decomposition of M. The cause can be spotted beforehand by observing that some eigenvalues of M are negative. A valid Cholesky and LDU decomposition can then simply be found for the matrix $M^2$, which has positive eigenvalues. Hmm, from all this I'll now have to improve my program code for the LDU as well as for the Cholesky decomposition. Oh sigh...
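A small NumPy sketch of that observation: Cholesky fails on the indefinite $M$, but succeeds on $M^2 = M^t M$, which is positive definite because $M$ is symmetric and invertible:

```python
import numpy as np

n = np.arange(1, 6)
M = (np.outer(n, n) % 10).astype(float)

# M is symmetric but indefinite, so Cholesky must fail:
try:
    np.linalg.cholesky(M)
except np.linalg.LinAlgError as e:
    print("cholesky(M) failed:", e)

# M^2 is positive definite, so its Cholesky factor exists:
C = np.linalg.cholesky(M @ M)
print(np.allclose(C @ C.T, M @ M))   # True
```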


Appendix: code for the example computation

// remark: script is for MatMate - language (proprietary)
//                                 Author: Gottfried Helms
macrodef onecolumn
    dx = value( R [piv,piv] )    // piv,piv must point to a nonzero element 
                                 // in the current residual matrix R here! 
                                 // to get the scalar value for the elem. in D
   D[piv,piv] = dx               // fill that value into matrix D
   L[*,piv] = R [*,piv ] /dx     // fill columnvector into matrix L 
   U[piv,*] = R [piv,* ] /dx     // fill rowvector into matrix U
   chk = L * D * U               // check partial reproduction of M
   R   = M - chk                 // make the residual the new version of R
macroend 


// create matrix M with size 5x5
dim=5
M=seq(1,dim)'      // define columnvector 1,2,3,4,5
   M = M *M '      // make M a matrix being the "multiplication table" 
   M = M  mod 10   // make it modulo 10
   M = 1.0 * M     // make it a real instead of an integer matrix
                   // better to work with real matrices

// Start example computation
// initialize ...
D = null(dim,dim)  // empty LDU-factors
L,U = D,D
R = M         // initialize the residual matrix R for the iterations with M

// ... compute LDU-factors by 5 iterations
piv=1 // piv points to a nonzero diagonal element in the current residual R
macroexec onecolumn 

piv=5 // I could also have taken piv=4 here
macroexec onecolumn 

piv=2
macroexec onecolumn 

piv=4
macroexec onecolumn 

piv=3
macroexec onecolumn 

// factors L,D,U now exist, but they are not triangular
// define a permutation matrix P from the above "piv"-settings
P = null(dim,dim)
P[1,1],P[5,2],P[2,3],P[4,4],P[3,5]  = 1, 1, 1, 1, 1

// permute matrices, applying "similarity permutations", because P' = P^-1
Lp = P' * L * P   // this gives a lower triangular version of L
Dp = P' * D * P   // Dp is still diagonal, only the diagonal entries are permuted
Up = P' * U * P   // Up is also simply the transpose of Lp
Mp = P' * M * P   // actually, only Mp becomes truly LDU-factored!
chk = Mp - Lp * Dp * Up   // check that difference is indeed zero
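For readers without MatMate, here is a direct NumPy translation of the script above (0-based indices; same pivot order $1,5,2,4,3$):

```python
import numpy as np

dim = 5
n = np.arange(1, dim + 1)
M = (np.outer(n, n) % 10).astype(float)   # multiplication table, mod 10

L = np.zeros((dim, dim))
D = np.zeros((dim, dim))
U = np.zeros((dim, dim))
R = M.copy()                              # residual matrix

def one_column(piv):
    """Peel one rank-1 component off the residual R.
    R[piv, piv] must be nonzero."""
    global R
    dx = R[piv, piv]
    D[piv, piv] = dx                      # scalar value for D
    L[:, piv] = R[:, piv] / dx            # column vector into L
    U[piv, :] = R[piv, :] / dx            # row vector into U
    R = M - L @ D @ U                     # new residual

for piv in [0, 4, 1, 3, 2]:               # MatMate pivots 1,5,2,4,3
    one_column(piv)

print(np.allclose(L @ D @ U, M))          # True: factors reproduce M

# similarity permutation built from the pivot order
P = np.zeros((dim, dim))
for row, col in [(0, 0), (4, 1), (1, 2), (3, 3), (2, 4)]:
    P[row, col] = 1.0
Lp, Dp, Up = P.T @ L @ P, P.T @ D @ P, P.T @ U @ P
print(np.allclose(Lp, np.tril(Lp)))       # True: permuted L is lower triangular
```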