Factorization of a symmetric positive definite matrix


I have the following question: Let $A$ be a symmetric positive definite real matrix. We are asked to show that there exist two matrices $C$ and $D$, where $C$ is lower triangular with $1$s on the diagonal and $D$ is diagonal, such that:

$$ A=CDC^T $$

The existence of this decomposition should follow from the Cholesky decomposition $A = LL^T$: writing $L = CS$, where $S$ is the diagonal part of $L$ and $C = LS^{-1}$ is lower triangular with $1$s on the diagonal, gives $A = CDC^T$ with $D = S^2$.
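As a quick numerical sanity check (a sketch assuming NumPy, with a made-up $3\times 3$ matrix): take the Cholesky factor $L$ with $A = LL^\top$, let $S$ be its diagonal part; then $C = LS^{-1}$ has unit diagonal and $D = S^2$ satisfies $A = CDC^\top$:

```python
import numpy as np

# Hypothetical example: a symmetric positive definite matrix.
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])

L = np.linalg.cholesky(A)   # A = L @ L.T, L lower triangular
s = np.diag(L)              # diagonal entries of L (all positive)
C = L / s                   # divide column j by s[j]: unit diagonal
D = np.diag(s**2)           # D = S^2

assert np.allclose(A, C @ D @ C.T)
assert np.allclose(np.diag(C), 1.0)
```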

However, we are then asked whether this decomposition is unique. It would seem so, but how does one actually prove it?

There are 2 best solutions below.

---

Basically, I view this as applying certain row and column operations to $A$ to get $D$.

The row and column operations that are allowed are

  1. Add a multiple of row $i$ to row $j$.
  2. Then, add a multiple of column $i$ to column $j$.

Here $i$ and $j$ are the same indices in 1 and 2: we must perform the same operation on the row and on the column. Each such pair of operations amounts to replacing $A$ by $EAE^T$ for some elementary matrix $E$, and composing all of the elementary matrices (and inverting the product) gives $C$.
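As a small illustration (using NumPy and a made-up $2\times 2$ matrix), here is one such paired row/column operation written as $EAE^T$:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 5.0]])

# E adds (-A[1,0]/A[0,0]) times row 0 to row 1.
E = np.eye(2)
E[1, 0] = -A[1, 0] / A[0, 0]

# The symmetric operation: the same multiple on the row and the column.
B = E @ A @ E.T

# The (1,0) and (0,1) entries have been cleared, so B is diagonal here.
assert np.allclose(B, np.diag(np.diag(B)))
```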

Note that these are not arbitrary elementary row and column operations: we don't swap rows/columns or scale them by units, and those two operations would change the determinant of the matrix by a unit. In that sense, it's like a stricter version of Smith normal form, but that's fine: our matrix $A$ is symmetric, so things are easier.

I hope it's clear how you obtain your diagonal $D$. First, clear all the entries of the first row and column except the diagonal entry:

i.e. subtract $A_{j,1}/A_{1,1}$ times (row/column) $1$ from (row/column) $j$, for each $j > 1$. Then repeat on the trailing submatrix.
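The whole procedure can be sketched in a few lines (assuming NumPy; the function name `ldl_by_elimination` is mine):

```python
import numpy as np

def ldl_by_elimination(A):
    """Compute A = C @ D @ C.T by symmetric row/column elimination.

    A is assumed symmetric positive definite, so every pivot is nonzero.
    """
    A = np.array(A, dtype=float)       # work on a copy
    n = A.shape[0]
    C = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]      # multiplier that clears A[j, i]
            A[j, :] -= m * A[i, :]     # row operation
            A[:, j] -= m * A[:, i]     # matching column operation
            C[j, i] = m                # C records the multipliers
    return C, np.diag(np.diag(A))
```

Inverting each $E$ flips the sign of its off-diagonal entry, which is why $C$ simply collects the multipliers $m$ rather than $-m$.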

Why does this give uniqueness?

If I follow the algorithm, there is only one possible decomposition (none of the steps above involves any choice). But perhaps some other clever trick gives me another decomposition, and I want to check that they are the same.

So suppose $A=C_1 D_1 C_1^T=C_2 D_2 C_2^T$.

Observe that $(C_2^{-1}C_1) D_1 (C_2^{-1} C_1)^T=D_2$.

Now, multiplying an upper triangular matrix with $1$s on the diagonal by another upper triangular matrix with $1$s on the diagonal gives another matrix of the same form. (Such matrices form a subgroup of a Borel subgroup of $GL_n(F)$.)

The same goes for lower triangular matrices. So: can there be two distinct diagonal matrices $D_1, D_2$ such that $C D_1 C^T = D_2$, where $C$ is lower triangular with $1$s on the diagonal?

Here we have $C^{-1} D_2 = D_1 C^T$.

Since $C$ is lower triangular, $C^{-1}$ is lower triangular as well, and a diagonal matrix is also lower triangular; so the left-hand side is lower triangular. For the same reasons, the right-hand side is upper triangular.

Since the two sides are equal, the common matrix is both lower and upper triangular, hence diagonal. And if $C^{-1}D_2$ is diagonal, then $C^{-1} = (C^{-1}D_2)D_2^{-1}$ is a product of diagonal matrices, hence diagonal. (Here $D_2$ is invertible because $A$ is positive definite, and invertible diagonal matrices form a multiplicative group.)

But $C$ must have all $1$s on the diagonal, so it must be the identity. Applying this to $C = C_2^{-1}C_1$ gives $C_1 = C_2$, and then $D_1 = D_2$.

---

I will only show the uniqueness since you say the rest is clear. I will use the fact that $D$ is not only diagonal, but diagonal with nonzero entries. This is necessarily the case: if $D$ had a zero diagonal entry, then $A = CDC^\top$ would be singular, hence not positive definite.

Essentially, the uniqueness holds because every quantity computed during the factorization is forced. Formally, the uniqueness can be shown inductively, with the induction hypothesis that the $k\times k$ principal submatrices of $C$ and $D$ are uniquely determined. The $k=1$ case just corresponds to the fact that we must have $D_{11} = A_{11}$ (check this!).

Let's do the inductive step. Suppose that the $k\times k$ principal submatrices of $C$ and $D$ are uniquely determined. Let $(\cdot)_j$ denote the $j\times j$ principal submatrix. Block partition $A_{k+1}$, $C_{k+1}$, and $D_{k+1}$ as

$$ A_{k+1} = \begin{bmatrix} A_k & a \\ a^\top & \alpha \end{bmatrix}, \quad C_{k+1} = \begin{bmatrix} C_k & 0 \\ c^\top &1\end{bmatrix}, \quad D_{k+1} = \begin{bmatrix} D_k & 0 \\ 0^\top & \delta \end{bmatrix}. $$

Because $C$ is lower triangular and $D$ is diagonal, the leading $(k+1)\times(k+1)$ block of $A = CDC^\top$ only involves the corresponding blocks of $C$ and $D$, so $A_{k+1} = C_{k+1} D_{k+1} C_{k+1}^\top$. Thus

$$ \begin{bmatrix} A_k & a \\ a^\top & \alpha \end{bmatrix} = \begin{bmatrix} C_k & 0 \\ c^\top &1\end{bmatrix} \begin{bmatrix} D_k & 0 \\ 0^\top & \delta \end{bmatrix} \begin{bmatrix} C_k^\top & c \\ 0^\top &1 \end{bmatrix} = \begin{bmatrix} C_kD_kC_k^\top & C_kD_kc \\ c^\top D_kC_k^\top & c^\top D_k c + \delta\end{bmatrix}. $$

By comparing (1,2) entries, we thus have the condition $C_kD_kc = a$. But $C_k$ and $D_k$ are invertible since $D_k$ is nonzero diagonal and $C_k$ is unit lower triangular. Thus $c$ is uniquely determined and equal to $c = D_k^{-1}C_k^{-1}a$. Then by comparing (2,2) entries, we have $c^\top D_kc + \delta = \alpha$ so $\delta$ is necessarily $\delta = \alpha - c^\top D_kc$. Thus $C_{k+1}$ and $D_{k+1}$ are uniquely determined, and the induction argument is complete.
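The inductive step doubles as an algorithm: $c$ and $\delta$ are forced at every stage, so computing them in order both produces the factorization and shows it is unique. A sketch assuming NumPy (the function name `ldl_inductive` is mine):

```python
import numpy as np

def ldl_inductive(A):
    """Build C and D column by column, exactly as in the induction.

    At step k the new row c^T of C solves C_k D_k c = a, and the new
    diagonal entry is delta = alpha - c^T D_k c.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.eye(n)
    d = np.zeros(n)
    d[0] = A[0, 0]                       # base case: D_11 = A_11
    for k in range(1, n):
        a = A[:k, k]                     # off-diagonal block of A_{k+1}
        alpha = A[k, k]
        Ck, Dk = C[:k, :k], np.diag(d[:k])
        c = np.linalg.solve(Ck @ Dk, a)  # c is forced: C_k D_k c = a
        C[k, :k] = c                     # row c^T of C_{k+1}
        d[k] = alpha - c @ Dk @ c        # delta is forced too
    return C, np.diag(d)
```

Since every assignment in the loop is the unique solution of the corresponding block equation, two runs on the same $A$ cannot diverge, which is the content of the induction.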