Cholesky Decomposition for positive semidefinite separation


Cholesky decomposition is a common way to test positive definiteness of a symmetric matrix $A$. If the algorithm "goes wrong" by trying to take the square root of a negative number, I know the matrix isn't positive definite.

How can I use this algorithm (any of the ways to represent it) to find a vector $x$ such that $x^TAx < 0$?



Best answer

Assume that the symmetric $A\in\mathbb{R}^{n\times n}$ can be partitioned as $$ A=\begin{bmatrix} \tilde{A} & a \\ a^T & \alpha \end{bmatrix}, \quad \tilde{A}\in\mathbb{R}^{(n-1)\times(n-1)}, $$ and that $\tilde{A}$ is SPD. Looking for the Cholesky factor $L$ of $A$ in the conformingly partitioned form $$ L=\begin{bmatrix}\tilde{L} & 0 \\ l^T & \lambda\end{bmatrix}, $$ the algorithm does not fail during the first $n-1$ steps: it delivers the Cholesky factorisation of the $\tilde{A}$ block ($\tilde{A}=\tilde{L}\tilde{L}^T$) together with the vector $l=\tilde{L}^{-1}a$, but it fails to compute the diagonal entry $\lambda$ because $\alpha-l^Tl=\alpha-a^T\tilde{A}^{-1}a<0$. However, you can still obtain the factorisation $$ A=\hat{L}D\hat{L}^T, \quad \hat{L}=\begin{bmatrix}\tilde{L}&0\\l^T&1\end{bmatrix}, \quad D=\begin{bmatrix}I_{n-1}&0\\0&\delta\end{bmatrix}, \quad \delta=\alpha-l^Tl<0, $$ where $I_{n-1}$ is the $(n-1)\times(n-1)$ identity matrix.

With this factorisation at hand (and $e_n=[0,\ldots,0,1]^T$ being the last column of $I_n$), you have $$ \delta=e_n^TDe_n=e_n^T\hat{L}^{-1}\hat{L}D\hat{L}^T\hat{L}^{-T}e_n=e_n^T\hat{L}^{-1}A\hat{L}^{-T}e_n=x^TAx<0, $$ where $x=\hat{L}^{-T}e_n$ is the last column of $\hat{L}^{-T}$ (equivalently, the transpose of the last row of $\hat{L}^{-1}$). Since $\hat{L}^{-1}$ is easily expressed from the partitioning of $\hat{L}$, this vector is given by $$ x=\begin{bmatrix} -\tilde{L}^{-T}l\\1 \end{bmatrix} = \begin{bmatrix} -\tilde{A}^{-1}a\\1 \end{bmatrix}, $$ so its computation requires only a backward solve with $\tilde{L}^T$.
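The recipe above can be sketched in plain Python (no numerical library needed for a toy matrix). The function name, return convention, and the $3\times 3$ example matrix are my own; the witness is exactly $x=(-\tilde{L}^{-T}l,\,1)$ from the derivation, assuming the breakdown happens at the last pivot:

```python
import math

def cholesky_witness(A):
    """Run plain Cholesky on symmetric A.  If the factorisation breaks
    down at the LAST pivot (the situation described above), return the
    witness x = (-L~^{-T} l, 1); otherwise return the factor L.
    (Function name and return convention are mine, for illustration.)"""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # pivot = A[j][j] minus the squares of the already-computed row of L
        pivot = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if pivot < 0.0:
            l = L[j][:j]                      # the computed row l^T
            z = [0.0] * j                     # backward solve L~^T z = l
            for i in reversed(range(j)):
                z[i] = (l[i] - sum(L[k][i] * z[k] for k in range(i + 1, j))) / L[i][i]
            return None, [-zi for zi in z] + [1.0]
        L[j][j] = math.sqrt(pivot)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L, None

# hypothetical example: the leading 2x2 block is SPD, the last pivot is -1
A = [[4.0, 2.0, 2.0],
     [2.0, 2.0, 1.0],
     [2.0, 1.0, 0.0]]
L, x = cholesky_witness(A)
quad = sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3))
```

Here `quad` equals the failed pivot $\delta=-1$, confirming $x^TAx<0$.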

In the general case, when the Cholesky factorisation fails "in the middle" of the matrix, consider a partitioning of $A$ of the form $$ A=\begin{bmatrix}A_{11} & A_{12} \\ A_{12}^T & A_{22}\end{bmatrix} $$ such that the factorisation fails in the last step of the factorisation of $A_{11}$. Using the same approach as above, you can find $x$ such that $x^TA_{11}x<0$ and hence, with $y=[x^T,0]^T$, $y^TAy=x^TA_{11}x<0$.
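The general case is a small extension of the same idea: stop at the first negative pivot, build the witness for the leading block $A_{11}$, and pad it with zeros. A self-contained sketch (function name and the $4\times 4$ example are mine) under the assumption that all pivots before the negative one are positive:

```python
import math

def indefiniteness_witness(A):
    """Run Cholesky on symmetric A until the first negative pivot, at
    step j; the witness x for the leading (j+1)x(j+1) block A_11 is
    then padded with zeros, giving y with y^T A y = x^T A_11 x < 0.
    Returns None if no negative pivot appears.  (Name is mine.)"""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        pivot = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if pivot < 0.0:
            # L[j][:j] holds l^T; backward solve L~^T z = l, x = (-z, 1)
            z = [0.0] * j
            for i in reversed(range(j)):
                z[i] = (L[j][i] - sum(L[k][i] * z[k] for k in range(i + 1, j))) / L[i][i]
            return [-zi for zi in z] + [1.0] + [0.0] * (n - j - 1)
        L[j][j] = math.sqrt(pivot)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return None

# hypothetical 4x4 example: the factorisation fails at step 3 of 4
A = [[4.0, 2.0, 2.0, 0.0],
     [2.0, 2.0, 1.0, 0.0],
     [2.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 5.0]]
y = indefiniteness_witness(A)
quad = sum(y[i] * A[i][j] * y[j] for i in range(4) for j in range(4))
```

The trailing zero in `y` realises the padding $y=[x^T,0]^T$; only the leading block enters $y^TAy$.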

Second answer

Why Cholesky? Use Gauss's method (decomposition into squares) to obtain an invertible matrix $P$ such that $P^TAP=\operatorname{diag}(a_i)$ with each $a_i\in\{-1,0,1\}$. If $a_k=-1$, then take $x=P_k$, the $k$-th column of $P$: indeed $x^TAx=e_k^TP^TAPe_k=a_k=-1<0$.
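This answer's congruence diagonalisation can be sketched as follows. Two simplifying assumptions of mine: every diagonal pivot met is nonzero (the full method also handles zero pivots by a change of variables), and the diagonal is not rescaled to $\pm 1$, since only the sign of each entry matters for the witness:

```python
def gauss_congruence(A):
    """Sketch of the Gauss "decomposition into squares": build an
    invertible P with P^T A P diagonal by matching column and row
    eliminations.  Assumes every pivot A[k][k] met is nonzero; the
    diagonal is left unnormalised (only its signs matter here)."""
    n = len(A)
    A = [row[:] for row in A]                 # work on a copy
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for j in range(k + 1, n):
            m = A[k][j] / A[k][k]
            for i in range(n):                # column op: C_j -= m * C_k
                A[i][j] -= m * A[i][k]
                P[i][j] -= m * P[i][k]        # accumulate the op into P
            for i in range(n):                # matching row op on A
                A[j][i] -= m * A[k][i]
    return P, [A[i][i] for i in range(n)]

A = [[4.0, 2.0],
     [2.0, 0.0]]
P, diag = gauss_congruence(A)
k = next(i for i, d in enumerate(diag) if d < 0)   # a negative diagonal entry
x = [P[i][k] for i in range(2)]                    # k-th column of P
quad = sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
```

By construction $x^TAx$ equals the $k$-th diagonal entry of $P^TAP$, which is negative.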

EDIT: Pavel, Fabricio, you are wrong for $4$ reasons.

a) The $LDL^T$ method is a recursive process (at each step one variable is eliminated), and one can stop as soon as a negative coefficient appears: $x^TAx=u_1{f_1}^2+\cdots+u_{i-1}{f_{i-1}}^2-v{f_i}^2+\phi(x_{i+1},\cdots,x_n)$, where $u_1,\ldots,u_{i-1}>0$ and $v>0$. You obtain a solution $x$ by solving $\{f_1(x)=\cdots=f_{i-1}(x)=0,\;f_i(x)=1,\;x_{i+1}=\dots=x_n=0\}$.

b) The previous method works even if $A$ is non-invertible. On the contrary, with the Cholesky method, if you obtain a zero coefficient before obtaining a negative coefficient, then you are stuck!

c) The complexities of the Cholesky and $LDL^T$ methods are similar; cf. http://arxiv.org/abs/1111.4144

d) With the $LDL^T$ method, you do not need to extract square roots.
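Point (a) can be sketched as follows: run an $LDL^T$ factorisation, stop at the first negative $d_i$, and back-solve $L^Tx=e_i$, so that $x^TAx=d_i<0$; as point (d) notes, no square root is ever taken. The function name and the example matrix are mine, and the sketch assumes no zero pivot occurs before the negative one (point (b)'s caveat):

```python
def ldlt_witness(A):
    """Run an LDL^T factorisation of symmetric A and stop at the first
    negative diagonal entry d_j; solving L^T x = e_j then gives
    x^T A x = d_j < 0, without extracting any square roots.
    (Assumes no zero pivot occurs before the negative one.)"""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        if d[j] < 0.0:
            x = [0.0] * n
            x[j] = 1.0
            for i in reversed(range(j)):      # back-substitute L^T x = e_j
                x[i] = -sum(L[k][i] * x[k] for k in range(i + 1, j + 1))
            return x
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return None                               # all d_j >= 0: no witness found

A = [[4.0, 2.0, 2.0],
     [2.0, 2.0, 1.0],
     [2.0, 1.0, 0.0]]
x = ldlt_witness(A)
quad = sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3))
```

On this example the factorisation stops at $d_3=-1$, and `quad` equals that value.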