Definiteness of symmetric block matrix


Let $A$ and $D$ be symmetric positive definite matrices and consider the symmetric block matrix

$$ M := \begin{pmatrix} A & \alpha B \\ \alpha B^\top & D \end{pmatrix} $$

where $\alpha \in \mathbb{R}$ is a scalar parameter. Is it possible to say something about the positive definiteness of $M$ as a function of $\alpha$?

Since (i) the eigenvalues depend continuously on the matrix entries, and (ii) for $\alpha = 0$ the matrix $M$ is block diagonal, hence positive definite with eigenvalues equal to those of $A$ and $D$ together, it follows that $M$ remains positive definite whenever $\alpha$ is "small enough". It therefore seems possible to relate the positive definiteness of $M$ to conditions on the smallest eigenvalues $\lambda_\min$ of $A$ and $D$ and some measure of $\alpha B$ (perhaps a norm). Does someone have an idea?

I was thinking of using the Schur complement and looking at the matrix

$$A - \alpha^2 B D^{-1} B^\top$$

but I have difficulties showing the positive definiteness of that matrix as a function of $\alpha$. Again, it is clear that for $\alpha=0$ everything works out. Does someone have an idea?
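To make the $\alpha$-dependence of the Schur complement concrete, here is a minimal sketch with scalar ($1\times 1$) blocks; the values of `a`, `d`, `b` are hypothetical, chosen only for illustration:

```python
# Hypothetical scalar blocks: A = [a], D = [d], B = [b].
a, d, b = 4.0, 1.0, 1.0

def schur(alpha):
    """Schur complement A - alpha^2 * B * D^{-1} * B^T for 1x1 blocks."""
    return a - alpha ** 2 * b * b / d

assert schur(0.0) == a    # at alpha = 0 the complement is A itself
assert schur(1.0) > 0     # small alpha: positivity is preserved
assert schur(3.0) < 0     # large alpha: definiteness is lost
```

The complement degrades continuously in $\alpha^2$, which matches the intuition that positivity survives for small $\alpha$ and fails beyond some threshold.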


There are 3 answers below.

Best answer:

Let me write $M_{\alpha}$ for $M$ to stress the $\alpha$-dependence. Denote the smallest eigenvalue of $M_{\alpha}$ by $\lambda_{\alpha}$ and note that, since $M_{\alpha}$ is symmetric, $\lambda_{\alpha} = \inf\limits_{\|x\| = 1} x^\top M_{\alpha}x$. From this it is easy to see that $$|\lambda_{\alpha} - \lambda_0| \leq \|M_{\alpha}-M_0\| = |\alpha|\,\|B\|,$$ where $\|\cdot\|$ is the spectral norm: the difference $M_{\alpha}-M_0$ has only the off-diagonal blocks $\pm\alpha B$, and such a matrix has spectral norm $|\alpha|\,\|B\|$. Hence, if $|\alpha|\,\|B\| < \lambda_0 = \min\{\lambda_{\min}(A),\lambda_{\min}(D)\}$, then $\lambda_{\alpha} > 0$.
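As a numeric sanity check of this sufficient condition, here is a sketch with scalar ($1\times 1$) blocks, so that $M_\alpha$ is a $2\times 2$ symmetric matrix; the values `a`, `d`, `b` are hypothetical:

```python
import math

# Hypothetical scalar blocks: A = [a], D = [d], B = [b], so M_alpha is 2x2.
a, d, b = 4.0, 1.0, 1.0

def eig_min(alpha):
    """Smallest eigenvalue of M_alpha = [[a, alpha*b], [alpha*b, d]]."""
    tr, det = a + d, a * d - (alpha * b) ** 2
    return tr / 2 - math.sqrt((tr / 2) ** 2 - det)

lambda0 = min(a, d)          # smallest eigenvalue of M_0 (block diagonal)
bound = lambda0 / abs(b)     # sufficient condition: |alpha| * ||B|| < lambda0

alpha = 0.9 * bound
assert abs(eig_min(alpha) - lambda0) <= abs(alpha) * abs(b)  # perturbation bound
assert eig_min(alpha) > 0    # inside the bound: M_alpha stays positive definite
```

Note the bound is only sufficient: in this example $M_\alpha$ actually remains positive definite up to $|\alpha| = \sqrt{ad}/|b| = 2$, while the perturbation argument guarantees it only for $|\alpha| < 1$.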

Another answer:

Firstly, there exists an invertible matrix $P$ such that $$M=P^T\left( \begin{matrix} A& 0\\ 0& D-\alpha ^2B^TA^{-1}B\\ \end{matrix} \right) P.$$ Secondly, because $D$ is positive definite and $\alpha ^2B^TA^{-1}B$ is self-adjoint, there exists an invertible matrix $C$ such that $$C^{T}DC=I_s$$ and $$C^{T}(B^TA^{-1}B)C=\mathrm{diag}\left\{ \lambda _1,\cdots ,\lambda _s \right\},$$

where $\lambda _1,\cdots ,\lambda _s$ are all the eigenvalues of $D^{-1}(B^TA^{-1}B)$.

Then $C^{T}\left(D-\alpha^2 B^TA^{-1}B\right)C=\mathrm{diag}\left\{1-\alpha ^2\lambda_1,\cdots,1-\alpha ^2\lambda_s\right\}$, so $M$ is positive definite if and only if $1-\alpha ^2\lambda_i > 0$ for every $i$.


Some explanations:

If $T$ is $n\times n$ positive-definite and $U$ is $n\times n$ self-adjoint, then there exists an invertible matrix $C$ such that $$C^{T}TC=I_n$$ and $$C^{T}UC=\mathrm{diag}\left\{ \lambda _1,\cdots ,\lambda _n \right\},$$

where $\lambda _1,\cdots ,\lambda _n$ are all the eigenvalues of $T^{-1}U$.

Proof:

Because $T$ is positive definite, there exists an invertible matrix $P$ such that $P^{T}TP=I_n$. Because $P^{T}UP$ is self-adjoint, there exists an orthogonal matrix $Q$ such that $Q^{T}P^{T}UPQ=\mathrm{diag}\left\{ \lambda _1,\cdots ,\lambda _n \right\}$.

We can choose $C=PQ$.

From $C^{T}(\lambda T-U)C=\mathrm{diag}\left\{ \lambda-\lambda _1,\cdots ,\lambda-\lambda _n \right\}$ it follows that each $\lambda_i$ is a root of $\det(\lambda T-U)$. Because $T$ is invertible, $\det(\lambda T-U)=\det(T)\det(\lambda I-T^{-1}U)$, so each $\lambda_i$ is also an eigenvalue of $T^{-1}U$.


Back to the question, choose $T=D$, $U=B^TA^{-1}B$.
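A minimal numeric sketch of this criterion with scalar ($1\times 1$) blocks, where the generalized eigenvalue of $D^{-1}(B^TA^{-1}B)$ can be written down directly; the values `a`, `d`, `b` are hypothetical:

```python
# Hypothetical scalar blocks: A = [a], D = [d], B = [b].
a, d, b = 4.0, 1.0, 1.0

# With T = D and U = B^T A^{-1} B, the single eigenvalue of T^{-1} U is:
lam1 = (b * b / a) / d

def is_pd(alpha):
    """A 2x2 symmetric M = [[a, alpha*b], [alpha*b, d]] is positive
    definite iff its trace and determinant are both positive."""
    return a + d > 0 and a * d - (alpha * b) ** 2 > 0

# The criterion 1 - alpha^2 * lam_i > 0 agrees with the direct test.
for alpha in (0.0, 1.5, 2.5):
    assert (1 - alpha ** 2 * lam1 > 0) == is_pd(alpha)
```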

Another answer:

Since ${\bf D} \succ {\bf O}$, the matrix $\bf M$ is positive semidefinite if and only if the Schur complement with respect to $\bf D$ is positive semidefinite.

$$ {\bf A} - \alpha^2 {\bf B} \, {\bf D}^{-1} {\bf B}^\top \succeq {\bf O} $$

Since the matrix $\bf A$ is symmetric and positive definite, it has a (symmetric and invertible) square root ${\bf A}^\frac12$. The inequality above can be rewritten as follows

$$ {\bf I} - \alpha^2 {\bf A}^{-\frac12} {\bf B} \, {\bf D}^{-1} {\bf B}^\top {\bf A}^{-\frac12} \succeq {\bf O} $$

and, thus,

$$ \alpha^2 \leq \color{blue}{\frac{1}{\lambda_{\max} \left( {\bf A}^{-\frac12} {\bf B} \, {\bf D}^{-1} {\bf B}^\top {\bf A}^{-\frac12} \right)}} =: \alpha_{\max}^2$$

(assuming ${\bf B} \neq {\bf O}$; otherwise $\bf M$ is positive definite for every $\alpha$)

and $\alpha \in [-\alpha_{\max}, \alpha_{\max}]$. Using the other Schur complement, ${\bf D} - \alpha^2 {\bf B}^\top {\bf A}^{-1} {\bf B}$, yields the same bound, as

$$ \begin{aligned} \alpha^2 \leq \color{blue}{\frac{1}{\lambda_{\max} \left( {\bf D}^{-\frac12} {\bf B}^\top {\bf A}^{-1} {\bf B} \, {\bf D}^{-\frac12} \right)}} &= \frac{1}{\lambda_{\max} \left( \left( {\bf A}^{-\frac12} {\bf B} \, {\bf D}^{-\frac12} \right)^\top \left( {\bf A}^{-\frac12} {\bf B} \, {\bf D}^{-\frac12} \right) \right)} \\ &= \frac{1}{\lambda_{\max} \left( \left( {\bf A}^{-\frac12} {\bf B} \, {\bf D}^{-\frac12} \right) \left( {\bf A}^{-\frac12} {\bf B} \, {\bf D}^{-\frac12} \right)^\top \right)} \\ &= \frac{1}{\lambda_{\max} \left( {\bf A}^{-\frac12} {\bf B} \, {\bf D}^{-1} {\bf B}^\top {\bf A}^{-\frac12} \right)} = \alpha_{\max}^2 \end{aligned} $$
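The resulting interval can be checked numerically. Here is a sketch with hypothetical scalar ($1\times 1$) blocks, where $\lambda_{\max}$ reduces to $b^2/(ad)$ and hence $\alpha_{\max} = \sqrt{ad}/|b|$:

```python
import math

# Hypothetical scalar blocks: A = [a], D = [d], B = [b].
a, d, b = 4.0, 1.0, 1.0

# lambda_max(A^{-1/2} B D^{-1} B^T A^{-1/2}) = b^2 / (a d) in the scalar case.
alpha_max = math.sqrt(a * d) / abs(b)

def min_eig(alpha):
    """Smallest eigenvalue of M = [[a, alpha*b], [alpha*b, d]]."""
    tr, det = a + d, a * d - (alpha * b) ** 2
    return tr / 2 - math.sqrt((tr / 2) ** 2 - det)

assert min_eig(0.99 * alpha_max) > 0   # just inside the interval: positive definite
assert min_eig(1.01 * alpha_max) < 0   # just outside: indefinite
```

At $\alpha = \pm\alpha_{\max}$ the smallest eigenvalue is exactly zero, consistent with the non-strict inequality above: $\bf M$ is then positive semidefinite but not positive definite.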

