Is this type of matrix always positive semidefinite?


Let $b\in[0,1]^n$, with $n\in\mathbb{N}^*$. Consider the symmetric matrix $A=(a_{i,j})$, defined by $$ a_{i,j}=\begin{cases}\min\lbrace b_i,\ldots,b_j\rbrace &\mbox{if } i<j\\ \min \lbrace b_j,\ldots,b_i\rbrace &\mbox{if } j\leq i\end{cases} $$ Is $A$ always positive semidefinite?

For instance, the matrix generated by the vector $[0.1, 0.2, 0.3]$ is $$ \left(\begin{matrix} 0.1 & 0.1 & 0.1 \\ 0.1 & 0.2 & 0.2 \\ 0.1 & 0.2 & 0.3 \\ \end{matrix}\right) $$ while the one generated by the vector $[0.2, 0.1, 0.3]$ is $$ \left(\begin{matrix} 0.2 & 0.1 & 0.1 \\ 0.1 & 0.1 & 0.1 \\ 0.1 & 0.1 & 0.3 \\ \end{matrix}\right)\text{.}$$ So far, I have only managed to show that $A$ is positive semidefinite when $b$ is sorted (either in increasing or decreasing order). The proof is by induction:
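As a quick numerical sanity check, the construction can be reproduced and the eigenvalues of the two example matrices inspected. This is a minimal sketch assuming NumPy; `min_matrix` is just an illustrative helper name, not anything from the question:

```python
import numpy as np

def min_matrix(b):
    """Build A with a_{ij} = min(b_i, ..., b_j), the minimum over the index range."""
    b = np.asarray(b, dtype=float)
    n = len(b)
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            lo, hi = min(i, j), max(i, j)
            A[i, j] = b[lo:hi + 1].min()
    return A

for b in ([0.1, 0.2, 0.3], [0.2, 0.1, 0.3]):
    A = min_matrix(b)
    print(A)
    # both example matrices come out with nonnegative eigenvalues
    print("eigenvalues:", np.linalg.eigvalsh(A))
```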

The statement clearly holds if $n=1$.

Induction step: let $A(b)$ denote the matrix generated by the procedure described above from a vector $b$.

Let $n\in\mathbb{N}^*$. Assume that every matrix $M\in\lbrace A(b) : b\in [0,1]^{i},\ i\in\lbrace 1,\ldots,n\rbrace,\ b \mbox{ sorted}\rbrace$ is positive semidefinite.

Now consider a sorted vector $b$ with $n+1$ elements and apply Sylvester's criterion for semidefiniteness (all principal minors nonnegative): every proper principal submatrix of $A(b)$ is of the form $A(b')$ for some sorted $b'\in[0,1]^{i}$ with $i\in\lbrace 1,\ldots,n\rbrace$ (a subvector of a sorted vector is sorted), so the result holds.

I have also tried to generate counter-examples in the case where $b$ is not sorted, but without success.
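A brute-force random search along the same lines can be sketched as follows (assuming NumPy; the helper name, sizes, and trial count are my own choices). It also turns up no counter-example:

```python
import numpy as np

def min_matrix(b):
    """a_{ij} = min over the index range between i and j."""
    n = len(b)
    return np.array([[min(b[min(i, j):max(i, j) + 1]) for j in range(n)]
                     for i in range(n)])

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(10_000):
    b = rng.random(int(rng.integers(1, 9)))  # random unsorted b in [0,1]^n, n <= 8
    worst = min(worst, np.linalg.eigvalsh(min_matrix(b)).min())
print("smallest eigenvalue seen:", worst)  # stays nonnegative up to rounding
```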



Best answer:

Yes, it is positive semidefinite. This can be easily proved by mathematical induction on $n$.

To stress its size as well as its dependence on the parameters $b_i$, let us write $A(b_1,\ldots,b_n)$ for the matrix generated by $(b_1,\ldots,b_n)\in[0,1]^n$ as described in your question. In the inductive step, let $b_i=\min(b_1,\ldots,b_n)$. Then \begin{aligned} A(b_1,\ldots,b_n) &=\left[\begin{array}{c|c|c} A(b_1,\,\ldots,\,b_{i-1}) &\begin{matrix}b_i\\ \vdots\\ b_i\end{matrix} &\begin{matrix}b_i&\cdots&b_i\\ \vdots&&\vdots\\ b_i&\cdots&b_i\end{matrix}\\ \hline \begin{matrix}b_i&\cdots&b_i\end{matrix}&b_i&\begin{matrix}b_i&\cdots&b_i\end{matrix}\\ \hline \begin{matrix}b_i&\cdots&b_i\\ \vdots&&\vdots\\ b_i&\cdots&b_i\end{matrix} &\begin{matrix}b_i\\ \vdots\\ b_i\end{matrix} &A(b_{i+1},\,\ldots,\,b_n) \end{array}\right]\\ &=\left[\begin{array}{c|c|c} A(b_1-b_i,\,\ldots,b_{i-1}-b_i)&0&0\\ \hline 0&0&0\\ \hline 0&0&A(b_{i+1}-b_i,\,\ldots,\,b_n-b_i) \end{array}\right]+b_iE \end{aligned} where $E$ denotes the all-ones matrix. Since $b_k-b_i\in[0,1]$ for every $k$, the two diagonal blocks are positive semidefinite by the induction hypothesis, and $b_iE$ is positive semidefinite, so $A(b_1,\ldots,b_n)$ is positive semidefinite.
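The block identity above can be checked numerically. The following sketch (assuming NumPy; `min_matrix` and `split_at_min` are illustrative names of my own) builds the right-hand side at an index achieving the minimum and compares it with $A(b)$ on random inputs:

```python
import numpy as np

def min_matrix(b):
    """a_{ij} = min over the index range between i and j."""
    n = len(b)
    return np.array([[min(b[min(i, j):max(i, j) + 1]) for j in range(n)]
                     for i in range(n)])

def split_at_min(b):
    """Right-hand side of the identity: block-diagonal part plus b_i * E,
    where i is an index achieving min(b_1, ..., b_n)."""
    b = np.asarray(b, dtype=float)
    n = len(b)
    i = int(np.argmin(b))
    D = np.zeros((n, n))
    if i > 0:
        D[:i, :i] = min_matrix(b[:i] - b[i])            # A(b_1-b_i, ..., b_{i-1}-b_i)
    if i + 1 < n:
        D[i + 1:, i + 1:] = min_matrix(b[i + 1:] - b[i])  # A(b_{i+1}-b_i, ..., b_n-b_i)
    return D + b[i] * np.ones((n, n))                    # + b_i * E

rng = np.random.default_rng(1)
for _ in range(1000):
    b = rng.random(int(rng.integers(1, 8)))
    assert np.allclose(split_at_min(b), min_matrix(b))
print("identity verified on 1000 random vectors")
```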

Another answer:

The matrix is indeed positive semi-definite. To see this, we exhibit an alternative formulation of this matrix as a sum of positive semi-definite matrices.

Let $b=(b_1,\ldots,b_n)\in [0,1]^n$ be the given vector. For simplicity, suppose all the components are distinct and positive; this is without loss of generality, since the entries of $A$ depend continuously on $b$ and the set of positive semi-definite matrices is closed, so we may perturb the components to make them distinct and positive and then pass to the limit, which preserves positive semi-definiteness. Let us also adopt the convention $b_0=b_{n+1}=0$. For each $i=1,\ldots,n$, let $l(i)\leq i\leq r(i)$ delimit the largest range of indices containing $i$ on which $b_i$ is minimal, i.e. $$b_i=\min\{b_{l(i)},b_{l(i)+1},\ldots,b_{r(i)}\}\quad\text{and}\quad b_{l(i)-1}<b_i,\quad b_{r(i)+1}<b_i.$$

Also, write $\tau(i)=\max\{b_{l(i)-1},b_{r(i)+1}\}$. Finally, for $1\leq i\leq j\leq n$, let $J_{i:j}$ denote the $n\times n$ symmetric matrix with entry $1$ in every position whose row and column indices both lie in the range $[i,j]$, and $0$ everywhere else.

Then you can check that \begin{equation} A=\sum_{i=1}^n (b_{i}-\tau(i))J_{l(i):r(i)}. \end{equation} Here is a sketch of why this is right. Suppose without loss of generality that $p\leq q$, and consider the $(p,q)$ entry of $A$. Let $k=\arg\min\{b_p,\ldots,b_q\}$, and think about which terms in the sum are nonzero in the $(p,q)$ entry: this happens exactly for those $i$ with $p,q\in [l(i),r(i)]$. It cannot happen for any $i$ with $b_k<b_i$, since then $[l(i),r(i)]$ cannot contain $k$, and hence cannot contain both $p$ and $q$. The term corresponding to $b_k$ does contribute, since $b_k$ is the minimal entry in the range $[p,q]$, and it contributes $b_k-\tau(k)$. The unique index $j$ whose value equals $\tau(k)$ also satisfies $p,q\in [l(j),r(j)]$, since its interval contains that of $k$ by construction, so we also pick up the term $b_j-\tau(j)$. We then consider the index corresponding to $\tau(j)$, which must be the other value considered in the construction of $\tau(j)$, namely $\min\{b_{l(j)-1},b_{r(j)+1}\}$. Continuing in this fashion, the sum telescopes to exactly $b_k$ (I leave the details to you; the way $\tau$ was constructed makes the telescoping work). This shows that $A$ really is the matrix defined above.

The positive semi-definiteness of $A$ is now immediate: each $J_{i:j}$ with $1\leq i\leq j\leq n$ is positive semi-definite, since it can be written as $\mathbf{1}_{i:j}\mathbf{1}^T_{i:j}$, where $\mathbf{1}_{i:j}$ is the vector with $1$s in the entries from $i$ to $j$ and $0$s elsewhere. These rank-one matrices are multiplied by nonnegative numbers and summed, so $A$ is positive semi-definite.
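The decomposition can be sketched in code as follows (assuming NumPy; `rank_one_sum` is an illustrative name of my own, and distinct positive components are assumed as above):

```python
import numpy as np

def rank_one_sum(b):
    """Sum over i of (b_i - tau(i)) * J_{l(i):r(i)}, for b with distinct
    positive components, using the convention b_0 = b_{n+1} = 0."""
    n = len(b)
    ext = [0.0] + list(b) + [0.0]          # ext[0] = b_0, ext[n+1] = b_{n+1}
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        l, r = i, i
        while ext[l - 1] > ext[i]:          # extend left while b_i stays minimal
            l -= 1
        while ext[r + 1] > ext[i]:          # extend right while b_i stays minimal
            r += 1
        tau = max(ext[l - 1], ext[r + 1])   # tau(i)
        v = np.zeros(n)
        v[l - 1:r] = 1.0                    # indicator vector of the range [l, r]
        A += (ext[i] - tau) * np.outer(v, v)  # (b_i - tau(i)) * J_{l(i):r(i)}
    return A

print(rank_one_sum([0.2, 0.1, 0.3]))  # reproduces the second example matrix
```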

Just to give an example, your first matrix is written as \begin{align} (0.1-0) J_{1:3}+(0.2-0.1) J_{2:3}+(0.3-0.2)J_{3:3}&=0.1\begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{pmatrix} +0.1\begin{pmatrix} 0 & 0 & 0\\ 0 & 1 & 1\\ 0 & 1 & 1 \end{pmatrix} +0.1\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix}\\ &=\begin{pmatrix} 0.1 & 0.1 & 0.1\\ 0.1 & 0.2 & 0.2\\ 0.1 & 0.2 & 0.3 \end{pmatrix}, \end{align} while your second example can be written as \begin{align} (0.2-0.1)J_{1:1}+(0.1-0)J_{1:3}+(0.3-0.1)J_{3:3}&=0.1\begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}+0.1\begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{pmatrix} +0.2\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix}\\ &=\begin{pmatrix} 0.2 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.3 \end{pmatrix}. \end{align}