Does a sparsified correlation matrix satisfy the Perron-Frobenius theorem?


Consider an N x N correlation matrix that has been sparsified to retain only the sqrt(N)*N largest elements.

It has the following characteristics:
- Sparse
- Non-negative (entries are real numbers in [0, 1])
- Symmetric
- All diagonal entries equal 1
- W elements are strongly connected; the remaining N - W elements are isolated (their rows and columns contain only zeros, apart from the diagonal entry).

An example matrix:

 1 1 0 0 0;  
 1 1 1 0 0;  
 0 1 1 1 0;  
 0 0 1 1 0;  
 0 0 0 0 1

Here W = 4 elements are connected. Note how element 5 has only zeros in its row and column, apart from the diagonal.
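A quick numerical check of this example (using NumPy; the eigendecomposition below reproduces the behaviour described in the question):

```python
import numpy as np

# The 5x5 example matrix from the question: elements 1-4 form a
# connected chain, element 5 is isolated (only its diagonal is 1).
M = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
], dtype=float)

# eigh returns eigenvalues in ascending order for symmetric matrices.
vals, vecs = np.linalg.eigh(M)
top = vecs[:, -1]                                 # eigenvector of the largest eigenvalue
top = top * np.sign(top[np.argmax(np.abs(top))])  # fix the overall sign

print(vals[-1])  # largest eigenvalue, about 2.618
print(top)       # strictly positive on elements 1-4, zero on element 5
```

The leading eigenvector is strictly positive on the connected block and (numerically) zero on the isolated element 5, matching the observation in the question.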

To give some context for this question: during eigendecomposition of the N x N matrix, the eigenvector associated with the largest eigenvalue has only positive values (and is sometimes almost constant).

I am trying to find a reason for this oddity.

Is the N x N matrix irreducible? Does it satisfy the hypotheses of the Perron-Frobenius theorem?

If not, what other theorem is likely to explain the oddities observed with the eigenvector?
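The irreducibility question can be checked mechanically: a symmetric matrix is irreducible exactly when the graph of its off-diagonal sparsity pattern is connected. A sketch using SciPy's `connected_components` (the example matrix and variable names are taken from the question):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# The 5x5 example matrix from the question.
M = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
], dtype=float)

# Treat the off-diagonal sparsity pattern as an undirected graph.
# M is irreducible iff this graph is connected (one component).
off_diag = M - np.diag(np.diag(M))
n_comp, labels = connected_components(csr_matrix(off_diag), directed=False)

print(n_comp)  # 2: the chain {1,2,3,4} plus the isolated element {5}
print(labels)
```

Since `n_comp > 1`, the full matrix is reducible; the answer below explains why Perron-Frobenius still applies to the connected block.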

Up to a permutation similarity, your matrix has the form $$ M = \pmatrix{A&0\\0&I_{N-W}} $$ where $A$ is the $W \times W$ submatrix corresponding to your strongly connected elements. Notably, $A$ is irreducible because it describes a strongly connected network, so the Perron-Frobenius theorem applies.

If $x$ (a column vector) is the Perron eigenvector of $A$ and $\lambda$ the corresponding eigenvalue, then we will also have $$ M \pmatrix{x\\0} = \lambda \pmatrix{x\\0} $$ where I have used $\tilde x = (x,0)$ to indicate that we have appended $0$s to the end of the vector $x$ to make it a vector in $\Bbb R^N$.

As long as the largest eigenvalue $\lambda$ of $A$ is greater than $1$, $\tilde x$ will be the unique eigenvector of $M$ (up to scaling) corresponding to $\lambda$, which is also the largest eigenvalue of $M$, since the other block $I_{N-W}$ contributes only eigenvalues equal to $1$. To see that this is the case, it suffices to note that the eigenvectors of a block matrix $\pmatrix{A&0\\0&B}$ necessarily have the form $(x,y)$ where $Ax = \mu x$ and $By = \mu y$ for some value $\mu$.
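This block structure can be verified numerically. Below, a symmetric non-negative block $A$ (hypothetical test data) is embedded as $M = \operatorname{diag}(A, I_{N-W})$, and the top eigenvector of $M$ is confirmed to be the zero-padded Perron vector of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A: a dense symmetric non-negative W x W block with unit diagonal
# (dense off-diagonal entries, hence irreducible). Hypothetical data.
W, N = 4, 7
A = rng.uniform(0.1, 1.0, (W, W))
A = (A + A.T) / 2
np.fill_diagonal(A, 1.0)

# M = diag(A, I_{N-W}): the permuted form described in the answer.
M = np.eye(N)
M[:W, :W] = A

lam_A, vec_A = np.linalg.eigh(A)
lam_M, _ = np.linalg.eigh(M)

# Pad the Perron eigenvector x of A with zeros to get x_tilde.
x = vec_A[:, -1]
x_tilde = np.concatenate([x, np.zeros(N - W)])

# Since lambda > 1 here, lambda is also the largest eigenvalue of M,
# and x_tilde is an eigenvector of M for it.
print(np.isclose(lam_M[-1], lam_A[-1]))
print(np.allclose(M @ x_tilde, lam_A[-1] * x_tilde))
```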

As for the eigenvector being nearly constant: note that a matrix with constant row-sums (e.g. a stochastic matrix) has the constant vector as an eigenvector. By the continuous dependence of eigenvalues and eigenvectors on matrix entries, if the row-sums of $A$ are close to being identical, then the Perron eigenvector of $A$ will be close to constant.