I have generated in Python some matrices with doubly block circulant structure (i.e. the blocks themselves are also circulant). These matrices are square and sparse, with coefficients drawn from a lognormal distribution. To generate them I use the function scipy.sparse.random together with sporco.linalg.block_circulant from the sporco library.
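For concreteness, here is a minimal sketch of the kind of construction I mean (the explicit Kronecker-product assembly below is an equivalent stand-in for the sporco call, and the `density` value and `data_rvs` lognormal sampler are just example choices):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, m = 4, 8   # n x n grid of blocks, each block an m x m circulant

# sparse lognormal coefficients: entry (j, k) weights the shift P_n^j kron P_m^k
B = sparse.random(n, m, density=0.3, random_state=rng,
                  data_rvs=lambda size: rng.lognormal(size=size)).toarray()

# cyclic shift matrix P_k (ones on the subdiagonal, one in the top-right corner)
def shift(k):
    return np.roll(np.eye(k), 1, axis=0)

# doubly block circulant matrix built as sum_{j,k} B[j,k] * P_n^j kron P_m^k
A = sum(B[j, k] * np.kron(np.linalg.matrix_power(shift(n), j),
                          np.linalg.matrix_power(shift(m), k))
        for j in range(n) for k in range(m))
```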
The thing is that I also want them to be symmetric and positive definite, and I am having trouble with the latter property. To make one of these matrices SPD I execute the following (matrix is a numpy array):
import numpy as np

# is_symmetric / is_positive_definite are helper predicates defined elsewhere
if not is_symmetric(matrix):
    matrix = (matrix + matrix.T) / 2
if not is_positive_definite(matrix):
    # shift the spectrum by |lambda_min| + 1 so all eigenvalues become positive
    eigenvalues = np.linalg.eigvals(matrix)
    min_eigenvalue = np.min(eigenvalues)
    matrix += (np.abs(min_eigenvalue) + 1) * np.eye(matrix.shape[0])
But in this way my sparse matrix becomes much denser: for example, if the matrix is 1024x1024 with 2000 nonzero entries, after making it SPD it has about 12000 nonzero entries.
Is there a way to target specifically the nonpositive eigenvalues, so as not to perturb the matrix too much? I tried SVD, but I lose the sign information. I also tried the matrix.diagonalize() method from the sympy library, but I run into numerical errors. Any other tips?
If I understood correctly, $A$ is a "doubly block circulant matrix" if it has the form $$ A = \pmatrix{ A_0 & A_{n-1} & \cdots & A_2 & A_1 \\ A_1 & A_0 & A_{n-1} & & A_2 \\ \vdots & A_1 & A_0 & \ddots & \vdots \\ A_{n-2} & & \ddots & \ddots & A_{n-1} \\ A_{n-1} & A_{n-2} & \cdots & A_1 & A_0 \\ }, $$ (i.e. it is block circulant) and each $A_0,A_1,\dots,A_{n-1}$ is itself an $m \times m$ circulant matrix. Notably, a $k \times k$ circulant matrix $C$ can be written in the form $C = f(P_k)$ where $f$ is a polynomial (whose coefficients form the first column of $C$) and $$ P_k = \pmatrix{ 0&0&\cdots&0&1\\ 1&0&\cdots&0&0\\ 0&\ddots&\ddots&\vdots&\vdots\\ \vdots&\ddots&\ddots&0&0\\ 0&\cdots&0&1&0}. $$ Suppose that we are given such a matrix $A$ and that $A$ is symmetric. With that, we know that $A_0$ is symmetric and $A_j = A_{n-j}^T$ for each $j = 1,\dots,n-1$. For each $j = 0,1,\dots,n-1$, let $f_j$ be the polynomial (of degree at most $m-1$) such that $f_j(P_m) = A_j$. We can write the matrix $A$ as $$ A = I \otimes A_0 + P_n \otimes A_1 + \cdots + P_n^{n-1} \otimes A_{n-1} = \sum_{j=0}^{n-1} P_n^j \otimes f_j(P_m), $$ where $\otimes$ denotes the Kronecker product. Write each polynomial $f_j$ as $f_j(x) = \sum_{k=0}^{m-1} a_{jk} x^k$. With that, we can express $A$ as $$ A = \sum_{j=0}^{n-1} \sum_{k=0}^{m-1} a_{jk}\, P_n^j \otimes P_m^k. $$ Importantly, the matrices $P_n^j \otimes P_m^k$ (over integers $j,k$) are simultaneously diagonalizable, with eigenvectors equal to the columns of $W_n \otimes W_m$, where $W_n$ denotes the DFT matrix of size $n$. Using this, we can deduce that the eigenvalues of $A$ are equal to $$ \lambda_{p,q} = \sum_{j=0}^{n-1}\sum_{k=0}^{m-1} a_{jk} \omega_n^{jp} \omega_m^{kq}, \quad p = 0,\dots,n-1, \quad q = 0,\dots,m-1, $$ where $\omega_\ell = \exp(2 \pi i/\ell) = \cos(2 \pi /\ell) + i \sin(2 \pi /\ell)$ (for $\ell = m,n$).
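As a quick numerical check of this eigenvalue formula (a sketch: the matrix is assembled explicitly from the coefficients $a_{jk}$, and `np.fft.fft2` is used because it computes exactly these exponential sums, up to conjugation):

```python
import numpy as np

def shift(k):
    # P_k: cyclic shift matrix (ones on the subdiagonal, one in the top-right)
    return np.roll(np.eye(k), 1, axis=0)

rng = np.random.default_rng(0)
n, m = 4, 6
B = rng.random((n, m))                      # B[j, k] = a_{jk}

# A = sum_{j,k} a_{jk} P_n^j kron P_m^k
A = sum(B[j, k] * np.kron(np.linalg.matrix_power(shift(n), j),
                          np.linalg.matrix_power(shift(m), k))
        for j in range(n) for k in range(m))

# fft2 computes sum a_{jk} exp(-2*pi*i*(jp/n + kq/m)); conjugating gives the
# lambda_{p,q} above (the multiset of eigenvalues is the same either way)
lam_formula = np.fft.fft2(B).conj().ravel()
lam_direct = np.linalg.eigvals(A)
```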
Let $B$ denote the $n \times m$ matrix $B = [a_{j,k}]_{j,k = 0}^{n-1,m-1}$. Then the eigenvalues of $A$ are simply the entries of the matrix $\sqrt{nm}\,W_n B W_m$. Note that the matrix $B$ can be calculated directly from $A$: the $j$th row of $B$ is equal to the first column of $A_j$; equivalently, $B$ is the first column of $A$ reshaped into an $n \times m$ array.
With that, you can simply calculate all of the eigenvalues of $A$ directly with minimal computational overhead: at most $O(\max\{m,n\}^3)$ operations for the matrix products, or $O(nm \log(nm))$ using a 2D FFT. In order to check whether $A$ is positive definite, find the minimum among these eigenvalues. In order to make $A$ positive definite, you can do what you have been doing: add $(1 + |\lambda_\min(A)|)I$ to $A$. Note that this shift only modifies the diagonal, so it adds at most $nm$ nonzero entries and leaves the off-diagonal sparsity pattern intact.
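Putting this together, a sketch of the whole procedure (the function name and `margin` parameter are mine; it assumes `A` is already a symmetric doubly block circulant array with $n \times n$ blocks of size $m \times m$):

```python
import numpy as np

def make_spd(A, n, m, margin=1.0):
    """Shift a symmetric doubly block circulant matrix to make it PD."""
    # B[j, k] = a_{jk}: the first column of A, reshaped (row j of B is the
    # first column of the block A_j)
    B = np.asarray(A)[:, 0].reshape(n, m)
    # the eigenvalues of A are the entries of the 2D DFT of B; for a
    # symmetric A they are real, so we keep only the real part
    lam = np.fft.fft2(B).real
    lam_min = lam.min()
    if lam_min <= 0:
        # shifting by a multiple of I only touches the diagonal, so the
        # off-diagonal sparsity pattern is preserved
        A = A + (abs(lam_min) + margin) * np.eye(n * m)
    return A
```

This avoids the $O((nm)^3)$ cost of `np.linalg.eigvals` on the full matrix, since only the $n \times m$ coefficient array is transformed.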
Potential further optimization: so far, the computation makes no use of the fact that $A$ is symmetric. Notably, we can state the following:
$A_j^T = A_{n-j}$ (for $j = 1,\dots,n-1$) implies that $$ a_{n-j,0} = a_{j,0}, \qquad a_{n-j,k} = a_{j,m-k}, \quad k = 1,\dots,m-1. $$
$A_0^T = A_0$ implies that $$ a_{0,k} = a_{0,m-k} \quad k = 1,\dots,m-1. $$
For convenience, denote $a_{n,k} = a_{0,k}$ and $a_{j,m} = a_{j,0}$, so that these identities condense to the following for all $j$: $$ a_{n-j,k} = a_{j,m-k}, \quad k = 0,1,\dots,m-1. $$ If you prefer, we can more generally redefine our indices so that they "work cyclically" modulo $n$ and $m$ (respectively).
We can write \begin{align} 2\lambda_{p,q} &= \lambda_{p,q} + \lambda_{p,q} = \left(\sum_{j=0}^{n-1}\sum_{k=0}^{m-1} a_{jk} \omega_n^{jp} \omega_m^{kq}\right) + \left(\sum_{j=0}^{n-1}\sum_{k=0}^{m-1} a_{n-j,m-k} \omega_n^{jp} \omega_m^{kq}\right) \\ & = \left(\sum_{j=0}^{n-1}\sum_{k=0}^{m-1} a_{jk} \omega_n^{jp} \omega_m^{kq}\right) + \left(\sum_{j=0}^{n-1}\sum_{k=0}^{m-1} a_{j,k} \omega_n^{-jp} \omega_m^{-kq}\right) \\ &= \sum_{j=0}^{n-1}\sum_{k=0}^{m-1} a_{jk} (\omega_n^{jp} \omega_m^{kq} + \overline{\omega_n^{jp} \omega_m^{kq}}) \\ &= 2\sum_{j=0}^{n-1}\sum_{k=0}^{m-1} a_{jk} \cos\left(2 \pi \left[\frac {jp}n + \frac{kq}{m}\right] \right) \end{align} (the first line rewrites the second copy of $\lambda_{p,q}$ using the symmetry $a_{n-j,m-k} = a_{j,k}$, and the second line reindexes $j \mapsto n-j$, $k \mapsto m-k$). In particular, each $\lambda_{p,q}$ is real, as it must be for a symmetric matrix, and the eigenvalues can be computed entirely in real arithmetic.
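A quick numerical sanity check of this identity (a sketch: the coefficient array is first symmetrized to satisfy $a_{n-j,m-k} = a_{j,k}$ with cyclic indices, and the cosine sum is compared against the complex exponential sum):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
B = rng.random((n, m))
# impose a_{(n-j) mod n, (m-k) mod m} = a_{j,k}  (the symmetry of A)
B = (B + np.roll(B[::-1, ::-1], (1, 1), axis=(0, 1))) / 2

# eigenvalues via the real cosine formula derived above
lam_cos = np.array([[sum(B[j, k] * np.cos(2 * np.pi * (j * p / n + k * q / m))
                         for j in range(n) for k in range(m))
                     for q in range(m)] for p in range(n)])

# compare with the complex exponential sum (2D DFT); its imaginary part
# vanishes because of the symmetry of B
F = np.fft.fft2(B)
```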