Computing inverse elements of symbolic matrices with binary variables


I'm working with symmetric, symbolic matrices $A$ whose entries are real linear combinations of binary variables, like

$$ A = \begin{pmatrix} 0.5x_0 & 0.3x_1+0.002x_2 & 0 & 0 \\ 0.3x_1 + 0.002x_2 & 0.3x_1 & 0.2x_2 - 0.1x_1 & 0 \\ 0 & 0.2x_2 - 0.1x_1 & 0.4x_1+0.1x_0 & 0.004x_1 - 0.5x_2 \\ 0 & 0 & 0.004x_1 - 0.5x_2 & 0.3x_2 - 0.2x_1 \end{pmatrix}, $$

where $x_i \in \{0,1\}$. To solve my problem I need only $\mathcal{O}(1)$ entries of the inverse matrix; in the simplest setup, just the $(A^{-1})_{1,1}$ element.
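For concreteness, the example matrix can be set up in sympy like this (a minimal sketch; the symbol names `x0, x1, x2` are my own choice). For a fixed binary assignment the target entry is cheap to evaluate numerically; the difficulty is the fully symbolic case:

```python
import sympy as sp

x0, x1, x2 = sp.symbols("x0 x1 x2")

# The example matrix from above: symmetric, linear in the binary variables.
A = sp.Matrix([
    [0.5*x0,            0.3*x1 + 0.002*x2, 0,                 0],
    [0.3*x1 + 0.002*x2, 0.3*x1,            0.2*x2 - 0.1*x1,   0],
    [0,                 0.2*x2 - 0.1*x1,   0.4*x1 + 0.1*x0,   0.004*x1 - 0.5*x2],
    [0,                 0,                 0.004*x1 - 0.5*x2, 0.3*x2 - 0.2*x1],
])

assert A.is_symmetric()

# Target quantity: the (1,1) entry of the inverse (0-indexed in sympy: [0, 0]).
# Here evaluated at one binary assignment where A happens to be invertible.
entry = A.subs({x0: 1, x1: 1, x2: 1}).inv()[0, 0]
```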

What I tried:

  1. Computing the determinant of a submatrix: by the cofactor formula, $(A^{-1})_{1,1} \propto \det A_{[2,\dots,N;\,2,\dots,N]}$, and the denominator $\det A$ is irrelevant for me when only a single entry is needed. This works nicely for smallish matrices, up to about $A \in \mathbb{R}^{15\times 15}$, but for anything larger the computation does not finish within days using the sympy package for Python. Granted, there may be more efficient symbolic solvers, but the code that produces the matrices is in Python and converting seems like a hassle. If the problem is ultimately the choice of framework, I'm happy to hear suggestions.
  2. Finding a block decomposition. This seems impossible for my kind of matrix, but I don't know how to check whether that is true in general.
  3. An LU solve for a right-hand side with only a single nonzero entry, e.g. $b = (b_0, 0, 0, 0)$ for the example above. This behaves worse than approach 1, since the solution vector is much larger than the number of binary variables in the matrix itself.
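For reference, approach 1 fits in a few lines of sympy. The helper `make_multilinear` (my own name) collapses $x_i^n \to x_i$, which is valid pointwise on $\{0,1\}$ and keeps the expanded determinant multilinear:

```python
import sympy as sp

x0, x1, x2 = sp.symbols("x0 x1 x2")
xs = [x0, x1, x2]

# The example matrix from the question.
A = sp.Matrix([
    [0.5*x0,            0.3*x1 + 0.002*x2, 0,                 0],
    [0.3*x1 + 0.002*x2, 0.3*x1,            0.2*x2 - 0.1*x1,   0],
    [0,                 0.2*x2 - 0.1*x1,   0.4*x1 + 0.1*x0,   0.004*x1 - 0.5*x2],
    [0,                 0,                 0.004*x1 - 0.5*x2, 0.3*x2 - 0.2*x1],
])

def make_multilinear(expr, variables):
    """Replace x**n by x for each binary variable (x**n == x when x in {0, 1})."""
    expr = sp.expand(expr)
    for x in variables:
        expr = expr.replace(
            lambda e: e.is_Pow and e.base == x and e.exp.is_Integer and e.exp > 1,
            lambda e: x)
    return sp.expand(expr)

# (A^{-1})_{1,1} is proportional to the determinant of the trailing submatrix
# (cofactor formula); the denominator det(A) is dropped.
sub = A[1:, 1:]
numerator = make_multilinear(sub.det(), xs)
```

Applying the reduction after every expansion keeps the number of distinct monomials bounded by $2^k$ for $k$ binary variables, instead of letting the powers grow with the matrix size.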

My current idea is to use an incomplete LU factorization (ILU), which is usually employed as a preconditioning technique for linear algebra solvers. The idea of a preconditioner is to approximate the inverse of a matrix, so my hope is that I can perform one round of ILU, simplify the system by eliminating powers of the binary variables ($x_i^n = x_i$ for $x_i \in \{0,1\}$), and then iterate to obtain an approximate inverse that is (hopefully) sufficient for my purpose.
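To make that idea concrete, here is a sketch of a symbolic ILU(0) (function names are mine): Gaussian elimination that drops all fill-in outside the original sparsity pattern, with the power reduction applied after every update. For the tridiagonal-like example matrix ILU(0) produces no fill-in at all, so it coincides with an exact LU there:

```python
import sympy as sp

x0, x1, x2 = sp.symbols("x0 x1 x2")
xs = [x0, x1, x2]

# The example matrix from the question.
A = sp.Matrix([
    [0.5*x0,            0.3*x1 + 0.002*x2, 0,                 0],
    [0.3*x1 + 0.002*x2, 0.3*x1,            0.2*x2 - 0.1*x1,   0],
    [0,                 0.2*x2 - 0.1*x1,   0.4*x1 + 0.1*x0,   0.004*x1 - 0.5*x2],
    [0,                 0,                 0.004*x1 - 0.5*x2, 0.3*x2 - 0.2*x1],
])

def reduce_powers(expr, variables):
    """Collapse x**n -> x (valid pointwise for x in {0, 1})."""
    expr = sp.expand(expr)
    for x in variables:
        expr = expr.replace(
            lambda e: e.is_Pow and e.base == x and e.exp.is_Integer and e.exp > 1,
            lambda e: x)
    return expr

def symbolic_ilu0(A, variables):
    """ILU(0) without pivoting: eliminate as in Gaussian elimination, but only
    update entries that are nonzero in A (all other fill-in is dropped), and
    reduce powers of the binary variables after every update.  The strict
    lower triangle of the result holds the multipliers (L with unit diagonal);
    the diagonal and upper triangle hold U."""
    n = A.rows
    F = A.as_mutable()
    pattern = {(i, j) for i in range(n) for j in range(n) if A[i, j] != 0}
    for k in range(n - 1):
        for i in range(k + 1, n):
            if (i, k) not in pattern:
                continue
            F[i, k] = sp.cancel(F[i, k] / F[k, k])  # multiplier
            for j in range(k + 1, n):
                if (i, j) in pattern:
                    F[i, j] = reduce_powers(F[i, j] - F[i, k] * F[k, j], variables)
    return F

F = symbolic_ilu0(A, xs)
```

Note that the multipliers are rational functions of the $x_i$, so the reduction only tames the numerators; whether the expression growth stays manageable at $20\times 20$ would have to be tried.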

Is there a better approach that exploits the fact that the $x_i$ are binary? Is that even possible? And is there a way to efficiently solve a linear equation system when only the first entry of the solution vector is needed?

I feel like there should be some way, given that the matrix is quite sparse (184 of the 400 entries of the $20\times 20$ matrix in my problem are zero), the variables are binary, and all coefficients $c_j$ satisfy $0 \leq |c_{j}| \leq 1$. On the other hand, symbolic matrix computations seem to produce an insane number of terms, with coefficients spanning a wide range of orders of magnitude.