For a numerical problem I need the generalized eigendecomposition
$$\boldsymbol{A} = \boldsymbol{B P D P}^{-1}.$$
In some cases, however, the matrix $\boldsymbol{B}$ becomes singular and I get infinite eigenvalues. As far as I understand, the generalized eigendecomposition should also work for singular matrices. Otherwise I could just use the regular decomposition of the equivalent problem
$$\boldsymbol{B}^{-1}\boldsymbol{A} = \boldsymbol{P D P}^{-1}.$$
In my case $\boldsymbol{B}$ is a Hermitian matrix and typically real, and $\boldsymbol{A}$ is a complex symmetric matrix. To be precise, consider the example:
$$\boldsymbol{B} = \begin{pmatrix} 1 & 1 \\ 1 & 1\end{pmatrix} \quad \boldsymbol{A} = \begin{pmatrix} 1+i & 2i \\ 2i & 1+ i \end{pmatrix}. $$
scipy gives me
import numpy as np
from scipy import linalg
a = np.array([[1+1j, 2j], [2j, 1+1j]])
b = np.array([[1, 1], [1, 1]])
linalg.eig(a, b)
# (array([0.5+1.5j, inf+0.j ]),
# array([[-0.70710678-5.55111512e-17j, -0.70710678-5.55111512e-17j],
# [-0.70710678+5.55111512e-17j, 0.70710678+5.55111512e-17j]]))
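An infinite eigenvalue signals a vector $v$ in the null space of $\boldsymbol{B}$ for which $\boldsymbol{A}v \neq 0$: the pencil equation $\boldsymbol{A}v = \lambda\boldsymbol{B}v$ then only holds "at $\lambda = \infty$". SciPy (≥ 0.19) exposes this through the homogeneous form $\beta\,\boldsymbol{A}v = \alpha\,\boldsymbol{B}v$, where $\lambda = \alpha/\beta$ and $\beta = 0$ marks an infinite eigenvalue. A small sketch with the matrices above:

```python
import numpy as np
from scipy import linalg

a = np.array([[1+1j, 2j], [2j, 1+1j]])
b = np.array([[1, 1], [1, 1]])

# Homogeneous eigenvalues: each eigenvalue is a pair (alpha, beta) with
# beta * A v = alpha * B v. A finite eigenvalue is alpha/beta; beta == 0
# encodes an infinite eigenvalue instead of producing inf directly.
ab, v = linalg.eig(a, b, homogeneous_eigvals=True)
alpha, beta = ab

# The eigenvector belonging to the (numerically) zero beta lies in the
# null space of B, which is exactly why the eigenvalue diverges.
k = np.argmin(np.abs(beta))
print(np.allclose(b @ v[:, k], 0))
```

Working with the $(\alpha, \beta)$ pairs instead of $\lambda = \alpha/\beta$ is one common way to keep algorithms well defined when $\boldsymbol{B}$ is singular.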
My algorithm fails because of the infinite eigenvalue inf+0.j, so I would like to understand the meaning of these infinite eigenvalues in order to find a way to work around them.
Edit: the problem to solve
The actual problem I need to solve is of the form
$$ \boldsymbol{C}(v) = \left[\int dx \int dy \int dz (\boldsymbol{A}(v) - \boldsymbol{B}f(x,y,z))^{-1}\right]^{-1} $$
If the generalized eigendecomposition can be computed, this problem can be solved as
$$\boldsymbol{C}(v) = \boldsymbol{B}\boldsymbol{P}(v)\boldsymbol{I}(v)\boldsymbol{P}(v)^{-1}$$
with the diagonal matrix
$$ I_{ij}(v) = \delta_{ij} \frac{1}{\int dx \int dy \int dz \frac{1}{D_{ii}(v)-f(x,y,z)}}$$
with a known solution for the elements $I_{ii}(v)$. A brute-force solution is not possible: it would require $\sim 10^{12}$ matrix inversions.
Symmetric matrices exist of every rank, including $1$, so it is not the symmetry that is the problem.
If you do a regular eigenvalue decomposition on $\bf A$ you will find it has $2$ non-zero eigenvalues.
$\bf B$ only has $1$ non-zero eigenvalue. Multiplying by $\bf B$ on the left always removes the dimension corresponding to its $0$ eigenvalue from whatever matrix stands to the right of it. So even if ${\bf PDP}^{-1}$ is full rank, the product with $\bf B$ has rank at most $1$ and can never equal the rank-$2$ matrix $\bf A$.
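This rank argument is easy to confirm numerically on the matrices from the question:

```python
import numpy as np

B = np.array([[1.0, 1.0], [1.0, 1.0]])
A = np.array([[1+1j, 2j], [2j, 1+1j]])

# rank(B) = 1 (one zero eigenvalue), while A is full rank (det A = 4+2j != 0).
# Any product B @ M has rank <= rank(B), so B P D P^{-1} with finite D can
# never reproduce the rank-2 matrix A -- hence the infinite eigenvalue.
print(np.linalg.matrix_rank(B), np.linalg.matrix_rank(A))
```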