I have a linear system in which I know $\left[A\right]$ and $\left[B\right]$, and I want to find $\left[X\right]$:
$$ \left[A\right]_{(n+m) \times(n+m) } \cdot \left[X\right]_{n+m} = \left[B\right]_{n+m } $$
Unfortunately, $\det(A) = 0$, so I cannot solve it by inverting $A$.
Of the eigenvalues of $A$, $m$ are zero and the other $n$ are non-zero.
So I am looking for a way to transform $A$ and $B$ into
$$ \left[A\right] = \begin{bmatrix} \left[0\right]_{m \times m} & \left[0\right]_{m \times n} \\ \left[0\right]_{n \times m} & \left[A_{22}\right]_{n \times n} \end{bmatrix}; \ \ \ \ \ \ \ \ \left[B\right] = \begin{bmatrix} \left[0\right]_{m}\\ \left[B_{2}\right]_{n} \end{bmatrix} $$
And then find $\left[X_{2}\right]_{n}$ such that
$$ \left[A_{22}\right] \cdot \left[X_{2}\right] = \left[B_{2}\right] $$
And assemble
$$ \left[X\right] = \begin{bmatrix}[\underbrace{X_1}_{0}] \\ \left[X_2\right] \end{bmatrix} $$
My question is: how can I find $\left[A_{22}\right]$ and $\left[B_{2}\right]$ such that $\det(A_{22}) \ne 0$, while preserving the original equation $AX = B$?
Example 1:
$$ A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix} \ \ \ \ \ \ \ B = \begin{bmatrix} 0 \\ 3 \\ 5 \end{bmatrix} \Rightarrow X = \dfrac{1}{3}\begin{bmatrix} 0 \\ 1 \\ 7 \end{bmatrix} $$
Example 2:
$$ A = \begin{bmatrix} 2 & 1 & -1 \\ 1 & 2 & 1 \\ -1 & 1 & 2 \end{bmatrix} \ \ \ \ \ \ \ B = \begin{bmatrix} 10 \\ 14 \\ 4 \end{bmatrix} \Rightarrow X = \begin{bmatrix} 0 \\ 8 \\ -2 \end{bmatrix} $$
PS: I'm looking for a Python algorithm to do this. I thought about computing the eigenvalues, keeping the non-zero ones, building the matrix $A_{22}$ from the corresponding eigenvectors, and then solving the reduced system. But that seems costly.
What you're describing is essentially a classic way to solve a rank-deficient linear system $AX = B$ (that is, $A$ is square but has a nontrivial null space): find the minimum-norm least-squares solution. If your linear system has a genuine solution, this procedure recovers one as well.
Mathematically, this can be computed using the Moore-Penrose pseudoinverse. This is the linear transformation that maps the range of $A$ onto the orthogonal complement of its null space (the map between these two subspaces is an isomorphism and represents the inverse of the $A_{22}$ you described in your question), and sends the orthogonal complement of the range to $0$.
Algorithmically, this can be computed using the QR factorization or the Singular Value Decomposition; I'll describe the SVD approach here. Given the factorization $A = U \Sigma V^*$, where $U,V$ are unitary matrices and $\Sigma$ is a diagonal matrix of singular values, the pseudoinverse is $$ A^{+} = V \Sigma^{\dagger} U^*, \quad \Sigma^{\dagger} = \mathrm{diag}(\sigma_1^{-1}, \ldots, \sigma_k^{-1}, 0, \ldots, 0), $$ where $\sigma_1, \ldots, \sigma_k$ are the non-zero singular values of $A$.
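A minimal sketch of this SVD route with NumPy (the function name `pinv_solve` and the relative tolerance `tol` for deciding which singular values count as zero are my own choices, not part of your problem statement):

```python
import numpy as np

def pinv_solve(A, B, tol=1e-12):
    """Minimum-norm least-squares solution of A X = B via the SVD."""
    U, s, Vh = np.linalg.svd(A)  # A = U @ diag(s) @ Vh
    # Invert only the singular values that are effectively non-zero;
    # the inner np.where guards against division by exact zeros.
    s_inv = np.where(s > tol * s.max(), 1.0 / np.where(s > 0, s, 1.0), 0.0)
    # Apply the pseudoinverse: V @ Sigma^+ @ U^* @ B
    return Vh.T.conj() @ (s_inv * (U.T.conj() @ B))

# Example 1 from the question: the first row and column of A are zero.
A = np.array([[0., 0., 0.],
              [0., 2., 1.],
              [0., 1., 2.]])
B = np.array([0., 3., 5.])
X = pinv_solve(A, B)  # -> [0, 1/3, 7/3], matching the expected X
```

Since only the non-zero singular values are inverted, this implicitly performs the reduction to $A_{22}$ in the (orthonormal) singular-vector basis, without ever forming the blocks explicitly.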
In Python, such algorithms are implemented as linear least-squares solvers; see for example `numpy.linalg.lstsq` in NumPy.
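For instance, `numpy.linalg.lstsq` handles your second example directly. One caveat worth noting: for Example 2 the minimum-norm solution is $[10/3,\, 14/3,\, 4/3]$, which satisfies $AX = B$ but differs from the $X = [0, 8, -2]$ listed in the question; since $A$ is singular, solutions are only unique up to the null space, and the pseudoinverse picks the one orthogonal to it.

```python
import numpy as np

# Example 2 from the question: A is singular but the system is consistent.
A = np.array([[ 2., 1., -1.],
              [ 1., 2.,  1.],
              [-1., 1.,  2.]])
B = np.array([10., 14., 4.])

# rcond=None uses a machine-precision-based cutoff for small singular values.
X, residuals, rank, sv = np.linalg.lstsq(A, B, rcond=None)
# rank is 2 here (one singular value of A is numerically zero),
# and X is the minimum-norm solution [10/3, 14/3, 4/3].
```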