Computer tools to solve linear system of equations with singular matrix


According to the theory I've read, if $A$ is singular, the equation $A\vec{x}=\vec{b}$ will have either zero or infinitely many solutions. I tried solving this equation for: $$A=\begin{bmatrix} 4 & 4 & 3 \\ -4 & -4 & -3 \\ 3 & 3 & 3 \end{bmatrix}, \quad \vec{b}=\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}$$ Solving by hand gives $$\vec{x} = x_2 \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.$$ So one solution, taking $x_2 = 0$, is $[-1, 0, 1]^T$, which works.

When I try to solve it using WolframAlpha, here, it says no solution exists. When I try to solve it in Python using np.linalg.solve, I get LinAlgError: Singular matrix.
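For reference, a minimal snippet (using only numpy) that reproduces the error and confirms numerically that the hand-derived solution does satisfy the system:

```python
import numpy as np

A = np.array([[4., 4., 3.],
              [-4., -4., -3.],
              [3., 3., 3.]])
b = np.array([-1., 1., 0.])

# np.linalg.solve requires a nonsingular matrix, so it raises here
try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError as err:
    print(err)  # Singular matrix

# ...yet the hand-derived particular solution still satisfies Ax = b
x = np.array([-1., 0., 1.])
print(np.allclose(A @ x, b))  # True
```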

How can I solve this type of equation for singular matrices using Python or WolframAlpha? Why do several computer programs have problems with this kind of equation?

There are 3 answers below

BEST ANSWER

For symbolic computation, the easiest way is to use the reduced row echelon form (rref):

In [74]: from sympy import Matrix
In [75]: Matrix([[4,4,3,-1],[-4,-4,-3, 1],[3,3,3,0]]).rref()[0]                                                                                                        
Out[75]: Matrix([
         [1, 1, 0, -1],
         [0, 0, 1,  1],
         [0, 0, 0,  0]])

And in Wolfram Alpha like this: rref{{4,4,3,-1},{-4,-4,-3,1},{3,3,3,0}}

Taking the LU decomposition only gets you halfway there, and it is easier to use rref in my opinion.
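If you prefer sympy to hand you the parametric solution directly, rather than reading it off the rref yourself, linsolve does that (a sketch; the free parameter appears as x2):

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3 = symbols('x1 x2 x3')
A = Matrix([[4, 4, 3], [-4, -4, -3], [3, 3, 3]])
b = Matrix([-1, 1, 0])

# linsolve returns the full parametric solution set;
# x2 remains as the free parameter
sol = linsolve((A, b), x1, x2, x3)
print(sol)  # {(-x2 - 1, x2, 1)}
```

Substituting $x_2 = 0$ recovers the particular solution $[-1, 0, 1]^T$ from the question.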

For numerical (floating point) computation, which is what numpy uses, the two other answers give the best approach. This is because floating point calculations don't give exact answers, so some way to get an approximation is needed. This is done with different variants of the least-squares method.

Here is a discussion about why numpy does not include rref.

ANSWER

One way to solve such a problem is to ask for the solution $x$ with the smallest norm. The solution of $\min\{x^Tx : Ax=b\}$ can be obtained via the Lagrangian, and corresponds to the solution of: \begin{align} \begin{pmatrix}2I & A^T \\ A & O \end{pmatrix}\begin{pmatrix}x \\ \lambda\end{pmatrix} = \begin{pmatrix}0 \\ b\end{pmatrix} \end{align} For the general solution, you could compute the LU decomposition of $A$, and take it from there.
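A sketch of this approach in numpy (note that the KKT matrix inherits the singularity of $A$ here, so I apply lstsq to it rather than solve):

```python
import numpy as np

A = np.array([[4., 4., 3.],
              [-4., -4., -3.],
              [3., 3., 3.]])
b = np.array([-1., 1., 0.])
m, n = A.shape

# KKT system for min x^T x subject to Ax = b:
# [2I A^T; A 0] [x; lambda] = [0; b]
K = np.block([[2.0 * np.eye(n), A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), b])

# K is singular whenever A is, so solve it in the least-squares sense
sol, *_ = np.linalg.lstsq(K, rhs, rcond=None)
x = sol[:n]
print(x)  # minimum-norm solution, approx. [-0.5, -0.5, 1.0]
```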

ANSWER

There are many ways to solve this problem; I will use the concept of a pseudo-inverse, described here (https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse).

import numpy as np

# initialize singular matrix and rhs vector
A = np.array([[4., 4., 3.],
              [-4., -4., -3.],
              [3., 3., 3.]])
b = np.array([-1., 1., 0.])

# compute the pseudo inverse of A
# computationally intensive for large problems
Apinv = np.linalg.pinv(A)

# apply the pseudo-inverse to the rhs
# vector to obtain the `solution'
x = Apinv.dot(b)
print(x)

When I ran the above code, I obtained the solution [-0.5, -0.5, 1.0], which can be verified by hand to be a solution. You can use the same method in Mathematica by calling the PseudoInverse[] function (https://reference.wolfram.com/language/ref/PseudoInverse.html): pass the matrix as the argument and the pseudoinverse is returned.

A comment on square systems: while a singular matrix can give a linear system with nonunique solutions, the pseudo-inverse of a singular matrix is itself unique.
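As a cross-check, np.linalg.lstsq (which, like pinv, works via an SVD and returns the minimum-norm least-squares solution) should give the same answer without forming the pseudo-inverse explicitly:

```python
import numpy as np

A = np.array([[4., 4., 3.],
              [-4., -4., -3.],
              [3., 3., 3.]])
b = np.array([-1., 1., 0.])

# lstsq returns the minimum-norm least-squares solution,
# which for a consistent system matches pinv(A) @ b
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)     # approx. [-0.5, -0.5, 1.0]
print(rank)  # 2 -- confirms A is rank-deficient
```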