How is full pivoting more stable than partial pivoting?

I know the difference between partial and complete pivoting, and I've read that complete pivoting is more "stable" / offers more "stability". In what sense is it more "stable"?

There is a 1990 paper by Trefethen and Schreiber that analyzes the average-case stability of Gaussian elimination using random matrices. It notes at the outset that average-case analysis excludes worst-case possibilities: there are matrices, mentioned at the beginning of the paper, for which Gaussian elimination with partial pivoting has a very large growth factor. You can generate these matrices in Matlab; Nick Higham created a whole collection of test matrices. It looks like the following in Python (I don't have Matlab anymore).

import numpy as np
import scipy.linalg as slin

n = 8
# generate the matrix: 1 on the diagonal, -1 strictly below it, 1 in the last column
A = np.eye(n, n) - np.tril(np.ones(n), -1)
A[:, n - 1] = 1
P, L, U = slin.lu(A)

This generates the following matrix:

A
Out[19]: 
array([[ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  1.],
       [-1.,  1.,  0.,  0.,  0.,  0.,  0.,  1.],
       [-1., -1.,  1.,  0.,  0.,  0.,  0.,  1.],
       [-1., -1., -1.,  1.,  0.,  0.,  0.,  1.],
       [-1., -1., -1., -1.,  1.,  0.,  0.,  1.],
       [-1., -1., -1., -1., -1.,  1.,  0.,  1.],
       [-1., -1., -1., -1., -1., -1.,  1.,  1.],
       [-1., -1., -1., -1., -1., -1., -1.,  1.]])

L
Out[20]: 
array([[ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [-1.,  1.,  0.,  0.,  0.,  0.,  0.,  0.],
       [-1., -1.,  1.,  0.,  0.,  0.,  0.,  0.],
       [-1., -1., -1.,  1.,  0.,  0.,  0.,  0.],
       [-1., -1., -1., -1.,  1.,  0.,  0.,  0.],
       [-1., -1., -1., -1., -1.,  1.,  0.,  0.],
       [-1., -1., -1., -1., -1., -1.,  1.,  0.],
       [-1., -1., -1., -1., -1., -1., -1.,  1.]])

U
Out[21]: 
array([[  1.,   0.,   0.,   0.,   0.,   0.,   0.,   1.],
       [  0.,   1.,   0.,   0.,   0.,   0.,   0.,   2.],
       [  0.,   0.,   1.,   0.,   0.,   0.,   0.,   4.],
       [  0.,   0.,   0.,   1.,   0.,   0.,   0.,   8.],
       [  0.,   0.,   0.,   0.,   1.,   0.,   0.,  16.],
       [  0.,   0.,   0.,   0.,   0.,   1.,   0.,  32.],
       [  0.,   0.,   0.,   0.,   0.,   0.,   1.,  64.],
       [  0.,   0.,   0.,   0.,   0.,   0.,   0., 128.]])

You can then test the relative residual like this:

import numpy as np
import scipy.linalg as slin

n = 20
# generate the matrix
A = np.eye(n, n) - np.tril(np.ones(n), -1)
A[:, n - 1] = 1
P, L, U = slin.lu(A)

b = np.random.rand(n)

x = np.linalg.solve(A, b)
r = b - np.dot(A, x)

relative_residual = np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(x))

Keep increasing $n$ and see what happens.

How does full pivoting make things better? There is a quantity called the growth factor, denoted $\rho$:

$$ \rho = \frac{\max_{i,j,k} | a_{i,j}^{(k)}|}{\max_{i,j} |a_{i,j}| } $$

where for a matrix $A$ the element $a_{i,j}^{(k)}$ denotes the $(i,j)$ entry of the matrix after the $k$th step of the elimination. If $\rho$ is not too large, the elimination is deemed stable. The matrix above has a growth factor of $2^{n-1}$ under partial pivoting. You can see this in the last entry of $U$: the matrix size is $n=8$, and $2^{8-1} = 128$.
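You can check the growth factor directly. Below is a hand-rolled sketch of the elimination (the function name and structure are my own, not a library routine) that tracks the largest entry across all steps, under either pivoting strategy:

```python
import numpy as np

def growth_factor(A, complete=False):
    # Hand-rolled Gaussian elimination, tracking max |entry| over all
    # elimination steps. A sketch for illustration, not a production routine.
    U = A.astype(float).copy()
    n = U.shape[0]
    denom = np.abs(U).max()
    gmax = denom
    for k in range(n - 1):
        if complete:
            # complete pivoting: largest entry of the trailing submatrix
            sub = np.abs(U[k:, k:])
            i, j = np.unravel_index(sub.argmax(), sub.shape)
            U[[k, k + i]] = U[[k + i, k]]        # row swap
            U[:, [k, k + j]] = U[:, [k + j, k]]  # column swap
        else:
            # partial pivoting: largest entry in column k only
            i = np.abs(U[k:, k]).argmax()
            U[[k, k + i]] = U[[k + i, k]]
        for r in range(k + 1, n):
            U[r, k:] -= (U[r, k] / U[k, k]) * U[k, k:]
        gmax = max(gmax, np.abs(U).max())
    return gmax / denom

n = 8
A = np.eye(n) - np.tril(np.ones((n, n)), -1)
A[:, n - 1] = 1
print(growth_factor(A))                  # partial pivoting: 128.0 = 2^(n-1)
print(growth_factor(A, complete=True))   # complete pivoting: stays small
```

Partial pivoting leaves the rows alone here (the pivot column entries all tie at magnitude 1) and the last column doubles at every step, while complete pivoting immediately swaps the large entries into the pivot position and the growth stays bounded.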

So I believe that in the average case the two strategies behave almost the same, but in the worst case partial pivoting gives a much larger residual error, while complete pivoting keeps the growth factor, and hence the residual, small. That is the sense in which it is more stable.
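To see the difference in the residual itself, here is a rough complete-pivoting solver (again my own sketch, not a library routine; scipy's `lu` only does partial pivoting) compared against `np.linalg.solve` on the worst-case matrix:

```python
import numpy as np

def solve_complete_pivoting(A, b):
    # Gaussian elimination with complete pivoting: at each step pick the
    # largest entry of the trailing submatrix as pivot (row AND column swap).
    U = A.astype(float).copy()
    y = np.asarray(b, dtype=float).copy()
    n = U.shape[0]
    colperm = np.arange(n)
    for k in range(n - 1):
        sub = np.abs(U[k:, k:])
        i, j = np.unravel_index(sub.argmax(), sub.shape)
        U[[k, k + i]] = U[[k + i, k]]          # row swap (also applied to b)
        y[[k, k + i]] = y[[k + i, k]]
        U[:, [k, k + j]] = U[:, [k + j, k]]    # column swap (permutes unknowns)
        colperm[[k, k + j]] = colperm[[k + j, k]]
        for r in range(k + 1, n):
            m = U[r, k] / U[k, k]
            U[r, k:] -= m * U[k, k:]
            y[r] -= m * y[k]
    # back substitution, then undo the column permutation
    z = np.zeros(n)
    for k in range(n - 1, -1, -1):
        z[k] = (y[k] - U[k, k + 1:] @ z[k + 1:]) / U[k, k]
    x = np.zeros(n)
    x[colperm] = z
    return x

n = 70
A = np.eye(n) - np.tril(np.ones((n, n)), -1)
A[:, n - 1] = 1
b = np.random.rand(n)

x_pp = np.linalg.solve(A, b)            # partial pivoting (LAPACK)
x_cp = solve_complete_pivoting(A, b)    # complete pivoting
res_pp = np.linalg.norm(b - A @ x_pp) / (np.linalg.norm(A) * np.linalg.norm(x_pp))
res_cp = np.linalg.norm(b - A @ x_cp) / (np.linalg.norm(A) * np.linalg.norm(x_cp))
print(res_pp, res_cp)
```

At $n = 70$ the growth factor $2^{69}$ for partial pivoting exceeds $1/\epsilon_{\text{machine}}$, so the partial-pivoting residual is large, while the complete-pivoting residual stays near machine precision.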