Why is this having numerical stability issues?


The following is part of a paper I am reading.

Background: $D$ is the degree matrix of a graph. $A$ is the adjacency matrix of that graph.

$D$ is a diagonal matrix whose entry $D(i,i)$ is the sum of the elements in the $i$th row of $A$, and $A$ is a sparse non-negative matrix.

I do not understand why the matrix $I_N + D^{-1/2} A D^{-1/2}$ has stability issues when repeatedly applied to a vector $x$, and why simply changing it to the equivalent $\tilde D^{-1/2} \tilde A \tilde D^{-1/2}$ helps. The paper claims the numerical issue is due to the former matrix having eigenvalues in $[0,2]$, but the eigenvalues are still in $[0,2]$ after the transformation.

I am not in this area, but I know the concepts of numerical stability and the condition number. Any help is appreciated.
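For concreteness, here is a small NumPy sketch of the two operators. It assumes (as in the GCN paper this formula appears to come from) that $\tilde A = A + I_N$ and that $\tilde D$ is the degree matrix of $\tilde A$; the ring graph is just a convenient example:

```python
import numpy as np

n = 10
# Adjacency matrix of an undirected ring graph (sparse, non-negative).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

def sym_norm(M):
    """D^{-1/2} M D^{-1/2}, where D = diag(row sums of M)."""
    d = M.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ M @ D_inv_sqrt

P1 = np.eye(n) + sym_norm(A)   # I_N + D^{-1/2} A D^{-1/2}
A_t = A + np.eye(n)            # assumed renormalization: tilde A = A + I_N
P2 = sym_norm(A_t)             # tilde D^{-1/2} tilde A tilde D^{-1/2}

print(np.linalg.eigvalsh(P1))  # spectrum lies in [0, 2], top eigenvalue = 2
print(np.linalg.eigvalsh(P2))  # spectrum lies in (-1, 1], top eigenvalue = 1
```

On this graph the first operator attains the endpoint eigenvalue $2$, while the renormalized operator's spectrum is compressed into $(-1, 1]$, which is part of why repeated application behaves differently.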


1 Answer

Best answer:

It is not that important that the eigenvalues in both cases still belong to one and the same segment [0,2]. What does matter: the condition ratio, which is the ratio between the maximum and the minimum eigenvalues. The transformation suggested in the paper, apparently, was tested by the authors and found to be beneficial in terms of reducing the condition ratio significantly. As an example: suppose that in first case the maximum and the minimum eigenvalues equal 1.9999999 and 0.00000001 respectively, but after the transformation the maximum and minimum values are 1.9 and 0.1 . In the first case, the condition ratio is so large that even matrix-by-vector multiplication performed in double precision (16 decimal digits) won't have correct significant digits at all. In the second case, however, the matrix would be very well-conditioned, and the matrix-by-vector multiplication will deliver numerically excellent result.