Numerical stability of solving with and without pivoting for lower triangular matrices with 1's on the diagonal?


Assume that we are going to solve $Ax=b$ where $A$ is lower triangular with $1$'s on the diagonal. That sounds perfect! Easy to solve.

That can be solved with forward substitution: $$x_k = b_k - \sum_{i=0}^{k-1}A(k,i)x_i,$$

where $x_0 = b_0$.
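As a minimal sketch (function name and test values are mine, not from the question), forward substitution for a unit lower triangular system looks like this in NumPy:

```python
import numpy as np

def forward_substitution_unit(A, b):
    """Solve A x = b for unit lower triangular A by forward substitution."""
    n = len(b)
    x = np.zeros(n)
    for k in range(n):
        # the diagonal entry is 1, so no division is needed
        x[k] = b[k] - A[k, :k] @ x[:k]
    return x

# quick check against a known solution
A = np.tril(np.random.default_rng(0).normal(size=(4, 4)), -1) + np.eye(4)
x_true = np.arange(1.0, 5.0)
b = A @ x_true
print(np.allclose(forward_substitution_unit(A, b), x_true))  # True
```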

But then I read this report about Observer Kalman Filter identification at page 324. http://people.duke.edu/~hpgavin/SystemID/References/Juang+Phan+etal-JGCD-1993.pdf


So if $A$ is lower triangular with $1$'s on the diagonal, is it still better to solve with QR, LUP, or SVD, even though I can solve it with forward substitution?

Here is an example. The red line should fade to zero, but it doesn't.


The formula that causes this trouble:

$$Y_k^o = -Y^{(2)}_k + \sum_{i = 0}^{k-1}Y^{(2)}_i Y^o_{k-i-1}$$
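For what it's worth, the recursion as written is itself a forward substitution with a unit triangular coefficient matrix. A minimal scalar sketch (the function name and the scalar simplification are mine, not from the paper):

```python
def recover_Yo(Y2):
    """Evaluate Y^o_k = -Y^(2)_k + sum_{i=0}^{k-1} Y^(2)_i * Y^o_{k-i-1}
    for scalar sequences; for MIMO systems the entries would be matrices."""
    Yo = []
    for k in range(len(Y2)):
        s = -Y2[k]
        for i in range(k):
            s += Y2[i] * Yo[k - i - 1]
        Yo.append(s)
    return Yo

print(recover_Yo([1.0, 0.5]))  # [-1.0, -1.5]
```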

Best answer:

A nonsingular lower triangular linear system $Lx = b$ can be solved using forward substitution. If $L$ is an $n$ by $n$ nonsingular lower triangular matrix, then the computed solution $\hat{x}$ satisfies a perturbed linear system $$(L + \Delta L)\hat{x} = f,$$ where $$|\Delta L| \leq \gamma_{n} |L|, \quad \gamma_n = \frac{nu}{1-nu}$$ and $u$ is the unit roundoff. This inequality should be understood in the componentwise sense, i.e., $$|(\Delta L)_{ij}| \leq \gamma_{n} |L_{ij}|.$$ We see that forward substitution produces the exact solution of a problem which is very close to our problem. Formally, we say that forward substitution is componentwise relative backward stable. This is as good as it gets in numerical analysis. In particular, there is no reason to deploy solvers which are based on $QR$ or $LU$ factorization of $L$.
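This backward stability claim is easy to check experimentally. The following sketch (assuming NumPy; the test matrix is my own random example) measures the smallest componentwise backward error of a computed solution via the Oettli–Prager formula and compares it with $\gamma_n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
L = np.tril(rng.normal(size=(n, n)), -1) + np.eye(n)  # unit lower triangular
x = rng.normal(size=n)
b = L @ x

# forward substitution (diagonal is 1, no divisions)
x_hat = b.copy()
for k in range(n):
    x_hat[k] -= L[k, :k] @ x_hat[:k]

# Oettli-Prager: the smallest eps with (L + dL) x_hat = b, |dL| <= eps |L|,
# is max_i |r_i| / (|L| |x_hat|)_i
r = b - L @ x_hat
eta = np.max(np.abs(r) / (np.abs(L) @ np.abs(x_hat)))

u = np.finfo(float).eps / 2  # unit roundoff
gamma_n = n * u / (1 - n * u)
print(eta <= gamma_n)  # True: the bound from the analysis holds
```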

There is no guarantee that the computed solution $\hat{x}$ is close to the true solution $x$, unless the appropriate condition number is small. In this context, Skeel's condition numbers $\text{cond}(L,x)$ and $\text{cond}(L)$ are suitable. By definition, we have $$\text{cond}(L,x) = \frac{\| |L| |L^{-1}| |x| \|_\infty}{\|x\|_\infty}$$ and $$\text{cond}(L) = \| |L| |L^{-1}| \|_\infty.$$ In terms of Skeel's condition numbers, we have the forward relative error bound $$ \frac{\|x - \hat{x}\|_\infty}{\|x\|_\infty} \leq \frac{\text{cond}(L,x) \gamma_n}{1 - \text{cond}(L)\gamma_n}.$$ This bound can be significantly better than the standard error bound which involves the classical condition number $\kappa_\infty(L) = \|L\|_\infty \|L^{-1}\|_\infty$.
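The Skeel condition numbers and the forward error bound can also be evaluated directly. A sketch (assuming NumPy; explicitly inverting $L$ is fine for a small experiment, though one would avoid it in production):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
L = np.tril(rng.normal(size=(n, n)), -1) + np.eye(n)
x = rng.normal(size=n)
b = L @ x

# forward substitution
x_hat = b.copy()
for k in range(n):
    x_hat[k] -= L[k, :k] @ x_hat[:k]

# Skeel condition numbers (inf-norms; |.| is taken componentwise)
absLinv = np.abs(np.linalg.inv(L))
cond_L_x = np.linalg.norm(np.abs(L) @ absLinv @ np.abs(x), np.inf) \
           / np.linalg.norm(x, np.inf)
cond_L = np.linalg.norm(np.abs(L) @ absLinv, np.inf)

u = np.finfo(float).eps / 2
gamma_n = n * u / (1 - n * u)
fwd_err = np.linalg.norm(x - x_hat, np.inf) / np.linalg.norm(x, np.inf)
bound = cond_L_x * gamma_n / (1 - cond_L * gamma_n)
print(fwd_err <= bound)  # True: the forward error respects the bound
```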

The results quoted here can all be found in Higham's book "Accuracy and Stability of Numerical Algorithms". Free versions of the 2nd edition are readily available online. The central chapters are 3, 7, and 8.


I have not read the paper cited in the original question. I suspect that the problem you are experiencing is not in your linear solver. In your case, I would test the solver by constructing linear systems with known solutions and verifying that the computed solutions satisfy the bound given above. This is done as follows: choose $x$, compute $f = Lx$, and feed $L$ and $f$ to your solver to produce $\hat{x} \approx x$.
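This testing procedure might be packaged as a small harness like the following sketch (assuming NumPy; the function names and parameters are mine). It builds many random unit lower triangular systems with known solutions and checks the componentwise backward-error bound for each:

```python
import numpy as np

def forward_solve(L, f):
    """Forward substitution for unit lower triangular L."""
    x = f.astype(float).copy()
    for k in range(len(f)):
        x[k] -= L[k, :k] @ x[:k]
    return x

def verify_solver(solve, n=40, trials=100, seed=0):
    """Check that solve(L, f) meets the componentwise backward-error
    bound eta <= gamma_n on random unit lower triangular systems."""
    rng = np.random.default_rng(seed)
    u = np.finfo(float).eps / 2
    gamma_n = n * u / (1 - n * u)
    for _ in range(trials):
        L = np.tril(rng.normal(size=(n, n)), -1) + np.eye(n)
        x = rng.normal(size=n)
        f = L @ x                       # known right-hand side
        x_hat = solve(L, f)
        eta = np.max(np.abs(f - L @ x_hat) / (np.abs(L) @ np.abs(x_hat)))
        if eta > gamma_n:
            return False
    return True

print(verify_solver(forward_solve))  # True
```

If the harness passes but the simulated response still diverges, the trouble is more likely in the formula being evaluated (or its conditioning) than in the solver itself.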