Update solution of linear system after changing right-hand side


Suppose I found the solution of a linear system:

$$A \vec x = \vec y$$

where $A$ is an $n\times n$ real matrix and $\vec y \in \mathbb{R}^n$. You can assume that $A$ is positive definite and symmetric.

Now I want to solve a very similar linear system,

$$A \vec x' = \vec y'$$

where $\vec y'$ differs from $\vec y$ in only one coordinate, that is, $y'_i = y_i$ for all $i \ne m$, where $m$ is the updated coordinate. Is there an efficient way to obtain the new solution $\vec x'$?

I need to do something like this many times. By efficient I mean that I want to bypass inverting the matrix $A$, or solving the new system from scratch every time $\vec y$ changes in a single coordinate.

P.S. If you have the Numerical Recipes book, check out Equation 2.7.6, which is closely related to this, to see the kind of approach I am expecting.

There are 3 answers below


Apply Cramer's Rule to both $A\vec x = \vec y$ and $A\vec x' = \vec y'$ and compare the results.

Notice that the determinant appearing in the denominator of Cramer's Rule is $\det A$, which is the same for both systems.

For each coordinate of the solution, the two numerators are determinants that agree except in a single entry, so their difference is the cofactor of the changed entry multiplied by the net change $y'_m - y_m$.

Thus there is no need to find the inverse of A at all.
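In code, this observation amounts to precomputing the scaled cofactors $C_{mj}/\det A$, which are exactly the entries of column $m$ of $A^{-1}$ and can therefore be obtained with a single linear solve. A minimal numpy sketch (the function names and the 2×2 matrix are illustrative, not from the answer):

```python
import numpy as np

def precompute_update_column(A, m):
    """Scaled cofactors C_{mj}/det(A) for the changed coordinate m.
    These are exactly column m of A^{-1}, obtained by one linear solve."""
    e_m = np.zeros(A.shape[0])
    e_m[m] = 1.0
    return np.linalg.solve(A, e_m)

def update_solution(x, z, delta):
    """O(n) update: x' = x + (y'_m - y_m) * A^{-1} e_m."""
    return x + delta * z

# Made-up example: 2x2 symmetric positive definite system.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0])
x = np.linalg.solve(A, y)

m, delta = 0, 0.5            # change coordinate m of y by delta
z = precompute_update_column(A, m)
x_new = update_solution(x, z, delta)

y_new = y.copy()
y_new[m] += delta
```

After the one-time solve for `z`, each subsequent change to $y_m$ costs only $\mathcal{O}(n)$.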


For a smart implementation of Cramer's rule you have to precompute $n$ cofactors (assuming you always change the same coordinate $m$ of $\vec y$), each of which (practically) takes $\mathcal{O}(n^{2.7})$ time, so the prework is $\mathcal{O}(n^{3.7})$. The actual work per step is $\mathcal{O}(n^{2})$.

Let me propose the Cholesky factorization instead: $A=LL^T$, which exists since $A$ is symmetric positive definite. Then for each right-hand side $\vec y$, no matter how much it changes, you just have to do two triangular solves to solve $LL^T\vec x=\vec y$. This is $\mathcal{O}(n^{3})$ prework, $\mathcal{O}(n^{2})$ per iteration, and much easier to implement than Cramer's rule.
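With scipy this might look like the following sketch, using `scipy.linalg.cho_factor` and `cho_solve` (the matrix and right-hand sides are made-up examples):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Made-up symmetric positive definite matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

c = cho_factor(A)            # O(n^3) prework: A = L L^T

y1 = np.array([1.0, 2.0, 3.0])
x1 = cho_solve(c, y1)        # two triangular solves, O(n^2)

y2 = y1.copy()
y2[1] += 0.5                 # change a single coordinate
x2 = cho_solve(c, y2)        # reuse the cached factorization
```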


Numerically it is more efficient to solve a system $A x = y$ using LU decomposition or row reduction than to invert $A$ and form the product $x = A^{-1} y$.

But in your case, you are repeating this operation many times which means you can cache the results of the LU decomposition and apply them over and over again.

In pseudo code this is

[L,U] = LUCoef(A)
loop i=1,n
    z{i} = LowerSolve(L, y{i})
    x{i} = UpperSolve(U, z{i})
end loop

Of course, you have to worry about pivoting and about which variant of LU to use: the one with the unit diagonal on the $L$ matrix (Doolittle) or the one with the unit diagonal on the $U$ matrix (Crout).
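With a library handling the partial pivoting, the cached-factorization loop above might be sketched in Python as follows (assuming scipy is available; the matrix and right-hand sides are made-up examples):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)  # made-up, well-conditioned

lu, piv = lu_factor(A)       # [L,U] = LUCoef(A), with partial pivoting

# Reuse the cached factors for many right-hand sides.
ys = [rng.standard_normal(5) for _ in range(3)]
xs = [lu_solve((lu, piv), y) for y in ys]  # LowerSolve then UpperSolve
```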

If the size of $A$ is 5 or less, it is possible to get a direct inverse or system solve symbolically using a computer algebra system (CAS).

For example, a 2×2 system would have

function x = Solve2x2(A,y) 
    d = A(1,1)*A(2,2) - A(1,2)*A(2,1)       % get determinant
    x(1) = (A(2,2)*y(1) - A(1,2)*y(2))/d
    x(2) = (A(1,1)*y(2) - A(2,1)*y(1))/d
end

Example LU Decomposition

$$ A = \begin{bmatrix} 3 & -1 & \frac{3}{2} \\ 1 & 4 & 0 \\ \frac{1}{5} & -2 & 2 \end{bmatrix} $$

Unit diagonal on the lower triangular matrix (Doolittle variant)

$$ A = L\,U = \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{3} & 1 & 0 \\ \frac{1}{15} & -\frac{29}{65} & 1 \end{bmatrix}\,\begin{bmatrix} 3 & -1 & \frac{3}{2} \\ 0 & \frac{13}{3} & -\frac{1}{2} \\ 0 & 0 & \frac{109}{65} \end{bmatrix} $$

Unit diagonal on the upper triangular matrix (Crout variant)

$$ A = L\,U = \begin{bmatrix} 3 & 0 & 0 \\ 1 & \frac{13}{3} & 0 \\ \frac{1}{5} & -\frac{29}{15} & \frac{109}{65} \end{bmatrix}\,\begin{bmatrix} 1 & -\frac{1}{3} & \frac{1}{2} \\ 0 & 1 & -\frac{3}{26} \\ 0 & 0 & 1 \end{bmatrix} $$
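Both factorizations can be sanity-checked numerically; a quick sketch assuming numpy:

```python
import numpy as np

A = np.array([[3.0, -1.0, 1.5],
              [1.0, 4.0, 0.0],
              [0.2, -2.0, 2.0]])

# Unit diagonal on L
L1 = np.array([[1.0, 0.0, 0.0],
               [1/3, 1.0, 0.0],
               [1/15, -29/65, 1.0]])
U1 = np.array([[3.0, -1.0, 1.5],
               [0.0, 13/3, -0.5],
               [0.0, 0.0, 109/65]])

# Unit diagonal on U
L2 = np.array([[3.0, 0.0, 0.0],
               [1.0, 13/3, 0.0],
               [0.2, -29/15, 109/65]])
U2 = np.array([[1.0, -1/3, 0.5],
               [0.0, 1.0, -3/26],
               [0.0, 0.0, 1.0]])
```

Multiplying either pair of factors reproduces $A$.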

The details of LowerSolve() and UpperSolve() depend on which LU variant you chose; a popular option for computing the factorization is the Crout method.
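As an illustration, forward and back substitution for the unit-diagonal-on-$L$ variant might be sketched as follows (hypothetical implementations of LowerSolve/UpperSolve, ignoring pivoting for brevity):

```python
import numpy as np

def lower_solve(L, b):
    """Forward substitution: solve L z = b, L lower triangular."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def upper_solve(U, z):
    """Back substitution: solve U x = z, U upper triangular."""
    n = len(z)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Factors from the example above (unit diagonal on L).
L = np.array([[1.0, 0.0, 0.0],
              [1/3, 1.0, 0.0],
              [1/15, -29/65, 1.0]])
U = np.array([[3.0, -1.0, 1.5],
              [0.0, 13/3, -0.5],
              [0.0, 0.0, 109/65]])
y = np.array([1.0, 2.0, 3.0])
x = upper_solve(U, lower_solve(L, y))
```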