How can I compute recursive QR-factorization?


I wonder if it's possible to obtain the new $Q$ and $R$ factors of the QR factorization without recomputing the whole factorization from scratch:

$$A = QR$$

when the first column of $A$ is removed and a new column is appended to $A$.

Let me restate it: assume we start with a data matrix $A$ and compute $Q$ and $R$. Then we change $A$ by removing its first column and appending a new data column. That means $Q$ and $R$ change as well. Can we compute the new $Q$ and $R$ from the new data column together with the previous $Q$ and $R$?

In case you wonder where this question comes from: it is from the paper Recursive Subspace Identification Algorithm using the Propagator Based Method, 2017.

Accepted answer:

Yes, you can do exactly that using Givens rotations. Detailed derivations and explicit algorithms can be found here: http://eprints.ma.man.ac.uk/1192/1/qrupdating_12nov08.pdf.

The gist of it is something like this. Appending a column $\vec v$ to the matrix is in some sense trivial: $$\begin{pmatrix}A & \vec v\end{pmatrix} = Q \begin{pmatrix} R & Q^T \vec v\end{pmatrix} = \tilde Q_0 \tilde R_0\,,$$ with $\tilde Q_0 = Q$ nicely orthogonal -- except for the pesky little detail that the new $\tilde R_0$ is not upper triangular: its last column, $Q^T \vec v$, is in general full. However, note that we can apply a sequence of Givens rotations to both $\tilde Q_0$ and $\tilde R_0$, $$\tilde Q = \tilde Q_0 G_1 \dotsm G_k\,,\qquad \tilde R = G_k^T \dotsm G_1^T \tilde R_0\,,$$ which preserves both the product $\tilde Q_0 \tilde R_0 = \tilde Q \tilde R$ and the orthogonality of $\tilde Q$, but changes the sparsity pattern of $\tilde R$. It's then just a matter of choosing the rotations so that the subdiagonal entries of the new last column are zeroed out and $\tilde R$ comes out upper triangular.
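As a concrete illustration, here is a minimal NumPy sketch of the column-append update for a full ($m \times m$) $Q$ factor. The function name `qr_append_column` and the bottom-up rotation order are my own choices, not taken from the paper or the linked notes:

```python
import numpy as np

def qr_append_column(Q, R, v):
    """Append column v to A = Q R (full QR: Q is m x m, R is m x n)
    and re-triangularize with Givens rotations. A sketch of the
    update described above, not a reference implementation."""
    m, n = R.shape
    w = Q.T @ v                       # new last column before rotations
    R = np.column_stack([R, w])
    Q = Q.copy()
    # Zero the entries of the new column below its diagonal, bottom-up.
    for i in range(m - 1, n, -1):
        a, b = R[i - 1, n], R[i, n]
        r = np.hypot(a, b)
        if r == 0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])   # G @ [a, b] = [r, 0]
        R[[i - 1, i], :] = G @ R[[i - 1, i], :]
        Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R
```

Since only rows $n, \dotsc, m-1$ are rotated, and those rows of $R$ are zero in the first $n$ columns, the existing triangular part is untouched; each rotation costs $O(m)$, so the whole update is $O(m^2)$ instead of the $O(mn^2)$ of a fresh factorization.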

For deleting a column, the logic runs in reverse: if $A = \begin{pmatrix}\vec v & A_1\end{pmatrix} = QR$, then removing the first column leaves $A_1 = Q R_1$, where $R_1$ is just $R$ with its first column dropped. This $R_1$ is no longer upper triangular, but it is upper Hessenberg (a single nonzero subdiagonal), and a short sequence of Givens rotations, applied exactly as above, chases the subdiagonal entries away and restores triangularity.
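The deletion step can be sketched the same way; again the function name `qr_delete_first_column` is mine, and a full $Q$ is assumed:

```python
import numpy as np

def qr_delete_first_column(Q, R):
    """Update a full QR factorization A = Q R after deleting the
    first column of A. R without its first column is upper
    Hessenberg; Givens rotations restore triangularity. A sketch,
    not a reference implementation."""
    m, n = R.shape
    R = R[:, 1:].copy()               # upper Hessenberg after the drop
    Q = Q.copy()
    for j in range(n - 1):
        a, b = R[j, j], R[j + 1, j]
        r = np.hypot(a, b)
        if r == 0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])   # G @ [a, b] = [r, 0]
        R[[j, j + 1], :] = G @ R[[j, j + 1], :]
        Q[:, [j, j + 1]] = Q[:, [j, j + 1]] @ G.T
    return Q, R
```

A sliding-window update as in the question would then be one call to this deletion followed by one column append, both in $O(m^2)$ work.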

One interesting observation is that when deleting a column, you can update the $R$ factor without keeping track of the $Q$ factor at all: the Givens rotations are determined by the entries of $R$ alone. Adding a column, by contrast, needs $Q$ in order to form $Q^T \vec v$. So if your application only ever needs $R$, column deletions come essentially for free, memory-wise.