The Numerical Issues of Computing the QR Decomposition Using CholeskyQR


I have an application where I need to compute the QR-decomposition (in fact I only need $R$) of a matrix $A\in \mathbb{R}^{m \times n}$ ($m > n$). The matrix has a fixed number of columns, but the number of rows can vary from one run to another. The matrix rows are known one at a time.

I am working within an embedded environment with very restricted memory, so I want to avoid storing the matrix and instead process the rows as they become available. While researching QR algorithms I came across CholeskyQR, where $R$ is taken to be the upper-triangular Cholesky factor of the Gram matrix, $$ A^T A = R^T R, $$ and then $$ Q = A R^{-1}. $$
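To make the streaming idea concrete, here is a minimal sketch of what I have in mind (function names are mine, purely illustrative): the Gram matrix is accumulated with one rank-1 update per incoming row, so no row needs to be retained, and $R$ is recovered at the end by a single Cholesky factorization.

```python
import numpy as np

def streaming_cholesky_r(rows, n):
    """Illustrative sketch of CholeskyQR restricted to computing R.

    `rows` yields the rows of A one at a time (each of length n).
    We accumulate G = A^T A via rank-1 updates, then factor it so that
    A^T A = R^T R with R upper triangular. A itself is never stored.
    """
    G = np.zeros((n, n))
    for a in rows:
        a = np.asarray(a, dtype=float)
        G += np.outer(a, a)        # rank-1 update; row is then discarded
    L = np.linalg.cholesky(G)      # numpy returns the lower factor: G = L L^T
    return L.T                     # R = L^T is the upper-triangular factor

# Example: on a well-conditioned matrix this agrees (up to row signs)
# with the R from a Householder QR.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
R = streaming_cholesky_r(iter(A), 4)
```

The memory footprint is just the $n \times n$ Gram matrix (in fact only its upper triangle, by symmetry), which is what makes the approach attractive when $m \gg n$.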

I find this method particularly appealing because I can accumulate $A^TA$ on the fly, one row at a time. But if I understand correctly, the method is generally avoided because forming the Gram matrix squares the condition number, $\kappa(A^TA) = \kappa(A)^2$, so $A^TA$ (and by extension $R$) can be severely ill-conditioned, which can result in a computed $Q$ that is far from orthogonal.

I mainly have two questions:

  1. If I only care about the $R$ matrix, and do not intend to use it to solve any linear systems, are there any other numerical difficulties in using CholeskyQR to obtain $R$, or is it safe to use?
  2. Are there other algorithms better suited to my application (computing only $R$ without storing the full matrix $A$)?
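For question 2, one row-at-a-time alternative I am aware of is updating $R$ directly with Givens rotations: each incoming row is folded into the current triangular factor, so again only an $n \times n$ array is kept, but the Gram matrix (and its squared condition number) is never formed. A minimal sketch, with my own illustrative names:

```python
import numpy as np

def givens_row_update(R, a):
    """Fold one new row `a` into an n x n upper-triangular R so that
    R_new^T R_new = R_old^T R_old + a a^T.

    Each Givens rotation mixes row k of R with the incoming row to
    zero out a[k]; after n rotations the row is fully absorbed.
    Illustrative sketch only, not tuned for an embedded target.
    """
    R = R.copy()
    a = np.asarray(a, dtype=float).copy()
    n = R.shape[0]
    for k in range(n):
        r, x = R[k, k], a[k]
        h = np.hypot(r, x)         # length of (r, x); robust to over/underflow
        if h == 0.0:
            continue               # nothing to rotate
        c, s = r / h, x / h
        Rk = R[k, k:].copy()
        R[k, k:] = c * Rk + s * a[k:]   # rotated row k (diagonal becomes h >= 0)
        a[k:] = -s * Rk + c * a[k:]     # a[k] is annihilated
    return R

# Usage: start from R = 0 and stream the rows in one at a time.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))
R = np.zeros((3, 3))
for row in A:
    R = givens_row_update(R, row)
```

This costs $O(n^2)$ flops per row, the same order as the rank-1 Gram update, so it would fit the same memory budget.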

Thank you in advance.