I have a question regarding the manipulation of linear least squares equations. Suppose $x$ satisfies $$ A x = b $$ in a least-squares sense, where $A$ is $m \times n$ with $m > n$, $x$ is $n \times 1$ and $b$ is $m \times 1$. If I use a $QR$ factorization of $A$, I can write \begin{align*} A x &= b \\ QRx &= b \\ Rx &= Q^T b \\ x &= R^{-1} Q^T b \end{align*} and so it seems I can apply $Q^T$ and $R^{-1}$ to the right-hand side of the equation without issue, since this is the well-known $QR$ least squares solution.
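For concreteness, here is a small NumPy sketch of the derivation above (the random $A$ and $b$ are just illustrative data of my own): the triangular solve $Rx = Q^Tb$ reproduces the standard least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))   # m > n, overdetermined
b = rng.standard_normal(m)

# Reduced QR: Q is m x n with orthonormal columns, R is n x n upper triangular.
Q, R = np.linalg.qr(A)

# x = R^{-1} Q^T b, computed by solving the triangular system
# rather than forming the inverse of R explicitly.
x_qr = np.linalg.solve(R, Q.T @ b)

# Agrees with the standard least-squares solver.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_qr, x_ls))  # True
```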
However, now consider the following weighted least squares problem, $$ D A y = D b $$ where $D$ is an $m \times m$ diagonal weighting matrix with positive values on the diagonal. In this case, applying $D^{-1}$ to both sides to recover $Ay = b$ is not allowed, because the solution $x$ satisfying $A x = b$ in a least-squares sense will in general differ from the solution $y$ of the weighted system $D A y = D b$.
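A quick numerical check of this claim (again with illustrative random data): for a non-trivial diagonal $D$, the weighted and unweighted minimizers do not coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
D = np.diag(rng.uniform(0.1, 10.0, size=m))  # positive diagonal weights

# Unweighted least-squares solution x and weighted solution y.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
y, *_ = np.linalg.lstsq(D @ A, D @ b, rcond=None)

print(np.allclose(x, y))  # False: the weights change the minimizer
```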
Can anyone explain why I can apply $Q^T$ and $R^{-1}$ to the equation in the former case, but I cannot apply $D^{-1}$ to the equation in the latter case?
Some observations and comments.
While it is true that with the $QR$ decomposition you can write
$$Ax=b \\ QRx =b \\ Rx = Q^{T}b \\ x = R^{-1}Q^{T}b $$
in practice the last step is carried out by back substitution, so the inverse $R^{-1}$ is never actually formed. Since $R$ is upper triangular, the entries of $x$ are computed recursively.
$$ x_{n} = \frac{c_{n}}{r_{nn}}, \qquad x_{i} = \frac{c_{i}}{r_{ii}} - \sum_{k=i+1}^{n} x_{k} \frac{r_{ik}}{r_{ii}}, $$ where $c = Q^{T}b$.
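The recursion above can be sketched directly in NumPy (a minimal implementation, not library code; the random test problem is my own):

```python
import numpy as np

def back_substitution(R, c):
    """Solve R x = c for upper-triangular R, working from the last row up."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x[i] = (c[i] - sum_{k>i} r_{ik} x_k) / r_{ii}
        x[i] = (c[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

rng = np.random.default_rng(2)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
Q, R = np.linalg.qr(A)
x = back_substitution(R, Q.T @ b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```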
Regarding the other part, the $QR$ decomposition is a direct method, while the weighting method you have is an iterative one.
The closed-form solution depends on which dimension is greater. For $m > n$ (overdetermined), the weighted normal equations give
$$ x = \bigg[A^{T}D^{T}DA \bigg]^{-1}A^{T}D^{T}Db. $$
For $m < n$ (underdetermined), note that an invertible $m \times m$ weight cancels out of the consistent system $DAx = Db$; a weight only changes the answer if it acts on the solution itself, i.e. an $n \times n$ matrix $D$ with $\|Dx\|$ minimized subject to $Ax = b$, which gives
$$ x = \bigg[D^{T}D\bigg]^{-1}A^{T}\bigg[A (D^{T}D)^{-1}A^{T} \bigg]^{-1}b. $$
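A sketch of the $m > n$ formula in NumPy (illustrative random data; the explicit inverse is replaced by a linear solve, as one would do in practice):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
D = np.diag(rng.uniform(0.5, 2.0, size=m))

# Weighted normal equations: x = (A^T D^T D A)^{-1} A^T D^T D b,
# evaluated with np.linalg.solve instead of an explicit inverse.
W = D.T @ D
x_ne = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Same answer as least squares on the scaled system D A y = D b.
y, *_ = np.linalg.lstsq(D @ A, D @ b, rcond=None)
print(np.allclose(x_ne, y))  # True
```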