I was reading a book on numerical methods and came across this problem:
Consider the least squares problem in the Euclidean norm: $$\min_x \|b-Ax\|_2,$$ where we know that $A$ is ill-conditioned. Let's replace the normal equations with a better-conditioned system $$(A^TA + \gamma I)x_{\gamma} = A^Tb,$$ where $\gamma > 0$ is a parameter and $I$ is the identity matrix of the appropriate size.
The exercise is to show that $$\|x_{\gamma}\| \leq \|x\|$$
How can I show this?
My idea was to write $x = (A^TA)^{-1}A^Tb$ and $x_{\gamma} = (A^TA + \gamma I)^{-1}A^Tb$ and to bound $\|x_{\gamma}\|$ from above. Setting $N = \frac{1}{\gamma}A^TA$, I rewrote the problem as $$\frac{1}{\gamma}\|(N + I)^{-1}\| \leq \frac{1}{\gamma}\|N^{-1}\|.$$
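Before trying to prove it, I also checked the claim numerically. This is only a sanity check, not a proof; the matrix sizes, singular-value decay, and the choice of $\gamma$ values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an ill-conditioned A from orthogonal factors with decaying singular values.
m, n = 50, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -4, n)            # cond(A) ~ 1e4, cond(A^T A) ~ 1e8
A = U @ np.diag(s) @ V.T
b = rng.standard_normal(m)

# Unregularized solution x = (A^T A)^{-1} A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)

# Regularized solutions x_gamma = (A^T A + gamma I)^{-1} A^T b
for gamma in [1e-10, 1e-6, 1e-2, 1.0]:
    x_gamma = np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ b)
    assert np.linalg.norm(x_gamma) <= np.linalg.norm(x)
```

The inequality held for every $\gamma$ I tried, which made me confident the statement is true in general.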
Assume that $A^T A$ is invertible. Then by the spectral theorem, for any unit vector $y$, $\| (A^T A + \gamma I)^{-1} y \|^2 = \sum_i c_i(y)^2 (\lambda_i+\gamma)^{-2}$ and $\| (A^T A)^{-1} y \|^2 = \sum_i c_i(y)^2 \lambda_i^{-2}$, where $c_i(y)=q_i^T y$ and the $q_i$ are orthonormal eigenvectors of $A^T A$ with eigenvalues $\lambda_i$. The orthogonality is crucial here. Since each $\lambda_i > 0$ and $\gamma > 0$, we have $(\lambda_i+\gamma)^{-2} \leq \lambda_i^{-2}$ term by term, so the first sum is bounded by the second; applying this with $y = A^Tb / \|A^Tb\|$ gives $\|x_\gamma\| \leq \|x\|$.
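Here is a small numerical illustration of that termwise comparison, using an arbitrary symmetric positive definite matrix as a stand-in for $A^TA$ (the specific matrix and $\gamma$ are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 6, 0.5
M = rng.standard_normal((n, n))
ATA = M.T @ M + 0.1 * np.eye(n)      # symmetric positive definite stand-in for A^T A
lam, Q = np.linalg.eigh(ATA)         # lam > 0; columns of Q are orthonormal eigenvectors q_i

y = rng.standard_normal(n)
y /= np.linalg.norm(y)               # unit vector
c = Q.T @ y                          # coefficients c_i(y) = q_i^T y

lhs = np.sum(c**2 / (lam + gamma)**2)   # = ||(A^T A + gamma I)^{-1} y||^2
rhs = np.sum(c**2 / lam**2)             # = ||(A^T A)^{-1} y||^2

# Each term satisfies (lam_i + gamma)^{-2} <= lam_i^{-2}, so lhs <= rhs.
assert lhs <= rhs

# Check the spectral formula against a direct solve.
direct = np.linalg.norm(np.linalg.solve(ATA + gamma * np.eye(n), y))**2
assert np.isclose(lhs, direct)
```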
When $A^T A$ is not invertible you have a problem: if "inverse" means "Moore-Penrose pseudoinverse", then the zero eigenvalues of $A^T A$ contribute to $\| (A^T A + \gamma I)^{-1} y \|^2$ but not to $\| (A^T A)^{-1} y \|^2$. Maybe the method can be adapted to this case, but you'll need some auxiliary conditions somewhere. (For example, the bound $\| (A^T A + \gamma I)^{-1} y \| \leq \| (A^T A)^{-1} y \|$ clearly fails if $A=0$ and we interpret $(A^T A)^{-1}$ as the Moore-Penrose pseudoinverse: the left side is $1/\gamma$ while the right side is $0$.)
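A concrete rank-deficient instance of that failure mode (the particular matrix is just an illustration): take a unit vector $y$ in the null space of $A^TA$, so the pseudoinverse annihilates it while $(A^TA + \gamma I)^{-1}$ scales it by $1/\gamma$.

```python
import numpy as np

gamma = 0.5
# A with a nontrivial null space: its second column is zero.
A = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [0.0, 0.0]])
ATA = A.T @ A                        # eigenvalues 5 and 0

y = np.array([0.0, 1.0])             # unit vector in the null space of A^T A

lhs = np.linalg.norm(np.linalg.solve(ATA + gamma * np.eye(2), y))  # = 1/gamma
rhs = np.linalg.norm(np.linalg.pinv(ATA) @ y)                      # = 0

assert np.isclose(lhs, 1.0 / gamma)
assert np.isclose(rhs, 0.0)
assert lhs > rhs   # the per-vector bound fails on the null space
```

Note that the original vector inequality $\|x_\gamma\| \leq \|x\|$ is unaffected here, because $A^Tb$ has no component in the null space of $A^TA$; it is only the unit-vector comparison above that breaks down.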