Solving a linear system
$$Ax = b$$
can be done in the following ways.
Alternative 1:
$$x = (A^TA)^{-1}A^Tb$$
Alternative 2:
$$x = A^{-1}b$$
if $A$ is square; let's assume here that $A$ is square. But there is a drawback to using the inverse: it might give NaN or Inf values if the determinant of $A$ vanishes,
$$\det(A) = 0.$$
To work around this, we add a tuning factor:
$$x = (A^TA +\alpha I)^{-1}A^Tb$$
where $\alpha$ is a small number. But the equation above is not the same as
$$x = (A + \alpha I)^{-1}b$$
They will not give the same results.
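This can be checked numerically; in the sketch below, $A$, $b$, and $\alpha$ are arbitrary illustrative choices:

```python
import numpy as np

# Quick check that the two regularized formulas differ
# (A, b and alpha are arbitrary illustrative choices).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
b = rng.standard_normal(3)
alpha = 0.1

x1 = np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ b)  # (A^T A + alpha I)^{-1} A^T b
x2 = np.linalg.solve(A + alpha * np.eye(3), b)              # (A + alpha I)^{-1} b

print(np.allclose(x1, x2))
```

For a generic $A$ the two results only approach each other as $\alpha \to 0$ (and $A$ non-singular); for finite $\alpha$ they disagree.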
Question:
Where do I need to place the tuning factor $\alpha I$ in this equation:
$$x = A^{-1}b$$
so that it becomes equivalent to
$$x = (A^TA +\alpha I)^{-1}A^Tb$$
Reason for asking:
My matrix $A$ is lower triangular, so I don't need the inverse to solve $x = A^{-1}b$; I can use Gaussian elimination instead. But I need a tuning factor in an equation of the form $x = (A + \alpha I)^{-1}b$, and I think the tuning factor is in the wrong position in that equation.
It can be noted that when $A$ is singular, for most $b$ one cannot have the exact equality $A\,x=b$. Only when $b$ lies in the column space of $A$ does there exist an $x$ that satisfies the equality. So instead of solving an unsolvable equation, one could try to minimize some cost function, such as
\begin{align} J(x) &= \|A\,x-b\|_2^2 \\ &= (A\,x-b)^\top (A\,x-b) \\ &= x^\top A^\top A\,x - 2\,x^\top A^\top b + b^\top b. \end{align}
However, when $A$ is singular this cost function does not have a unique optimum; moving $x$ along the null space of $A$ leaves the cost unchanged, so it even allows $\|x\|\to\infty$. A simple way to penalize this is to add the term $\alpha\,\|x\|_2^2$, with $\alpha>0$. In this case one gets the unique solution
$$ x = (A^\top A + \alpha\,I)^{-1} A^\top b. \tag{1} $$
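As a sketch, solution $(1)$ amounts to a single linear solve. The $2\times 2$ matrix and $b$ below are made up for illustration, with $A$ deliberately singular, to show that the regularized system is still uniquely solvable:

```python
import numpy as np

# Sketch of solution (1) for a singular A
# (the 2x2 example and b are made up for illustration).
A = np.array([[1.0, 0.0],
              [2.0, 0.0]])          # rank 1, so det(A) = 0
b = np.array([1.0, 2.0])
alpha = 1e-6

# A^T A + alpha I is positive definite, hence invertible for alpha > 0
x = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ b)
print(x)  # close to [1, 0]; the null-space component is suppressed
```

Here $A^\top b$ has no component in the null space of $A^\top A$, so the penalty drives the second component of $x$ to zero.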
This solution won't make use of the fact that $A$ is lower triangular, which in the non-singular case can be easily solved using forward substitution. Your proposed solution of
$$ x = (A + \alpha\,I)^{-1} b, \tag{2} $$
does allow you to make use of the lower triangular property; however, you do have to be careful that $-\alpha$ is not an eigenvalue of $A$ (for a triangular matrix, the eigenvalues are its diagonal entries), in which case you would still be dealing with a singular matrix. A more robust alternative would be to replace only the diagonal elements of $A$ that are zero with $\alpha$ $(3)$. Namely, the main goal of adding $\alpha$ is to make the system of equations solvable.
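A minimal sketch of method $(3)$, on a made-up lower triangular example: patch only the zero pivots with $\alpha$ and then solve by forward substitution:

```python
import numpy as np

# Method (3): replace only the zero diagonal entries of a lower
# triangular A with alpha, then solve by forward substitution
# (the example matrix and b are made up for illustration).
def solve_lower(A, b, alpha=1e-8):
    L = A.copy()
    d = np.diag(L).copy()
    d[d == 0] = alpha              # patch only the zero pivots
    np.fill_diagonal(L, d)
    x = np.zeros_like(b, dtype=float)
    for i in range(len(b)):        # forward substitution for lower triangular L
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],     # zero pivot on the diagonal
              [0.0, 3.0, 1.0]])
b = np.array([2.0, 1.0, 1.0])
print(solve_lower(A, b))
```

This keeps the $O(n^2)$ cost of a triangular solve, whereas solution $(1)$ forms $A^\top A$ and loses the triangular structure.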
However, the question remains how well these methods allow one to find a solution for $A\,x\approx b$. For example, when I take a random seven-by-seven lower triangular matrix with four of the diagonal elements set to zero, I obtain the figure below. From that figure it can be noted that method 2 has a couple of spikes, which can be explained by $-\alpha$ being close to one of the eigenvalues of $A$. Apart from this, methods 2 and 3 behave very similarly: the norm of the solution grows as $\alpha$ becomes smaller, which seems to indicate that neither is a good candidate for finding $x$ such that $A\,x\approx b$.
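The experiment can be sketched roughly as follows (the seed, sizes, and $\alpha$ values are arbitrary and will not reproduce the exact figure):

```python
import numpy as np

# Rough sketch of the experiment: a random lower triangular matrix
# with some zero diagonal entries, comparing the solution norms of
# methods (1) and (2) as alpha shrinks (seed and sizes are arbitrary).
rng = np.random.default_rng(1)
n = 7
A = np.tril(rng.standard_normal((n, n)))
idx = rng.choice(n, size=4, replace=False)
A[idx, idx] = 0.0                  # four zero diagonal entries -> singular A
b = rng.standard_normal(n)

for alpha in [1e-1, 1e-3, 1e-5]:
    x1 = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)  # method 1
    x2 = np.linalg.solve(A + alpha * np.eye(n), b)              # method 2
    print(alpha, np.linalg.norm(x1), np.linalg.norm(x2))
```

Typically the norm from method 1 stays bounded as $\alpha \to 0$ (it approaches the minimum-norm least-squares solution), while the norm from method 2 blows up, matching the behavior described above.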
Maybe one could do some clever and not computationally expensive transformation that reduces the problem such that $x$ does not lie in the null space of $A$ while still making use of the fact that the matrix is lower triangular. However, I could not think of an easy way of doing this. But I hope my answer gives some insight into how one might approach this problem.