How can I solve $Ax = b$ approximately, with the minimal-norm solution, when the matrix looks like this?
$$ A =\begin{pmatrix} 1 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & -1 \\ \end{pmatrix} $$
- $b^T = (1, 1, 1)$
$A$ is singular (and also symmetric), so I cannot use the formula $x_0 = A^T(AA^T)^{-1}b$, because $AA^T$ is singular as well.
I was able to compute the singular value decomposition and the Moore–Penrose pseudoinverse, if that helps with this problem somehow. But I have no clue where to go from here.
Assuming first that we don't know much about this matrix in particular, we can solve the Tikhonov-regularized normal equations:
$$({\bf A}^T{\bf A}+\lambda {\bf I}){\bf x = A}^T{\bf b}$$
This corresponds to solving the following norm minimization problem:
$${\bf x}_{optimal}=\arg\min_{\bf x}\{\|{\bf Ax-b}\|^2_2 + \lambda\|{\bf x}\|^2_2\}$$
Since norms are real and non-negative, the first term is $0$ exactly when ${\bf x}$ solves the system perfectly. But if perfect solutions exist in a whole affine subspace, we need the second term to "punish" the solution for growing too large within that subspace. The $\lambda {\bf I}$ term slightly penalizes a large $L_2$ norm of the solution, as long as we use a small, strictly positive real $\lambda$.
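A small numerical sketch of this approach (using numpy; the value $\lambda = 10^{-6}$ is an arbitrary choice of small regularization, not anything canonical). For this particular $A$ and $b$ the right-hand side ${\bf A}^T{\bf b}$ is already the zero vector, since every column of $A$ sums to $0$, so the regularized solution is exactly ${\bf 0}$ for any $\lambda > 0$:

```python
import numpy as np

A = np.array([[ 1, -1,  0],
              [-1,  0,  1],
              [ 0,  1, -1]], dtype=float)
b = np.ones(3)

lam = 1e-6  # small, strictly positive lambda (value chosen arbitrarily)

# Tikhonov-regularized normal equations: (A^T A + lam*I) x = A^T b
x = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)
print(x)  # -> [0. 0. 0.], because A^T b = 0 for this A and b
```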
With some practice and engineering knowledge you will see by inspection that the matrix is circulant, which gives an orthogonal eigensystem whose basis functions are the complex exponentials (the Fourier basis). The mean value ("DC component") occurs at frequency $0$, which in this example corresponds precisely to the vector $\frac 1 3 [1,1,1]^T$. Since $b$ is proportional to this vector and $A$ annihilates it (every row of $A$ sums to $0$), $b$ is orthogonal to the range of $A$; the residual $\|{\bf Ax-b}\|$ therefore cannot be reduced at all, and we can right away select the $0$ vector as the minimal-norm solution.
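This conclusion can be checked with the pseudoinverse you already computed (a sketch using numpy's SVD-based `pinv`): the minimum-norm least-squares solution ${\bf A}^+{\bf b}$ comes out as the zero vector.

```python
import numpy as np

A = np.array([[ 1, -1,  0],
              [-1,  0,  1],
              [ 0,  1, -1]], dtype=float)
b = np.ones(3)

# b is proportional to the frequency-0 (DC) vector [1,1,1]^T, which A annihilates:
print(A @ np.ones(3))       # -> [0. 0. 0.]

# Hence the Moore-Penrose minimum-norm least-squares solution is the zero vector:
x = np.linalg.pinv(A) @ b
print(x)                    # -> [0. 0. 0.] (up to rounding)
```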