Let $A$ be an $n \times n$ positive semi-definite symmetric matrix with a non-trivial null space. Consider the problem \begin{equation} Ax = b \;\;\;\;\;\;\;\;\;\; (1) \end{equation}
State the condition that guarantees a solution to (1) will exist, and derive the condition on the matrix $A$ that ensures the iterative method, $$ x_{k+1} = x_k + \big( b - Ax_k \big), \;\; k = 0,1,2,\dots $$ will converge to a solution of (1), when a solution to (1) exists.
Never having been faced with a problem like this, I am not sure where to start. I have tried several different approaches, but they all seem to lead nowhere. Any suggestions/solutions? Thanks in advance.
Since $A$ is symmetric, it is orthogonally diagonalizable, so we may assume without loss of generality that it is diagonal for the purposes of the analysis.
Write $A= \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}$, with $\Lambda$ invertible and $b= \begin{bmatrix} \beta \\ \gamma \end{bmatrix}$ in the appropriate basis.
Then $Ax=b$ has a solution iff $b \in \mathcal{R}(A)$, which here means $\gamma = 0$ (for a symmetric matrix, the range is the orthogonal complement of the null space).
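A quick numerical sketch of this solvability condition (the matrix and vectors below are made up for illustration): $b$ lies in the range of the symmetric matrix $A$ exactly when its component along the null space is zero.

```python
import numpy as np

# Build a 3x3 symmetric PSD matrix with eigenvalues 2, 1, 0 via a random
# orthogonal basis; the third eigenvector spans the null space.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = Q @ np.diag([2.0, 1.0, 0.0]) @ Q.T

null_vec = Q[:, 2]                      # basis vector of the null space
b_good = A @ rng.normal(size=3)         # in the range by construction
b_bad = b_good + null_vec               # nonzero null-space component

print(abs(null_vec @ b_good) < 1e-10)   # True: Ax = b_good is solvable
print(abs(null_vec @ b_bad) < 1e-10)    # False: gamma != 0, no solution
```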
Then if we write $x_k = \begin{bmatrix} y_k \\ z_k \end{bmatrix}$, the iteration becomes $y_{k+1} = (I-\Lambda) y_k + \beta$, $z_{k+1} = z_k$.
From this we can see that $y_k$ converges for every initial point iff the spectral radius of $I - \Lambda$ is less than $1$, i.e. $|1-\lambda| < 1$ for each nonzero eigenvalue $\lambda$ of $A$. So the condition for convergence from any initial point is that the nonzero eigenvalues of $A$ lie in $(0,2)$; since $A$ is positive semi-definite, its nonzero eigenvalues are automatically positive, so this is equivalent to requiring all eigenvalues of $A$ to be less than $2$. (The $z_k$ component is constant, so the limit solves (1) whenever $\gamma = 0$.)
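The whole argument can be checked numerically. Below is a sketch with an arbitrary choice of eigenvalues $\{0, 0.5, 1.5\}$ (nonzero ones in $(0,2)$) and a $b$ placed in the range of $A$, so the iteration should converge to a solution of (1):

```python
import numpy as np

# Symmetric PSD matrix with a nontrivial null space: eigenvalues 0, 0.5, 1.5,
# in a random orthogonal eigenbasis.
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
A = Q @ np.diag([0.0, 0.5, 1.5]) @ Q.T

# Choose b in the range of A so that a solution exists (gamma = 0).
b = A @ np.array([1.0, -2.0, 3.0])

# The iteration x_{k+1} = x_k + (b - A x_k).
x = np.zeros(3)
for _ in range(200):
    x = x + (b - A @ x)

print(np.allclose(A @ x, b))  # True: the iterates converge to a solution
```

Repeating the experiment with an eigenvalue above $2$ (say $2.5$) makes the corresponding component of $y_k$ blow up, since $|1-\lambda| > 1$.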