I would like to request resources where I can find more examples and possibly a proof of the following method/algorithm of calculating eigenvectors. I came across it during a seminar I attended, but I didn't get much information about the method.
Suppose you are given a matrix $M$ and the task is to find its eigenvectors. The first step is to find the eigenvalues in the usual way, as the roots of the characteristic polynomial, i.e. $f(\lambda)= \det\left( M - \lambda I\right) = 0$.
After finding the eigenvalues, you pick one of them to start with; denote it $\lambda_1$. For this eigenvalue, you create a matrix made of two halves: a top one and a bottom one. The identity matrix goes in the top half, and tacked below it is the matrix $M - \lambda_1 I$. The order of the identity matrix is the order of the given matrix $M$. You then apply column operations to this stacked matrix until you get a column or columns of zeroes in the bottom half, with something like echelon form on either side of these zero columns. By this method, the columns above the zero columns form a basis for $\ker (M-\lambda_1I)$.
Perhaps it will be clearer if I put it this way (I'm not sure how to properly write it using MathJax):
$$\left[ \begin{array}{c} I \\ \hline M-\lambda_1I \end{array} \right] \xrightarrow{\textrm {column operations}} \left[ \begin{array}{c|c} A & \ker (M-\lambda_1I)\\ \hline C & 0 \end{array} \right] $$
where $A$ is some matrix and $C$ is typically in lower triangular form.
The block labelled $\ker (M-\lambda_1I)$ contains the eigenvector(s) corresponding to $\lambda_1$. To find the eigenvectors corresponding to the next eigenvalue, say $\lambda_2$, you can either do the same:
$$\left[ \begin{array}{c} I \\ \hline M-\lambda_2I \end{array} \right] \xrightarrow{\textrm {column operations}} \left[ \begin{array}{c|c} A_1 & \ker (M-\lambda_2I)\\ \hline C_1 & 0 \end{array} \right] $$
OR
use the matrix $C$ obtained in the first part and set up the stacked matrix as follows:
$$ \left[ \begin{array}{c} C \\ \hline (M-\lambda_2I) C \end{array} \right] \xrightarrow{\textrm {column operations}} \left[ \begin{array}{c|c} A_2 & \ker (M-\lambda_2I)\\ \hline C_2 & 0 \end{array} \right] \tag{$\mathbf{*}$}$$
For $\lambda_3$ we can use $C_2$ and so on.
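To make the whole procedure concrete, here is a rough Python sketch of the method as I understand it (the function name `eigvecs_via_column_ops` is my own, and exact `Fraction` arithmetic stands in for the hand column operations; it assumes the eigenvalues supplied are distinct, as in the diagonalizable case discussed here):

```python
from fractions import Fraction

def eigvecs_via_column_ops(M, eigvals):
    """Sketch of the stacked-matrix column-operation method: start with
    C = I; for each eigenvalue, stack C over (M - lam*I)C, column-reduce
    the bottom half (mirroring every operation on the top half), read the
    eigenvectors off the top half above the zero columns, and carry the
    remaining bottom columns forward as the next C."""
    n = len(M)
    C = [[Fraction(i == j) for i in range(n)] for j in range(n)]  # columns of I
    result = {}
    for lam in eigvals:
        A = [[Fraction(M[i][j]) - (Fraction(lam) if i == j else 0)
              for j in range(n)] for i in range(n)]                # M - lam*I
        top = [list(c) for c in C]                                 # columns of C
        bot = [[sum(A[i][k] * c[k] for k in range(n)) for i in range(n)]
               for c in C]                                         # (M - lam*I) C
        used = []
        for r in range(n):                 # clear row r of the bottom half
            piv = next((j for j in range(len(bot))
                        if j not in used and bot[j][r] != 0), None)
            if piv is None:
                continue
            used.append(piv)
            for j in range(len(bot)):
                if j != piv and bot[j][r] != 0:
                    f = bot[j][r] / bot[piv][r]
                    bot[j] = [a - f * b for a, b in zip(bot[j], bot[piv])]
                    top[j] = [a - f * b for a, b in zip(top[j], top[piv])]
        # top-half columns sitting over zero bottom columns are eigenvectors
        result[lam] = [top[j] for j in range(len(bot))
                       if all(x == 0 for x in bot[j])]
        # nonzero bottom columns play the role of C, C_2, ... above
        C = [bot[j] for j in range(len(bot)) if any(x != 0 for x in bot[j])]
    return result
```

On the $3\times 3$ matrix worked below, this returns one eigenvector per eigenvalue, each a scalar multiple of the hand-computed ones.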
Example
Let $M = \begin{bmatrix}5 & 2 & 1\\-2 & 1 & -1\\ 3 & 2 & 3\end{bmatrix} $
The characteristic polynomial is $f(\lambda) = -(\lambda-2)(\lambda-3)(\lambda-4)$
For $\lambda_1 = 2$ we have
$$\left[ \begin{array}{c} I \\ \hline M-\lambda_1I \end{array} \right] = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \hline 3 & 2 & 1 \\ -2 & -1 & -1 \\3 & 2 & 1 \end{array} \right] \rightsquigarrow \left[ \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \\ \hline 0 & 2 & 1 \\ 0 & -1 & -1 \\0 & 2 & 1 \end{array} \right] \rightsquigarrow \left[ \begin{array}{c|cc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & -2 & 1 \\ \hline 0 & 0 & 1 \\ 0 & 1 & -1 \\0 & 0 & 1 \end{array} \right] $$
Therefore, reading off the vector above the zero column, we have $\vec{v_1} = \langle 1; -1; -1\rangle$. For the remaining eigenvalues, we can either repeat the same process or use the bottom-half block to the right of the zero column, $$C_1 = \begin{bmatrix} 0 & 1 \\ 1 & -1 \\ 0 & 1 \\ \end{bmatrix}$$
Therefore, for $\lambda_2 = 3$ $$ \left[ \begin{array}{c} C_1 \\ \hline (M-\lambda_2I) C_1 \end{array} \right] = \left[ \begin{array}{c} C_1 \\ \hline (M-3I) C_1 \end{array} \right] = \left[ \begin{array}{cc} 0 & 1 \\ 1 & -1 \\ 0 & 1 \\ \hline 2 & 1 \\ -2 & -1 \\ 2 & 1 \end{array} \right] \rightsquigarrow \left[ \begin{array}{c|c} -2 & 1 \\ 3 & -1 \\ -2 & 1 \\ \hline 0 & 1 \\ 0 & -1 \\ 0 & 1 \end{array} \right] $$
Therefore, again reading off the vector above the zero column, we have that $\vec{v_2} = \langle -2; 3; -2 \rangle$. We also have that $$C_2 = \begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$$
And finally for $\lambda_3 = 4$ we have
$$ \left[ \begin{array}{c} C_2 \\ \hline (M-\lambda_3I) C_2 \end{array} \right] = \left[ \begin{array}{c} C_2 \\ \hline (M-4I) C_2 \end{array} \right] = \left[ \begin{array}{c} 1 \\ -1 \\ 1 \\ \hline 0 \\ 0 \\ 0 \end{array} \right] $$
which is already in the needed form, so $\vec{v_3} = \langle 1; -1; 1 \rangle$.
Solution:
$$ \begin{bmatrix}1\\-1\\-1 \end{bmatrix}, \begin{bmatrix}-2\\3\\-2 \end{bmatrix}, \begin{bmatrix}1 \\-1 \\1\end{bmatrix} $$
The solution computed by Symbolab is similar.
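For what it's worth, the three eigenpairs can also be checked directly with a few lines of Python (plain integer arithmetic, nothing specific to the method):

```python
# Verify M v = lam v for each (eigenvalue, eigenvector) pair in the solution
M = [[5, 2, 1], [-2, 1, -1], [3, 2, 3]]
pairs = [(2, [1, -1, -1]), (3, [-2, 3, -2]), (4, [1, -1, 1])]

def matvec(A, v):
    # product A @ v, computed row by row
    return [sum(a * x for a, x in zip(row, v)) for row in A]

for lam, v in pairs:
    assert matvec(M, v) == [lam * x for x in v]
print("all three eigenpairs verified")
```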
I'm unsure of how to modify this algorithm in special cases such as when the matrix is not diagonalisable etc. Therefore, I'd like to request any resources that have more information on this method/algorithm. It could be more examples of the use of this method and/or the proof of why it works because I don't get the intuition yet. Any information will be highly appreciated.
As described, the process feels a bit ad hoc, so let's make it a bit more deterministic. Set $C_0=I$ and, for each eigenvalue $\lambda_i$ in turn, form the stacked matrix $$\left[\begin{array}{c}(M-\lambda_iI)C_{i-1}\\\hline C_{i-1}\end{array}\right]$$ and apply column operations until the top half has the form $\left[\begin{array}{c|c}C_i&0\end{array}\right]$; the bottom half is then $\left[\begin{array}{c|c}A_i&B_i\end{array}\right]$, with the columns of $B_i$ sitting under the zero columns. Note that I've swapped the two halves of the augmented matrix from the original in your question.
The results in this answer provide the necessary tools to analyze this algorithm. From there, we know that $\operatorname{Col}(B_i)=\operatorname{Nul}((M-\lambda_iI)C_{i-1})$, and moreover, the columns of $B_i$ are linearly independent, and so they form a basis for this space.
We have $$C_{i-1}E_i = \left[\begin{array}{c|c}A_i&B_i\end{array}\right]$$ and $$(M-\lambda_iI)C_{i-1}E_i = (M-\lambda_iI)\left[\begin{array}{c|c}A_i&B_i\end{array}\right] = \left[\begin{array}{c|c}(M-\lambda_iI)A_i&(M-\lambda_iI)B_i\end{array}\right] = \left[\begin{array}{c|c}C_i&0\end{array}\right]$$ so $(M-\lambda_iI)B_i=0$, therefore $\operatorname{Nul}(M-\lambda_iI)=\operatorname{Nul}((M-\lambda_iI)C_{i-1})=\operatorname{Col}(B_i)$.
You can get a feel for what's going on here by examining the sequence of spaces $W_i=\operatorname{Col}(C_i)$. Loosely speaking, we start off with the entire ambient space $W_0=\operatorname{Col}(C_0)$, spanned by the columns of $C_0=I$, and with each iteration reduce this space by removing the eigenspace of $\lambda_i$. Eigenspaces of distinct eigenvalues intersect only trivially, so we can use this reduced space to compute the next eigenspace instead of going back to the original ambient space each time.
The above assumes that $M$ is diagonalizable. If there are defective eigenvalues, you can certainly compute the pure eigenvectors with the above algorithm, but no simple extension that would allow you to compute the generalized eigenvector chains needed for a Jordan decomposition immediately comes to mind.