I am currently studying for a test that often requires computations with large matrices (4x4 and up). More often than not there is a "quick" way to solve these problems that avoids a lot of hand computation. I worked through a practice problem with a given solution; the solution is much more efficient than mine, but I do not understand why the method used is valid or how to recognize when I could apply such a method.
Consider the system
$A\vec{x}=\frac{1}{5} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 & 1 \\ \end{bmatrix}\vec{x}$.
To find the non-trivial fixed point of the system, which is equivalent to finding an eigenvector for $\lambda=1$, the given solution starts off like Gaussian elimination but then works with progressively smaller subsystems of the matrix, until it reaches a 2x2 system that it solves directly; it then applies back substitution to find the eigenvector $\vec{x}^T=(1,1,1,1,1,1)$.
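As a quick sanity check on that answer (my own verification, not part of the given solution): every row of the matrix contains exactly five $1$s, so the all-ones vector is indeed fixed by $A$,

$$\frac{1}{5} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \frac{1}{5} \begin{bmatrix} 5 \\ 5 \\ 5 \\ 5 \\ 5 \\ 5 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix},$$

since each row sums to $5$.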
Here is an image of the full given solution: [image omitted].
What I don't understand is why it is valid to consider only the subsystem corresponding to rows 2-4. I had expected the subsystem for rows 2-6 to be considered, with row reduction repeated until the matrix is in upper triangular form, followed by back substitution. The method used here seems similar to Gaussian elimination (at least the form I am familiar with) but deviates from it. I'd like to understand why it works and what is happening, since it appears much more efficient than first putting the matrix into upper triangular form and then solving.
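For concreteness, here is the approach I had expected (my own framing of standard Gaussian elimination, not the posted solution's): rewrite the fixed-point condition as a homogeneous system,

$$A\vec{x}=\vec{x} \;\Longleftrightarrow\; (A-I)\vec{x}=\vec{0} \;\Longleftrightarrow\; \frac{1}{5} \begin{bmatrix} -4 & 1 & 1 & 1 & 1 & 0 \\ 1 & -4 & 1 & 1 & 0 & 1 \\ 1 & 1 & -4 & 0 & 1 & 1 \\ 1 & 1 & 0 & -4 & 1 & 1 \\ 1 & 0 & 1 & 1 & -4 & 1 \\ 0 & 1 & 1 & 1 & 1 & -4 \end{bmatrix}\vec{x}=\vec{0},$$

then use row 1 to eliminate $x_1$ from rows 2-5 (row 6 already has a $0$ in that column), repeat on the remaining $5\times 5$ block, and so on until the coefficient matrix is upper triangular, and finally back-substitute. The given solution instead seems to jump to a smaller subsystem without completing this process, and that is the step I cannot justify.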