The matrix is:
\begin{bmatrix}14&26&22&16&22\\26&50&46&28&40\\22&46&50&20&32\\16&28&20&20&26\\22&40&32&26&35\end{bmatrix}
This is an unanswered one-year-old question, but for completeness here is an answer based on my early days using a Brunsviga calculating machine: http://www.ssplprints.com/image/90036/brunsviga-calculating-machine-c-1950 Getting eigenvalues and eigenvectors was super-tedious, as you will see, with no margin for a slip of the fingers or wrist. So the methods were not the best from the algorithmic point of view; they tended to be the best given the mode of operation.
The method to use is the power method (or, iteratively, the Rayleigh quotient) to get the largest eigenvalue. Then deflate the matrix and repeat the same process to get the next eigenvalue. I won't do it all, but merely give the flavour.
Let $\{\mathbf{e}_n\}$ be a sequence of approximations to the eigenvector $\mathbf{e}$ belonging to the greatest eigenvalue. Start with $\mathbf{e}_0 = (1,1,\dots,1)^T$, set $$\mathbf{e}_{n+1} = \mathbf{A} \mathbf{e}_n$$ and iterate. It helps, if you are doing this by hand, to rescale $\mathbf{e}_n$ at every step so that its first component is unity. The sequence of $\mathbf{e}_n$ is then a set of non-normalised approximate eigenvectors, and the scale factor divided out at step $n$ is the $n^{\text{th}}$ approximation to the largest eigenvalue. It's easier to do it than to describe it, so let's start.
The first iteration is $$ \left( \begin{array}{ccccc} 14 & 26 & 22 & 16 & 22 \\ 26 & 50 & 46 & 28 & 40 \\ 22 & 46 & 50 & 20 & 32 \\ 16 & 28 & 20 & 20 & 26 \\ 22 & 40 & 32 & 26 & 35 \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ \end{array} \right) = 100 \left( \begin{array}{c} 1 \\ 1.9 \\ 1.7 \\ 1.1 \\ 1.55 \\ \end{array} \right) $$ The second iteration is $$ \left( \begin{array}{ccccc} 14 & 26 & 22 & 16 & 22 \\ 26 & 50 & 46 & 28 & 40 \\ 22 & 46 & 50 & 20 & 32 \\ 16 & 28 & 20 & 20 & 26 \\ 22 & 40 & 32 & 26 & 35 \end{array} \right) \left( \begin{array}{c} 1 \\ 1.9 \\ 1.7 \\ 1.1 \\ 1.55 \\ \end{array} \right) = 152.5 \left( \begin{array}{c} 1 \\ 1.914 \\ 1.744 \\ 1.085 \\ 1.542 \\ \end{array} \right) $$ and the third is $$ \left( \begin{array}{ccccc} 14 & 26 & 22 & 16 & 22 \\ 26 & 50 & 46 & 28 & 40 \\ 22 & 46 & 50 & 20 & 32 \\ 16 & 28 & 20 & 20 & 26 \\ 22 & 40 & 32 & 26 & 35 \end{array} \right) \left( \begin{array}{c} 1 \\ 1.914 \\ 1.744 \\ 1.085 \\ 1.542 \\ \end{array} \right) = 153.4 \left( \begin{array}{c} 1 \\ 1.916 \\ 1.749 \\ 1.084 \\ 1.542 \\ \end{array} \right) $$ It looks like, to this precision, we have almost got there. The correct answer is $\lambda_{\max} = 153.567$, which is where the next iteration would get us. If we normalise the eigenvector to unit length we get $(0.300, 0.571, 0.521, 0.321, 0.459)$. That's the answer given by any of a number of eigenvalue calculators, e.g. http://comnuan.com/cmnn01002/ and http://www.akiti.ca/Eig5Solv.html which I used to verify this.
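The hand calculation above is mechanical enough to sketch in a few lines of Python/NumPy (the function name `power_method` is mine, not from any library):

```python
import numpy as np

# The symmetric matrix from the question.
A = np.array([
    [14, 26, 22, 16, 22],
    [26, 50, 46, 28, 40],
    [22, 46, 50, 20, 32],
    [16, 28, 20, 20, 26],
    [22, 40, 32, 26, 35],
], dtype=float)

def power_method(A, iters=50):
    """Power iteration, rescaled so the first component is 1,
    mirroring the hand calculation: the factor divided out at
    each step converges to the largest eigenvalue."""
    e = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = A @ e
        lam = w[0]      # the scale factor: nth eigenvalue estimate
        e = w / lam
    return lam, e

lam, e = power_method(A)
print(lam)  # about 153.567
print(e)    # about (1, 1.916, 1.749, 1.084, 1.542)
```

Rescaling by the first component rather than by the Euclidean norm is purely a hand-arithmetic convenience; any consistent rescaling prevents overflow and leaves the direction unchanged.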
The Rayleigh quotient method uses the approximation $$ \lambda_{\max} = \lim_{n\rightarrow \infty} \frac{\mathbf{e}_n^T\mathbf{A}\mathbf{e}_n}{\mathbf{e}_n^T\mathbf{e}_n} $$ Start with $\mathbf{e}_0 = (1,1,\dots,1)^T$, as before, and we get the first estimate $\lambda_1 = 725/5 = 145$, which is closer than the first guess from the power method above. We get the next approximation $\mathbf{e}_1$ as above and repeat. This converges faster than the power method. The power method, though, is monotonous to execute, which is a virtue if you insist on doing things by hand: routine was a good way of avoiding errors, and this is not the kind of thing you want to do twice!
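A minimal NumPy sketch of the same Rayleigh-quotient estimate, under the same setup (the variable names and iteration count are my own choices):

```python
import numpy as np

A = np.array([
    [14, 26, 22, 16, 22],
    [26, 50, 46, 28, 40],
    [22, 46, 50, 20, 32],
    [16, 28, 20, 20, 26],
    [22, 40, 32, 26, 35],
], dtype=float)

e = np.ones(5)
# First Rayleigh-quotient estimate with e_0 = (1,1,1,1,1):
lam_0 = (e @ A @ e) / (e @ e)
print(lam_0)  # 145.0, i.e. 725/5

# Iterate (multiply by A, renormalise) and read the eigenvalue off
# the Rayleigh quotient; for a symmetric matrix this converges at
# roughly twice the rate, in correct digits, of the plain
# power-method scale factor.
for _ in range(20):
    e = A @ e
    e /= np.linalg.norm(e)
lam = (e @ A @ e) / (e @ e)
print(lam)  # about 153.567
```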
But we now need to get the next eigenvalue. To do this we use the following trick. Take the eigenvector we have just found, normalised to unit length, which we will call $\mathbf{e}_1$, and the corresponding eigenvalue $\lambda_1$, and create the matrix $$ \mathbf{A}_1 = \mathbf{A} - \lambda_1 \mathbf{e}_1\mathbf{e}_1^T $$ (note the outer product $\mathbf{e}_1\mathbf{e}_1^T$, and that $\mathbf{e}_1$ must be a unit vector). We easily verify that $\mathbf{A}_1\mathbf{e}_1 = 0$ and, if $\mathbf{e}_2$ is the next eigenvector, corresponding to eigenvalue $\lambda_2$, that $\mathbf{A}_1 \mathbf{e}_2 = \lambda_2\mathbf{e}_2$, since the eigenvectors of a symmetric matrix are orthogonal and so $\mathbf{e}_1^T\mathbf{e}_2 = 0$.
The matrix $\mathbf{A}_1$ has the same eigenvalues as $\mathbf{A}$, except that $\lambda_1$ has been replaced by $0$. So the greatest eigenvalue of $\mathbf{A}_1$ is the second-ranked eigenvalue of our original $\mathbf{A}$, and the power method as illustrated above gives us that eigenvalue and the corresponding eigenvector. We then go through the same rigmarole with the next matrix in line, $\mathbf{A}_2$, which has three eigenvalues in common with $\mathbf{A}$.
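The deflation step (this trick is known as Hotelling deflation) can be sketched in the same NumPy setting; the check against `numpy.linalg.eigvalsh` at the end is my addition, not part of the hand method:

```python
import numpy as np

A = np.array([
    [14, 26, 22, 16, 22],
    [26, 50, 46, 28, 40],
    [22, 46, 50, 20, 32],
    [16, 28, 20, 20, 26],
    [22, 40, 32, 26, 35],
], dtype=float)

# Dominant eigenpair by power iteration, unit-normalised this time,
# because the deflation formula assumes |e1| = 1.
e1 = np.ones(5)
for _ in range(200):
    e1 = A @ e1
    e1 /= np.linalg.norm(e1)
lam1 = e1 @ A @ e1

# Deflate: subtract lambda_1 times the OUTER product e1 e1^T.
A1 = A - lam1 * np.outer(e1, e1)

print(np.allclose(A1 @ e1, 0))  # True: e1 is now in the null space
# The rest of the spectrum is untouched: A1 has the eigenvalues of A
# with lambda_1 replaced by 0.
print(np.sort(np.linalg.eigvalsh(A1)))
```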
I will, as the famous saying goes, leave that to the reader who has more stamina than I. This at least gives you a measure of how much our lives have been improved by electronic computers. When Nobel prize-winner Chandrasekhar did this to calculate eigenmodes of stellar pulsations, he delegated the task to his "computer", Donna Elbert, who did not get co-authorship of the paper but merely a thank-you. In those pre-WWII days, women who did calculations were called "computers".