Let $A$ be an $N \times N$ real symmetric matrix with zero row sums, with row vectors $a_1, \ldots, a_N$. For example,
$A= \begin{bmatrix} a_1 \\a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 2 & -1 & -1 \\-1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$.
Since $A$ is singular, one of its rows, say $a_N$, can be written as a linear combination of the others: $a_N=\beta_1 a_1 + \cdots + \beta_{N-1} a_{N-1}$. The coefficients $\beta_j$ can be computed as $\beta = [\beta_1, \ldots, \beta_{N-1}] = a_N \tilde{A}^{\dagger}$, where $\tilde{A} \in \mathbb{R}^{(N-1) \times N}$ is obtained by removing the last row of $A$ and $(\cdot)^{\dagger}$ denotes the Moore–Penrose pseudoinverse.
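For concreteness, here is a short NumPy sketch of this computation on the example above (the variable names `A_tilde` and `beta` are my own):

```python
import numpy as np

# The example matrix: symmetric with zero row sums, hence singular.
A = np.array([[ 2., -1., -1.],
              [-1.,  1.,  0.],
              [-1.,  0.,  1.]])

A_tilde = A[:-1, :]                        # drop the last row: (N-1) x N
beta = A[-1, :] @ np.linalg.pinv(A_tilde)  # beta = a_N @ pinv(A_tilde)
print(beta)                                # [-1. -1.]  (up to rounding)
```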
Now consider the $(N-1) \times (N-1)$ matrix $B$ with entries $B(i,j) = A(i,j) + \beta(j)\, A(i,N)$ for $1 \leq i \leq N-1$ and $1 \leq j \leq N-1$.
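Continuing the snippet above, $B$ is just the leading $(N-1)\times(N-1)$ block of $A$ plus a rank-one update built from the last column and $\beta$:

```python
# B(i,j) = A(i,j) + beta(j) * A(i,N): a rank-one update of the leading block.
B = A[:-1, :-1] + np.outer(A[:-1, -1], beta)
print(B)  # [[ 3.  0.]
          #  [-1.  1.]]
```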
My question is: why are the eigenvalues of $B$ exactly the non-zero eigenvalues of $A$?
As an example, consider the matrix $A$ above. One can check that $a_3 = -a_1 - a_2$, so $\beta(1)=\beta(2) = -1$. Then
$B = \begin{bmatrix} 2 & -1 \\-1 & 1 \end{bmatrix} - \begin{bmatrix} -1 & -1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\-1 & 1 \end{bmatrix}$.
The eigenvalues of $A$ are $0$, $1$, and $3$, while the eigenvalues of $B$ are $1$ and $3$. Why?
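A quick numerical check (continuing the snippet above) reproduces this:

```python
print(np.sort(np.linalg.eigvals(A)))  # [0. 1. 3.]  (up to rounding)
print(np.sort(np.linalg.eigvals(B)))  # [1. 3.]
```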
The result requires neither that $A$ be symmetric nor that it have zero row sums. I will write $a_k^*$ for the rows of $A$.
Suppose $\lambda$ is a nonzero eigenvalue of $A$ with eigenvector $v$, so $Av=\lambda v$. Row by row, this says $$ a_k^*v=\lambda v_k, \qquad k=1,\ldots,N. $$ In particular, $$ \lambda v_N=a_N^*v=\Big(\sum_{k=1}^{N-1}\beta_k a_k^*\Big)v=\sum_{k=1}^{N-1}\beta_k\lambda v_k. $$ Since $\lambda\ne0$, we get $$v_N=\sum_{k=1}^{N-1}\beta_k v_k.$$ Now let $v_0=(v_1,\ldots,v_{N-1})^T$. For all $k=1,\ldots,N-1$ we have \begin{align} (Bv_0)_k&=\sum_{j=1}^{N-1}B_{kj}v_j=\sum_{j=1}^{N-1}(A_{kj}+\beta_jA_{kN})v_j =\sum_{j=1}^{N-1}A_{kj}v_j+A_{kN}\sum_{j=1}^{N-1}\beta_{j}v_j\\ &=\sum_{j=1}^{N-1}A_{kj}v_j+A_{kN}v_N =\sum_{j=1}^{N}A_{kj}v_j\\ &=(Av)_k=\lambda v_k. \end{align} Moreover $v_0\ne0$: if it were zero, then $v_N=\sum_{k}\beta_k v_k=0$ as well, so $v=0$, which is impossible for an eigenvector. Thus $Bv_0=\lambda v_0$, and $\lambda$ is an eigenvalue of $B$.
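This correspondence is easy to see numerically: truncating an eigenvector of $A$ for a nonzero eigenvalue gives an eigenvector of $B$. A sketch, continuing the snippet from the question:

```python
lams, vecs = np.linalg.eig(A)
for lam, v in zip(lams, vecs.T):    # columns of vecs are eigenvectors of A
    if abs(lam) > 1e-10:            # only nonzero eigenvalues carry over
        v0 = v[:-1]                 # truncate: drop the last component
        assert np.allclose(B @ v0, lam * v0)
```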
Conversely, if $\lambda$ is an eigenvalue of $B$, take $v_0\ne0$ with $Bv_0=\lambda v_0$ and form $v=\big(v_1,\ldots,v_{N-1},\sum_{j=1}^{N-1}\beta_jv_j\big)^T$. The same computation as above gives $(Av)_k=(Bv_0)_k=\lambda v_k$ for $k\le N-1$, and the dependence $a_N^*=\sum_{k=1}^{N-1}\beta_k a_k^*$ gives $(Av)_N=\sum_{k=1}^{N-1}\beta_k\lambda v_k=\lambda v_N$. Hence $Av=\lambda v$.
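In the other direction, each eigenvector of $B$ extends to one of $A$ by appending $\sum_j \beta_j v_j$ (again continuing the same snippet):

```python
lams_B, vecs_B = np.linalg.eig(B)
for lam, v0 in zip(lams_B, vecs_B.T):
    v = np.append(v0, beta @ v0)    # append v_N = sum_j beta_j v_j
    assert np.allclose(A @ v, lam * v)
```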
In summary, the eigenvalues of $B$ are precisely the eigenvalues of $A$ with a zero removed from the list.