When I read the book Iterative Methods for Sparse Linear Systems, Second Edition, I got stuck on the following proof. The yellow-highlighted parts are the places I have trouble understanding. It may be something really obvious, but for me it is not :(.

Technically, the first highlighted remark is not correct as written, because it equates row vectors and column vectors. However, the correct statement, and the one the author actually uses, is that for a normal matrix $A$, $Ax=\lambda x$ iff $A^{H}x=\overline{\lambda}x$.
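As a numerical sanity check of that statement (my own construction, not from the book), we can build a normal matrix as a unitary conjugation of a diagonal matrix and verify that each eigenvector of $A$ is also an eigenvector of $A^{H}$ with the conjugate eigenvalue:

```python
import numpy as np

# A = Q D Q^H with Q unitary and D diagonal is normal by construction.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
D = np.diag([1 + 2j, -3j, 0.5, 2.0])          # distinct eigenvalues
A = Q @ D @ Q.conj().T

assert np.allclose(A @ A.conj().T, A.conj().T @ A)        # A is normal

lams, X = np.linalg.eig(A)
for lam, x in zip(lams, X.T):
    assert np.allclose(A @ x, lam * x)                    # A x = lam x
    assert np.allclose(A.conj().T @ x, np.conj(lam) * x)  # A^H x = conj(lam) x
print("adjoint eigenvector identity verified")
```

For a non-normal matrix the second assertion would generally fail, which is exactly why normality is needed in the remark.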
Next, the term "elementary zero divisors" is not defined. What the author is referring to here is the Jordan canonical form. For a root $\lambda$ of the characteristic polynomial $p$ of $A$, the restriction of $A$ to a Jordan block associated with $\lambda$ has the property that there is a least $k$ for which $(A-\lambda I)^{k}x=0$ for the vectors $x$ associated with that block, and $k$ is, of course, equal to the size of the block. The block has size 1 precisely when $(A-\lambda I)x=0$ for the vectors $x$ associated with the block. By "no elementary zero divisor" the author means that $(A-\lambda I)^{k}x=0$ for some $k > 0$ iff $(A-\lambda I)x=0$. This is an interesting property of normal operators: it forces every Jordan block to have size 1, and ultimately forces the Jordan form to be diagonal, which means there is a full basis of eigenvectors.
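To see concretely what normality rules out (my own illustration, not the author's), take a size-2 Jordan block: it has a vector $u$ with $(A-\lambda I)^{2}u=0$ but $(A-\lambda I)u\ne 0$, and, consistently with the claim above, the block is not normal:

```python
import numpy as np

lam = 3.0
J = np.array([[lam, 1.0],
              [0.0, lam]])       # size-2 Jordan block for eigenvalue lam
N = J - lam * np.eye(2)          # nilpotent part

u = np.array([0.0, 1.0])         # generalized eigenvector of the block
assert np.allclose(N @ (N @ u), 0)   # (J - lam I)^2 u = 0 ...
assert not np.allclose(N @ u, 0)     # ... but (J - lam I) u != 0

# J fails the normality test J J^H = J^H J:
assert not np.allclose(J @ J.T, J.T @ J)
print("size-2 Jordan block: nilpotent of index 2, and not normal")
```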
What the author does to prove "no elementary zero divisors" is to assume $(A-\lambda I)^{2}x=0$ for some $x$, and then show that $(A-\lambda I)x=0$. That is enough to do the job, because $(A-\lambda I)^{k}x=0$ for $k > 2$ can be written as $(A-\lambda I)^{2}\left[(A-\lambda I)^{k-2}x\right]=0$, which then implies $(A-\lambda I)\left[(A-\lambda I)^{k-2}x\right]=0$, i.e. $(A-\lambda I)^{k-1}x=0$; repeating that argument as many times as needed reduces everything to $(A-\lambda I)x=0$. That is what eliminates all Jordan blocks except blocks of size 1, and gives you a diagonal Jordan form.
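For instance, with $k=3$ the reduction runs in two applications of the $k=2$ case:
$$
(A-\lambda I)^{3}x=0
\;\Longrightarrow\;
(A-\lambda I)^{2}\bigl[(A-\lambda I)x\bigr]=0
\;\Longrightarrow\;
(A-\lambda I)\bigl[(A-\lambda I)x\bigr]=0
\;\Longrightarrow\;
(A-\lambda I)x=0,
$$
where the middle implication applies the $k=2$ case to the vector $(A-\lambda I)x$, and the last one applies it again, this time to $x$ itself, since $(A-\lambda I)^{2}x=0$.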
To prove the case $k=2$, the author assumes $(A-\lambda_{i} I)^{2}u_{i}=0$ and sets $v_{i}=(A-\lambda_{i} I)u_{i}$, which is then either zero or an eigenvector with eigenvalue $\lambda_{i}$. The author shows that $v_{i}=0$ in order to obtain the desired result. First note that $Au_{i}=\lambda_{i} u_{i}+v_{i}$ and $Av_{i}=\lambda_{i} v_{i}$; hence, by the remark above, also $A^{H}v_{i}=\overline{\lambda_{i}}v_{i}$. Putting these pieces together yields $$ \lambda_{i}(u_{i},v_{i})+(v_{i},v_{i}) = (Au_{i},v_{i}) = (u_{i},A^{H}v_{i}) = \lambda_{i}(u_{i},v_{i}). $$ The inevitable conclusion is $(v_{i},v_{i})=0$, i.e. $0=v_{i}=(A-\lambda_{i}I)u_{i}$. In other words, $$ (A-\lambda_{i}I)^{2}u_{i} = 0 \implies (A-\lambda_{i}I)u_{i}=0. $$
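The conclusion can also be checked numerically (again my own construction, not from the book): for a normal matrix $A$ and an eigenvalue $\lambda$, the null spaces of $(A-\lambda I)$ and $(A-\lambda I)^{2}$ coincide, so squaring does not pick up any generalized eigenvectors:

```python
import numpy as np

# Normal matrix with a repeated eigenvalue lam, built as Q D Q^H.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
lam = 2.0 - 1.0j
D = np.diag([lam, lam, 1.0, -1.0, 3.0j])   # lam is a double eigenvalue
A = Q @ D @ Q.conj().T                     # normal by construction

N = A - lam * np.eye(5)
nullity = lambda M: M.shape[1] - np.linalg.matrix_rank(M)
assert nullity(N) == 2
assert nullity(N @ N) == 2    # same nullity: ker N^2 = ker N, no Jordan growth
print("ker (A - lam I)^2 == ker (A - lam I) for this normal matrix")
```

For the non-normal Jordan block shown earlier, the nullity would jump from 1 to 2 when squaring, which is precisely the behavior the proof excludes.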