For $r,s>0$, let \begin{align} \begin{bmatrix} X \\ Y \end{bmatrix}=A \begin{bmatrix} r \\ s \end{bmatrix} \end{align} where $A$ is a $2 \times 2$ real-valued matrix. Let $\lambda_1$ and $\lambda_2$ be the eigenvalues of $A$. Show that, entrywise, \begin{align} \begin{bmatrix} X \\ Y \end{bmatrix}< \max(|\lambda_1|,|\lambda_2|) \begin{bmatrix} r \\ s \end{bmatrix} \end{align}
I know this should come from power-iteration theory, but it was not clear to me. I tried decomposing $A$ and writing the product as a sum of terms, but that did not yield anything. Any ideas?
You should first assume that $A$ is diagonalizable (over $\Bbb R$).
A counterexample with a non-diagonalizable matrix would be $$A = \begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}, \ \lambda_1 = \lambda_2 = 1, \ \begin{pmatrix} r \\ s \end{pmatrix} = \begin{pmatrix} 1 \\ 1\end{pmatrix}, \ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 \\ 1\end{pmatrix}$$ where $x = 2 > 1 = \max(\lvert\lambda_1\rvert, \lvert\lambda_2\rvert) \cdot r$.
Also, the inequality is not necessarily strict. Take $A = I$, so $\lambda_1 = \lambda_2 = 1$; then for any $r, s > 0$ $$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} r \\ s\end{pmatrix}$$ and equality holds in both components. (A zero vector would also work, but it is excluded by the hypothesis $r, s > 0$.)
But even those assumptions are not enough. Indeed, another counterexample, this time with a diagonalizable matrix, would be $$A = \begin{pmatrix} 2 & 1 \\ 0 & 1\end{pmatrix}, \ \lambda_1 = 2, \ \lambda_2 = 1, \ \begin{pmatrix} r \\ s \end{pmatrix} = \begin{pmatrix} 1 \\ 1\end{pmatrix}, \ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 1\end{pmatrix}$$ where $x = 3 > 2 = \max(\lvert\lambda_1\rvert, \lvert\lambda_2\rvert) \cdot r$.
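Both counterexamples are easy to confirm numerically; here is a quick sanity-check sketch with NumPy (the matrices and vectors are exactly the ones above):

```python
import numpy as np

v = np.array([1.0, 1.0])  # the vector (r, s) = (1, 1)

# Non-diagonalizable case: A = [[1,1],[0,1]], both eigenvalues equal 1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
x, y = A @ v
max_abs_eig = np.max(np.abs(np.linalg.eigvals(A)))  # = 1
print(x, y)                     # 2.0 1.0
print(x <= max_abs_eig * v[0])  # False: 2 > 1 * 1

# Diagonalizable case: A = [[2,1],[0,1]], eigenvalues 2 and 1.
B = np.array([[2.0, 1.0], [0.0, 1.0]])
x, y = B @ v
max_abs_eig = np.max(np.abs(np.linalg.eigvals(B)))  # = 2
print(x, y)                     # 3.0 1.0
print(x <= max_abs_eig * v[0])  # False: 3 > 2 * 1
```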
Let's write everything out and see where the problem arises.
Assume that $A$ is diagonalizable and let $\{v_1, v_2\}$ be a basis of eigenvectors, $Av_i = \lambda_i v_i$.
Write $\begin{pmatrix} r \\ s \end{pmatrix} = \alpha_1 v_1 + \alpha_2 v_2$. Then you have
$$A \begin{pmatrix} r \\ s \end{pmatrix} = A(\alpha_1 v_1 + \alpha_2 v_2) = \alpha_1 \lambda_1 v_1 + \alpha_2 \lambda_2 v_2 = \begin{pmatrix} x \\ y \end{pmatrix}$$ i.e. $$x = \alpha_1 \lambda_1 (v_1)_1 + \alpha_2 \lambda_2 (v_2)_1 \\ y = \alpha_1 \lambda_1 (v_1)_2 + \alpha_2 \lambda_2 (v_2)_2 $$
You would like to write $$x \leq \max \{ \lvert \lambda_1 \rvert, \lvert \lambda_2 \rvert \} (\alpha_1 (v_1)_1 + \alpha_2 (v_2)_1) = \max \{ \lvert \lambda_1 \rvert, \lvert \lambda_2 \rvert \} \cdot r \\ y \leq \max \{ \lvert \lambda_1 \rvert, \lvert \lambda_2 \rvert \} (\alpha_1 (v_1)_2 + \alpha_2 (v_2)_2) = \max \{ \lvert \lambda_1 \rvert, \lvert \lambda_2 \rvert \} \cdot s$$
This is true if, for example, you add the assumption that $\alpha_1 (v_1)_1$, $\alpha_2 (v_2)_1$, $\alpha_1 (v_1)_2$, $\alpha_2 (v_2)_2$ are all $\geq 0$. This is not guaranteed by $r, s \gt 0$ alone.
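To see the sign problem concretely, take the diagonalizable counterexample $A = \begin{pmatrix} 2 & 1 \\ 0 & 1\end{pmatrix}$, whose eigenpairs are $\lambda_1 = 2$ with $v_1 = (1, 0)$ and $\lambda_2 = 1$ with $v_2 = (1, -1)$. A short NumPy sketch computes the coefficients $\alpha_1, \alpha_2$ of $(r, s) = (1, 1)$ in this eigenbasis:

```python
import numpy as np

# Eigenvectors of A = [[2,1],[0,1]]: A v1 = 2 v1, A v2 = 1 * v2.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, -1.0])
rs = np.array([1.0, 1.0])  # the vector (r, s), both entries positive

# Solve alpha_1 * v1 + alpha_2 * v2 = (r, s).
alpha = np.linalg.solve(np.column_stack([v1, v2]), rs)
print(alpha)             # alpha_1 = 2, alpha_2 = -1
print(alpha[1] * v2[0])  # alpha_2 (v2)_1 = -1 < 0, despite r, s > 0
```

So $\alpha_2 (v_2)_1 = -1 < 0$: positivity of $r$ and $s$ does not carry over to the coefficients in the eigenbasis, which is exactly where the attempted termwise bound breaks.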
Usually, the result we have is the following: given a matrix (or operator) norm $\lvert \lvert \lvert \cdot \rvert \rvert \rvert$ induced by a vector norm $\lvert \lvert \cdot \rvert \rvert$ (see https://en.wikipedia.org/wiki/Matrix_norm ), we get $$\lvert \lvert \begin{pmatrix} x \\ y \end{pmatrix} \rvert \rvert = \lvert \lvert A \begin{pmatrix} r \\ s \end{pmatrix} \rvert \rvert \leq \lvert \lvert \lvert A \rvert \rvert \rvert \cdot \lvert \lvert \begin{pmatrix} r \\ s \end{pmatrix} \rvert \rvert$$ and this needs no assumptions on $A$ or on the vectors.
For example, $\lvert \lvert \lvert A \rvert \rvert \rvert_2 = \max \{ \lvert \lambda_1 \rvert , \lvert \lambda_2 \rvert \}$ if we consider the matrix norm $\lvert \lvert \lvert \cdot \rvert \rvert \rvert_2$ induced by the Euclidean norm $\lvert \lvert \cdot \rvert \rvert_2$ and we assume $A$ is symmetric.
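Both facts are easy to check numerically. A sketch with NumPy, using an arbitrary symmetric matrix of my choosing for the first check and the non-symmetric counterexample from above for the second:

```python
import numpy as np

# For a symmetric matrix, the spectral norm |||A|||_2 (largest singular
# value) equals the largest |eigenvalue|.
S = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3
spec_norm = np.linalg.norm(S, 2)
max_abs_eig = np.max(np.abs(np.linalg.eigvalsh(S)))
print(spec_norm, max_abs_eig)           # both equal 3

# The induced-norm bound ||A v||_2 <= |||A|||_2 * ||v||_2 needs no
# assumptions at all, e.g. for the non-symmetric counterexample:
A = np.array([[2.0, 1.0], [0.0, 1.0]])
v = np.array([1.0, 1.0])
print(np.linalg.norm(A @ v) <= np.linalg.norm(A, 2) * np.linalg.norm(v))  # True
```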