Let $A$ be positive definite but not necessarily symmetric (edit: and real).
Why is $I - \alpha A$ a contraction for sufficiently small $\alpha$?
I see why this is the case if $A$ is symmetric, since it then has an eigendecomposition, giving
$$I - \alpha A = Q(I - \alpha \Lambda)Q^T,$$
where $\Lambda$ is diagonal with the positive eigenvalues of $A$ on the diagonal. But what can be said if $A$ is not symmetric? Is it valid to use a singular value decomposition instead, note that the singular values must be positive, and somehow argue that way?
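For concreteness, here is a small numerical check of the symmetric case (a sketch using NumPy; the chosen eigenvalues, step size, and matrix size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric positive definite A = Q diag(eigvals) Q^T
# from a random orthogonal Q and chosen positive eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
eigvals = np.array([0.5, 1.0, 2.0, 3.0])  # arbitrary positive spectrum
A = Q @ np.diag(eigvals) @ Q.T

alpha = 0.1          # small enough: alpha < 2 / max(eigvals)
M = np.eye(4) - alpha * A

# For symmetric A, the spectral (2-)norm of I - alpha*A equals
# max |1 - alpha*lambda| over the eigenvalues, and is below 1.
print(np.linalg.norm(M, 2), max(abs(1 - alpha * eigvals)))
```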
The question comes from reading Reinforcement Learning by Sutton & Barto and the boxed text Proof of Convergence of Linear TD(0) and also Reinforcement Learning: Algorithms and Convergence by Heitzinger (Theorem 6.1).
The eigenvalues of $I-\alpha A$ are of the form $1-\alpha\lambda$ for $\lambda\in\Lambda(A)$, the spectrum of $A$. Positive definiteness gives $\operatorname{Re}(\lambda)>0$: for an eigenpair $Av=\lambda v$, $\operatorname{Re}(\lambda)\,\|v\|^2 = \operatorname{Re}(v^*Av) = v^*\tfrac{A+A^T}{2}v > 0$. Then
$$ |1-\alpha\lambda|^2 = (1-\alpha \operatorname{Re}(\lambda))^2 + \alpha^2\operatorname{Im}(\lambda)^2 = 1 - 2\alpha \operatorname{Re}(\lambda) + \alpha^2|\lambda|^2, $$
which is less than $1$ whenever
$$\alpha < 2 \operatorname{Re}(\lambda)/|\lambda|^2,$$
so it suffices to impose
$$ \alpha < 2 \min_{\lambda\in\Lambda(A)} \operatorname{Re}(\lambda)/|\lambda|^2. $$
Notice that $\operatorname{Re}(\lambda)/|\lambda|^2$ is always positive thanks to $\operatorname{Re}(\lambda)>0$, and the minimum over the finitely many eigenvalues is therefore positive as well.

One caveat: this makes the spectral radius $\rho(I-\alpha A)<1$. Since $A$ need not be normal, that does not directly bound the Euclidean operator norm; but $\rho(M)<1$ implies $\|M\|<1$ in *some* induced matrix norm and is equivalent to $M^k\to 0$, which is the sense of contraction needed for the convergence arguments in those references.
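The bound can be checked numerically (a sketch, assuming NumPy; constructing a nonsymmetric positive definite $A$ as a positive definite symmetric part plus a skew-symmetric part is just one illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Real positive definite, nonsymmetric A: x^T A x depends only on the
# symmetric part, so make that part positive definite and add skew noise.
S = rng.standard_normal((n, n))
sym = S @ S.T + n * np.eye(n)   # symmetric positive definite part
K = rng.standard_normal((n, n))
A = sym + (K - K.T)             # skew part does not affect x^T A x

lam = np.linalg.eigvals(A)
assert np.all(lam.real > 0)     # positive definiteness forces Re(lambda) > 0

# Step-size bound from the answer: alpha < 2 min Re(lambda) / |lambda|^2.
alpha_max = 2 * np.min(lam.real / np.abs(lam) ** 2)

for alpha in [0.5 * alpha_max, 0.9 * alpha_max]:
    rho = np.max(np.abs(np.linalg.eigvals(np.eye(n) - alpha * A)))
    print(alpha, rho)           # spectral radius strictly below 1
```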