This question arises as a result of a close reading of a proof in the following paper:
Buckwar, E. & Winkler, R. Multistep methods for SDEs and their application to problems with small noise. SIAM J. Numer. Anal. 44, 779–803 (2006).
At one part of the proof we have the following:
A crucial point for the subsequent calculations is to find a scalar product inducing a matrix norm such that this norm of the matrix $\mathcal{A}$ is less than or equal to $1$. This is possible if the eigenvalues of the Frobenius matrix $\mathcal{A}$ lie inside the unit circle of the complex plane and are simple if their modulus is equal to $1$. The eigenvalues of $\mathcal{A}$ are the roots of the characteristic polynomial $\rho$, and due to the assumption that Dahlquist's root condition is satisfied they have the required property. Then there exists a nonsingular matrix $\mathcal{C}$ with a block structure like $\mathcal{A}$ such that $\|\mathcal{C}^{-1}\mathcal{A}\mathcal{C}\|_2 \leq 1$, where $\|\cdot\|_2$ denotes the spectral matrix norm induced by the Euclidean vector norm on $\mathbb{R}^{k\times n}$. We can thus choose a scalar product for $\mathcal{X},\mathcal{Y} \in \mathbb{R}^{k\times n}$ as $$\langle\mathcal{X} , \mathcal{Y}\rangle_\ast = \langle\mathcal{C}^{-1}\mathcal{X} , \mathcal{C}^{-1}\mathcal{Y}\rangle_2$$ and then have $|\cdot|_\ast$ as the induced vector norm on $\mathbb{R}^{k\times n}$ and $\|\cdot\|_\ast$ as the induced matrix norm, with $\|\mathcal{A}\|_{\ast} = \|\mathcal{C}^{-1}\mathcal{A}\mathcal{C}\|_2 \leq 1$.
The key sentence is "Then there exists a nonsingular matrix $\mathcal{C}$..." Why is this true?
The matrix $\mathcal{A}$ is a Frobenius matrix of the following form (actually a Frobenius matrix $\otimes$ an identity matrix, but this is a detail):
$$\begin{pmatrix} -a_0\mathbb{I}_d & -a_1\mathbb{I}_d & \cdots & -a_{k-1}\mathbb{I}_d \\ \mathbb{I}_d & 0 & \cdots & 0 \\ & \ddots & \ddots & \vdots \\ 0 & & \mathbb{I}_d & 0 \end{pmatrix}$$
This matrix arises as the companion matrix of a linear multistep IVP solver, which is assumed to satisfy Dahlquist's root condition: all of its eigenvalues have modulus at most $1$, and any eigenvalue of modulus exactly $1$ is simple.
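As a concrete instance (my own numerical sketch, not from the paper — the choice of the two-step Adams–Bashforth method, and the index convention matching the block layout above, are my assumptions):

```python
import numpy as np

# Hypothetical illustration: the companion matrix in the block layout
# above, for the two-step Adams-Bashforth method, whose characteristic
# polynomial is rho(z) = z^2 - z.  With first block row
# (-a_0 I, ..., -a_{k-1} I), the characteristic polynomial of A is
# z^k + a_0 z^{k-1} + ... + a_{k-1}, so here a_0 = -1, a_1 = 0.
k, d = 2, 1                       # k steps, problem dimension d
a = np.array([-1.0, 0.0])         # a_0, ..., a_{k-1}

A = np.zeros((k * d, k * d))
A[:d, :] = np.hstack([-a[j] * np.eye(d) for j in range(k)])
A[d:, :-d] = np.eye((k - 1) * d)  # identity blocks on the subdiagonal

# Dahlquist's root condition: |lambda| <= 1, with |lambda| = 1 only
# for simple eigenvalues.  Here the eigenvalues are 1 (simple) and 0.
moduli = np.sort(np.abs(np.linalg.eigvals(A)))
print(moduli)
```

The same construction works for any $k$, $d$ and coefficient vector; the root condition is then just the printed moduli being $\le 1$, with multiplicity one at modulus $1$.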
This matrix is in general defective and hence not diagonalisable (a companion matrix is nonderogatory, so it fails to be diagonalisable whenever $\rho$ has a repeated root). If it were diagonalisable, the statement would be true immediately, since we would have $\mathcal{A} = \mathcal{C}\mathcal{D}\mathcal{C}^{-1}$ for some diagonal $\mathcal{D} = \mathcal{C}^{-1}\mathcal{A}\mathcal{C}$, which due to the nature of the eigenvalues clearly has matrix $2$-norm less than or equal to $1$.
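A numerical aside of my own (the example matrix is arbitrary): in the diagonalisable case the claim really is immediate, because conjugating by the eigenvector matrix yields a diagonal matrix whose $2$-norm is the largest eigenvalue modulus.

```python
import numpy as np

# Sketch of the diagonalisable case: if A = C D C^{-1} with D diagonal,
# then ||C^{-1} A C||_2 = ||D||_2 = max_i |lambda_i|.
A = np.array([[0.5, 0.3],
              [0.2, 0.4]])         # an arbitrary diagonalisable example

eigvals, C = np.linalg.eig(A)      # columns of C are eigenvectors
D = np.linalg.inv(C) @ A @ C       # numerically diagonal

# Spectral norm of D vs. the spectral radius of A.
print(np.linalg.norm(D, 2), np.max(np.abs(eigvals)))
```

Here both printed numbers agree (up to roundoff), and since all eigenvalue moduli are below $1$, so is $\|\mathcal{C}^{-1}\mathcal{A}\mathcal{C}\|_2$.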
A couple of other references,
(A) Hairer, Nørsett & Wanner, "Solving Ordinary Differential Equations", Chapter III, Lemma 4.4
(B) Plato, "Concise Numerical Mathematics", Lemma 8.15
both solving a related but not identical problem, proceed by finding a matrix similar to $\mathcal{A}$ in Jordan normal form and bounding the max-norm of that Jordan matrix. But this trick doesn't work for the $2$-norm (at least, not that I can see), which is what is required here: later in the proof the norm must be one induced by an inner product, as stated in the opening sentence of the quoted passage.
Can anyone help me argue that such a $\mathcal{C}$ exists?
Using similarity, it suffices to construct such a matrix for each Jordan block $$ J = \begin{pmatrix} \lambda & 1 \\ & \ddots & \ddots& \\ && \ddots & 1 \\ &&&\lambda\end{pmatrix}.$$ Blocks of size $1$ have $2$-norm $|\lambda| \le 1$ and need no treatment; by the assumptions on the eigenvalues, any block of size at least $2$ has $|\lambda|<1$, since eigenvalues of modulus $1$ are simple. For $t>0$ define the diagonal matrix $D=\operatorname{diag}(t,t^2,\dots,t^n)$. Then $$ DJD^{-1} = \begin{pmatrix} \lambda & t^{-1} \\ & \ddots & \ddots& \\ && \ddots & t^{-1} \\ &&&\lambda \end{pmatrix}. $$ For every $x$ it holds that $$\|DJD^{-1}x\|_2 \le \|\lambda x \|_2 + t^{-1} \|x\|_2 = (|\lambda| + t^{-1})\|x\|_2.$$ Hence for $t \ge \frac{2}{1-|\lambda|}$ it holds that $\|DJD^{-1}\|_2 \le \frac{|\lambda|+1}2<1$. Composing the Jordan transformation with these blockwise scalings gives the desired $\mathcal{C}$: the conjugated matrix is block diagonal, so its $2$-norm is the maximum of the norms of its blocks, which is at most $1$.
The idea is from the proof of Lemma 5.6.10 in Horn & Johnson, "Matrix Analysis".
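A quick numerical check of the scaling argument above (my own sketch; the specific values $n = 4$, $\lambda = 0.9$ are arbitrary):

```python
import numpy as np

# Sketch verifying the scaling argument: J is an n x n Jordan block
# with eigenvalue lam, |lam| < 1, and D = diag(t, t^2, ..., t^n).
n, lam = 4, 0.9
J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)

t = 2.0 / (1.0 - abs(lam))    # ensures t^{-1} = (1 - |lam|) / 2
D = np.diag(t ** np.arange(1, n + 1))
Dinv = np.diag(t ** -np.arange(1, n + 1))

M = D @ J @ Dinv              # same as J but with 1/t on the superdiagonal
norm = np.linalg.norm(M, 2)   # spectral (2-)norm

print(norm, (abs(lam) + 1) / 2)
```

The printed spectral norm stays below $(|\lambda|+1)/2 = 0.95$, as the triangle-inequality bound predicts.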