I am currently reading this article by Robert Schaback and Maryam Pazouki about bases for kernel-based spaces. To ask my question, I'll give a brief introduction to the tools I'll use.
Let $K:\Omega\times\Omega\to\mathbb{R}$ be a symmetric positive definite kernel on $\Omega\subseteq\mathbb{R}^d$, $d\in\mathbb{N}$, and let $$\mathcal{H}\equiv\mathcal{H}(\Omega,K):=\overline{\mathbf{span}\{K(\cdot,x)\mid x\in\Omega\}}$$ be the so-called native space of $K$. We want to reconstruct functions $f\in\mathcal{H}$ from their values on a finite set $X:=\{x_1,\ldots,x_n\}\subset\Omega$, i.e. we want to find a function $s^*\in\mathcal{H}_X$, where $$\mathcal{H}_X=\mathbf{span}\{K(\cdot,x_j)\mid x_j\in X\}\subset\mathcal{H},$$ such that $f_X=s_X^*$, where $f_X:=(f(x_1),\ldots,f(x_n))^T\in\mathbb{R}^n$ and $s_X^*$ is defined analogously. Using the ansatz $$s^*=\sum_{j=1}^nc_jK(\cdot,x_j)$$ together with the condition $s_X^*=f_X$, we can rewrite the problem as a linear system of equations, $$A_{K,X}\cdot c=f_X,$$ where $A_{K,X}=(K(x_j,x_k))_{1\le j,k\le n}\in\mathbb{R}^{n\times n}$ is the kernel matrix and $c=(c_1,\ldots,c_n)^T\in\mathbb{R}^n$.

For many reasons it is useful to consider data-dependent bases $U(x):=(u_1(x),\ldots,u_n(x))$ for $\mathcal{H}_X$ other than the standard or kernel basis $$T(x):=(K(x,x_1),\ldots,K(x,x_n)).$$ To find new bases, we can simply perform a change of basis, i.e. $$u_j(x)=\sum_{k=1}^n\tilde{c}_{kj}K(x,x_k),$$ which can be rewritten as $$U(x)=T(x)\cdot C_U,$$ where $C_U=(\tilde{c}_{jk})_{1\le j,k\le n}\in\mathbb{R}^{n\times n}$ is the coefficient matrix.
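Just to make the setup concrete (this is not from the article): here is a minimal numerical sketch of the interpolation step above, assuming a Gaussian kernel and a small set of 1-D nodes; the kernel, the nodes, and the sampled function are all hypothetical choices of mine.

```python
import numpy as np

# Hypothetical Gaussian kernel K(x, y) = exp(-|x - y|^2); it is symmetric
# and positive definite, so the kernel matrix below is invertible.
def K(x, y):
    return np.exp(-np.abs(x - y) ** 2)

# Finite node set X = {x_1, ..., x_n} and data f_X (assumed example values).
X = np.array([0.0, 0.3, 0.7, 1.0])
f_X = np.sin(2 * np.pi * X)

# Kernel matrix A_{K,X} = (K(x_j, x_k))_{j,k}, built by broadcasting.
A = K(X[:, None], X[None, :])

# Solve A_{K,X} c = f_X for the coefficients of s* in the kernel basis.
c = np.linalg.solve(A, f_X)

# Interpolant s*(x) = sum_j c_j K(x, x_j).
def s_star(x):
    return K(x, X) @ c

# The interpolation conditions s*_X = f_X hold up to rounding.
print(np.allclose([s_star(x) for x in X], f_X))  # True
```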
Now, in Theorem 5.2 of this article, they derive a bound for $|s^*(x)|^2$ to ensure stability. To do so, they establish the bound $$\|U(x)\|_2^2\le K(x,x)\rho(G_U)\quad\text{for all } x\in\Omega,$$ where $\rho(G_U)$ is the spectral radius of the Gramian matrix of $U(x)$ with respect to the native space norm $\|\cdot\|_\mathcal{H}$.
And this is the part I don't understand. They obtain this estimate from the inequality $$U(x)\cdot G_U^{-1}\cdot U^T(x)\le K(x,x),$$ whose derivation I already understand.
Now my observation is that $$\|U(x)\|_2^2=(U(x),U(x))_2=U(x)\cdot U^T(x).$$ But how do they get the spectral radius of $G_U$ out of this? Maybe one of you knows. Thank you very much!
If I understood the question correctly, based on the equations provided, their result simply follows from a common inequality about quadratic forms and the eigenvalues of a matrix.
The Rayleigh quotient for a Hermitian matrix $M$ has a bound in terms of its eigenvalues. For all non-zero $v \in\mathbb{R}^n$, we have that: $$ \frac{v M v^T}{v v^T} \geq \lambda_{\min}(M)\,,$$ where $\lambda_{\min}$ denotes the minimum eigenvalue, and we assume row vectors, as in the question statement. For a positive-definite matrix $G_U$, we also have that: $$\lambda_{\min}(G_U^{-1}) = \frac{1}{\lambda_{\max}(G_U)} = \frac{1}{\rho(G_U)}.$$ The bound above then implies that: $$v G_U^{-1} v^T \geq \lambda_{\min}(G_U^{-1}) v v^T = \frac{1}{\rho(G_U)} v v^T = \frac{1}{\rho(G_U)} \lVert v \rVert_2^2, \quad \forall v \in \mathbb{R}^n\,.$$ Applying this to the original inequality about $U$ and $K$, we obtain their result: $$\frac{1}{\rho(G_U)} \lVert U(x)\rVert_2^2 \leq U(x) G_U^{-1} U(x)^T \leq K(x,x) \quad \therefore \quad \lVert U(x)\rVert_2^2 \leq K(x,x) \rho(G_U)\,.$$
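For what it's worth, this chain of inequalities is easy to check numerically. The sketch below uses the kernel (standard) basis $T(x)$, for which the native-space Gramian is just the kernel matrix, since $(K(\cdot,x_j),K(\cdot,x_k))_\mathcal{H}=K(x_j,x_k)$ by the reproducing property; the Gaussian kernel and the node set are hypothetical choices, not taken from the article.

```python
import numpy as np

# Assumed Gaussian kernel; for the kernel basis T(x) the native-space
# Gramian equals the kernel matrix A_{K,X}.
def K(x, y):
    return np.exp(-np.abs(x - y) ** 2)

X = np.array([0.0, 0.25, 0.6, 1.0])
G = K(X[:, None], X[None, :])          # Gramian of the kernel basis
rho = np.max(np.linalg.eigvalsh(G))    # spectral radius = lambda_max (SPD)
G_inv = np.linalg.inv(G)

for x in np.linspace(0.0, 1.0, 101):
    U = K(x, X)                        # row vector U(x)
    quad = U @ G_inv @ U               # U(x) G_U^{-1} U(x)^T
    # Starting inequality: U(x) G_U^{-1} U(x)^T <= K(x, x)
    assert quad <= K(x, x) + 1e-9
    # Rayleigh-quotient step: ||U(x)||_2^2 / rho(G_U) <= U(x) G_U^{-1} U(x)^T
    assert (U @ U) / rho <= quad + 1e-9
    # Combined bound: ||U(x)||_2^2 <= K(x, x) * rho(G_U)
    assert U @ U <= K(x, x) * rho + 1e-9

print("all three inequalities hold on the test grid")
```

The small tolerance only absorbs floating-point rounding; the inequalities themselves are exact.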