Suppose one wants to reconstruct a function $f$, which we assume to be an element of a Hilbert space $(\mathcal{H}(\Omega,K),(\cdot,\cdot)_K)$ of functions $\Omega\to\mathbb{R}$ with reproducing kernel (i.e., a symmetric positive definite function) $K:\Omega\times\Omega\to\mathbb{R}$, $\Omega\subset\mathbb{R}^d$, from its values on a finite set $X:=\{x_1,\ldots,x_n\}\subset\Omega$ of points. One then makes the ansatz $$s=\sum_{j=1}^n c_j K(\cdot,x_j)\in\text{span}\{K(\cdot,x_j)\mid 1\le j\le n\}=:\mathcal{H}(X,K)\subset\mathcal{H}(\Omega,K).$$ Combining this with the interpolation conditions $s(x_j)=f(x_j)$ for all $1\le j\le n$ yields a linear system of equations.
It is well known that using the kernel basis $T:=\{K(\cdot,x_j)\mid 1\le j\le n\}$ for the reconstruction space $\mathcal{H}(X,K)$ brings stability problems with it, since the kernel matrix $A_{K,X}=(K(x_j,x_k))_{1\le j,k\le n}$ is typically ill-conditioned. Therefore, one might consider alternative bases $S=\{s_1,\ldots,s_n\}$, which we can obtain simply by a basis transformation. For later use, let $G_S$ denote the $K$-Gramian matrix of such a basis $S$, i.e. $(G_S)_{j,k}=(s_j,s_k)_K$. The ansatz now is $$s=\sum_{j=1}^n \tilde{c}_j s_j\in\mathcal{H}(X,K),$$ and we write $\tilde{c}=(\tilde{c}_1,\ldots,\tilde{c}_n)$ and $S(x)=(s_1(x),\ldots,s_n(x))$.
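To make the ill-conditioning concrete, here is a small numerical sketch of my own (the Gaussian kernel $K(x,y)=e^{-\varepsilon^2|x-y|^2}$ on equispaced points is an arbitrary choice, not taken from the references):

```python
import numpy as np

# Illustrative sketch: kernel matrix A_{K,X} = (K(x_j, x_k)) for the
# Gaussian kernel K(x, y) = exp(-eps^2 |x - y|^2) on equispaced points.
def kernel_matrix(x, eps=1.0):
    d = x[:, None] - x[None, :]
    return np.exp(-(eps * d) ** 2)

for n in (5, 10, 20):
    x = np.linspace(0.0, 1.0, n)
    A = kernel_matrix(x)
    # the condition number grows rapidly with n
    print(n, np.linalg.cond(A))
```

Already for moderate $n$ the condition number is far beyond what is usable in double precision, which is exactly the motivation for switching bases.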
In this article by Robert Schaback and Maryam Pazouki, the following bounds are proved: $$|s^*(x)|^2\le \|S(x)\|_2^2\cdot\|\tilde{c}\|_2^2$$ and $$\|S(x)\|_2^2\le K(x,x)\,\rho(G_S),\quad\|\tilde{c}\|_2^2\le\|f\|_K^2\,\rho(G_S^{-1}),$$ and therefore $$|s^*(x)|^2\le K(x,x)\,\|f\|_K^2\,\text{cond}_2(G_S)\quad\text{for every }x\in\Omega,$$ where $s^*\in\mathcal{H}(X,K)$ is the unique interpolant of $f\in\mathcal{H}(\Omega,K)$ on $X\subset\Omega$ and $\rho(G_S)$ denotes the spectral radius of $G_S$.
We can clearly see that $G_S=I$ would be ideal here, which suggests considering a $K$-orthonormal basis. For $G_S=A_{K,X}$, which is exactly the case of the kernel basis $S=T$, we get a poor bound, since $A_{K,X}$ is ill-conditioned.
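Here is a sketch (my own, again with an assumed Gaussian kernel) comparing the two extreme cases. A $K$-orthonormal basis can be obtained from the Cholesky factorization $A_{K,X}=LL^\top$ via $S(x)=T(x)L^{-\top}$ (this is the Newton basis); its Gramian is $L^{-1}A_{K,X}L^{-\top}=I$, while the kernel basis has Gramian $A_{K,X}$. Both bases nevertheless produce the identical interpolant value:

```python
import numpy as np

# Sketch with an assumed Gaussian kernel; nodes, data and eps are arbitrary choices.
def K(x, y, eps=3.0):
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

x = np.linspace(0.0, 1.0, 8)        # nodes X
f = np.sin(2.0 * np.pi * x)         # data f(x_j)
A = K(x, x)                         # kernel matrix = Gramian of the kernel basis T
L = np.linalg.cholesky(A)           # A = L L^T

# Gramian of the Newton basis S(x) = T(x) L^{-T}: equals L^{-1} A L^{-T} = I
G_S = np.linalg.solve(L, np.linalg.solve(L, A).T)

xe = np.array([0.37])               # arbitrary evaluation point
Tx = K(xe, x)                       # kernel-basis values at xe
c = np.linalg.solve(A, f)           # coefficients in the kernel basis
c_tilde = np.linalg.solve(L, f)     # Newton coefficients: values at nodes give L c~ = f|_X
Sx = np.linalg.solve(L, Tx.T).T     # Newton-basis values at xe

print(Tx @ c, Sx @ c_tilde)         # the same interpolant value s*(xe) in both bases
print(np.linalg.cond(A), np.linalg.cond(G_S))   # ill-conditioned vs. essentially 1
```

So the function $s^*$ is basis-independent, but $\text{cond}_2(G_S)$ in the bound above changes drastically with the basis.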
What I don't understand is the following. The interpolant $s^*\in\mathcal{H}(X,K)$ is unique, no matter which basis we use. So how is it possible that we have a bound depending on the choice of basis? That doesn't really make sense to me, since it is the same function in every basis. In this article by Stefan Müller and Robert Schaback, the following is proved for a special orthonormal basis (the Newton basis), but it can be proved for general orthogonal bases.
If the reproducing kernel Hilbert space $\mathcal{H}(\Omega,K)$ can be continuously embedded into $\mathscr{C}(\Omega)$, i.e. $\|g\|_{L^\infty(\Omega)}\le c\|g\|_K$ for all $g\in\mathcal{H}(\Omega,K)$, then $$\sum_{j=1}^n |\tilde{c}_j|\cdot |s_j(x)|\le c\sqrt{n}\,\|f\|_K$$ for every $x\in\Omega$, where $S=\{s_1,\ldots,s_n\}$ is a $K$-orthonormal basis of the reconstruction space $\mathcal{H}(X,K)$.
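For the Gaussian kernel we have $K(x,x)=1$, so $|g(x)|=|(g,K(\cdot,x))_K|\le\|g\|_K$ and the embedding constant is $c=1$. A quick sanity check of this bound (my own sketch, with the Newton basis built by Cholesky and $f$ taken to be its own interpolant, so that $\|f\|_K^2=c^\top A_{K,X} c$ is computable):

```python
import numpy as np

# Sketch: check sum_j |c~_j| |s_j(x)| <= c sqrt(n) ||f||_K with c = 1
# for an assumed Gaussian kernel (K(x,x) = 1) and the Newton basis.
def K(x, y, eps=3.0):
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

n = 8
x = np.linspace(0.0, 1.0, n)          # nodes X
fvals = np.sin(2.0 * np.pi * x)       # data; we take f = s*, its own interpolant
A = K(x, x)
L = np.linalg.cholesky(A)             # A = L L^T

coef = np.linalg.solve(A, fvals)      # kernel-basis coefficients of s*
norm_f = np.sqrt(coef @ A @ coef)     # ||s*||_K = ||f||_K for f = s*
c_tilde = np.linalg.solve(L, fvals)   # Newton-basis coefficients

xe = np.linspace(0.0, 1.0, 101)       # evaluation grid
Sx = np.linalg.solve(L, K(xe, x).T).T # Newton-basis values, shape (101, n)
lhs = np.abs(Sx) @ np.abs(c_tilde)    # sum_j |c~_j| |s_j(x)| at each grid point

print(lhs.max(), np.sqrt(n) * norm_f) # the bound holds with room to spare
```

Note also that for a $K$-orthonormal basis $\|\tilde{c}\|_2=\|s^*\|_K$, so the coefficient vector itself is perfectly controlled.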
I am not sure why, but this bound somehow makes more sense to me, even though the uniqueness of the interpolant bothers me here as well...
Since the interpolant is unique, I feel like the only meaningful quantities to study in terms of stability are norms of the coefficient vector $\tilde{c}$. I don't see how one can bound absolute values of the interpolant in a basis-dependent way, because the interpolant is unique and therefore shouldn't depend on the choice of basis...
Maybe someone here can help me out... Thank you very much in advance.