Forgive any lack of rigor.
If you have a countable orthonormal basis $B$ for a Hilbert space $H$, then any function $f \in H$ can be expressed as $$ f(t) = \sum\limits_{g \, \in \, B} \langle f, \, g \rangle \, g(t) $$ where $\langle f, g \rangle$ might be something like $$ \int_{t_1}^{t_2}f(t) \, g(t) \, \mathrm{d}t $$ (with a complex conjugate on $g$ in the complex case).
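As a concrete (hypothetical) check of this expansion, here is a numerical sketch using the orthonormal sine basis $g_n(t) = \sqrt{2/\pi}\,\sin(nt)$ on $[0, \pi]$ and the test function $f(t) = t(\pi - t)$; the function, grid size, and truncation level are all arbitrary choices for illustration:

```python
import numpy as np

# Grid on [0, pi] and a smooth test function vanishing at the endpoints
t = np.linspace(0.0, np.pi, 2001)
dt = t[1] - t[0]
f = t * (np.pi - t)

def inner(u, v):
    # Trapezoidal-rule approximation of the L^2 inner product <u, v>
    w = u * v
    return dt * (w.sum() - 0.5 * (w[0] + w[-1]))

# Partial sum of f over the orthonormal basis g_n(t) = sqrt(2/pi) sin(n t)
N = 50
recon = np.zeros_like(t)
for n in range(1, N + 1):
    g_n = np.sqrt(2.0 / np.pi) * np.sin(n * t)
    a_n = inner(f, g_n)        # coordinate a_n = <f, g_n>
    recon += a_n * g_n

max_err = np.max(np.abs(f - recon))
print(max_err)
```

The partial sums converge to $f$ in the $L^2$ sense (and here, since $f$ matches the boundary conditions, uniformly as well), so the maximum error is already tiny at $N = 50$.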
So essentially, writing $B = \{g_1, g_2, \dots\}$ and $a_n = \langle f, g_n \rangle$, the sequence $(a_1, a_2, \dots)$ is the coordinate vector of $f$ with respect to the basis $B$.
Now my question is: are integral transforms the continuous version of this? Can the kernel of an integral transform be viewed as a sort of basis? For example, suppose you have a family $B' = \{ K_u(t) : u \in (u_1, u_2) \}$ and a function $f'$. Can you use a similar expression to calculate the coordinate vector of $f'$ relative to $B'$, as with $f$? If so, what conditions does $B'$ have to satisfy?
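To make the question concrete, here is a hypothetical numerical sketch with the Fourier kernel $K_u(t) = e^{iut}/\sqrt{2\pi}$: the "coordinate function" $a(u) = \langle f, K_u \rangle$ is computed by a Riemann sum, and $f$ is then rebuilt as $\int a(u) K_u(t)\,\mathrm{d}u$ (the Gaussian test function and grid bounds are arbitrary choices):

```python
import numpy as np

# Grids for t and the "continuous index" u
t = np.linspace(-10.0, 10.0, 1001)
u = np.linspace(-10.0, 10.0, 1001)
dt = t[1] - t[0]
du = u[1] - u[0]

f = np.exp(-t**2 / 2)    # Gaussian test function, decays fast inside the grid

# a(u) = <f, K_u> = (1/sqrt(2 pi)) * integral f(t) exp(-i u t) dt
phase = np.exp(-1j * u[:, None] * t[None, :])
a = (f[None, :] * phase).sum(axis=1) * dt / np.sqrt(2 * np.pi)

# Reconstruction: f(t) ~ integral a(u) K_u(t) du
recon = (a[:, None] * np.conj(phase)).sum(axis=0) * du / np.sqrt(2 * np.pi)

max_err = np.max(np.abs(f - recon.real))
print(max_err)
```

With this normalization the transform of the Gaussian is again a Gaussian, and the reconstruction matches $f$ to high accuracy, which is exactly the continuous analogue of summing $a_n g_n$ over a countable basis.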
Here's an example of the countable case using sines and cosines as the basis: http://en.wikipedia.org/wiki/Fourier_series#Hilbert_space_interpretation
Thanks for helping.
No, this is not really related to the concept of a basis. The closest analogue of a basis that I know of is given by the singular value decomposition of compact operators.