The utility of kernel methods and RKHSs in machine learning


In the machine learning setting, kernel methods are widely used to obtain closed-form solutions to optimization problems by restricting the solution to an RKHS. What puzzles me is whether this works in practice, that is, whether the assumption that the solution lies in an RKHS is valid. One of the most classic uses of kernel methods is the support vector machine (SVM), and I have recently worked with a sampling algorithm, Stein Variational Gradient Descent (SVGD, Liu et al. 2016), which also uses an RKHS to find the optimal velocity field. Since the functions in an RKHS seem quite limited, I worry that kernel methods sacrifice too much expressiveness in exchange for computational efficiency. Can anyone explain whether my doubts about the RKHS assumption are reasonable or not? Specific examples would be appreciated.
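For concreteness, here is a minimal 1-D SVGD sketch showing how the RKHS enters: the update direction is the optimal velocity field in the RKHS of an RBF kernel. This is a simplified illustration (standard-normal target, fixed bandwidth, constant step size), not the adaptive-bandwidth version from Liu et al.:

```python
import numpy as np

# Minimal SVGD sketch. Assumptions (not from Liu et al. 2016's full recipe):
# 1-D standard-normal target, fixed RBF bandwidth h, constant step size eps.

def rbf_kernel(x, y, h=1.0):
    """RBF kernel matrix k[i, j] = k(x_i, y_j) and its gradient in x."""
    diff = x[:, None] - y[None, :]
    k = np.exp(-diff**2 / (2 * h**2))
    grad_x = -diff / h**2 * k          # d k(x_i, y_j) / d x_i
    return k, grad_x

def svgd_step(particles, grad_log_p, eps=0.2, h=1.0):
    """One SVGD update: move particles along the optimal RKHS velocity field
    phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) grad log p(x_j) + d/dx_j k(x_j, x_i) ]."""
    k, grad_k = rbf_kernel(particles, particles, h)
    n = len(particles)
    # grad_k[j, i] = d k(x_j, x_i) / d x_j, so summing over axis 0 gives
    # the repulsive term; k is symmetric, so k @ grad_log_p gives attraction.
    phi = (k @ grad_log_p(particles) + grad_k.sum(axis=0)) / n
    return particles + eps * phi

# Target: standard normal, so grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, size=50)
for _ in range(1000):
    x = svgd_step(x, lambda z: -z)
print(np.mean(x), np.std(x))   # should end up near 0 and 1 respectively
```

Even though each particle's velocity is constrained to the RKHS, the empirical distribution of the particles is not, which is part of why SVGD works despite the restriction.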


The paper Universal Kernels investigates conditions on the features of a continuous kernel under which it can approximate an arbitrary continuous target function uniformly on any compact subset of the input space. A number of concrete examples are given.
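To see the universality property in action, here is a small sketch (bandwidth, ridge parameter, and grid sizes are illustrative choices, not from the paper): kernel ridge regression with a Gaussian kernel, which is universal, approximating f(x) = |x| on the compact set [-1, 1]. The target is continuous but not smooth, so it need not itself lie in the Gaussian RKHS, yet the RKHS approximant gets uniformly close:

```python
import numpy as np

# Sketch: a universal (Gaussian) kernel approximating a continuous function
# uniformly on a compact set. All hyperparameters are illustrative.

def gaussian_kernel(x, y, h=0.1):
    return np.exp(-(x[:, None] - y[None, :])**2 / (2 * h**2))

f = np.abs                                   # continuous, not in the RKHS
x_train = np.linspace(-1, 1, 40)             # samples on the compact set
K = gaussian_kernel(x_train, x_train)
# Small ridge term for numerical stability of the near-singular kernel matrix.
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x_train)), f(x_train))

x_test = np.linspace(-1, 1, 1001)
f_hat = gaussian_kernel(x_test, x_train) @ alpha   # RKHS approximant
sup_err = np.max(np.abs(f_hat - f(x_test)))
print(sup_err)   # small sup-norm error over [-1, 1]
```

Shrinking the bandwidth and adding more sample points drives the sup-norm error down further, which is exactly the uniform-approximation guarantee the paper formalizes.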

You can find related arXiv papers by searching for "\(\mathcal{H}_K\) universal kernel" on SearchOnMath, for instance.