Approximation of positive definite functions by neural networks


Bochner's theorem shows that probability measures $\mu$ on $\mathbb{R}^n$ are linked with (continuous) positive definite functions via the Fourier transform:

$f(k) = \int_{\mathbb{R}^n} e^{-2 \pi i k x} \,d\mu(x)$
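As a quick numerical illustration of Bochner's theorem (a sketch, assuming $\mu = \mathcal{N}(0,1)$, for which the convention above gives $f(k) = e^{-2\pi^2 k^2}$), the Gram matrix $[f(k_j - k_l)]$ built from any sample points must be positive semidefinite:

```python
import numpy as np

# Positive definite function obtained from mu = N(0, 1) via
# f(k) = ∫ e^{-2πi k x} dμ(x) = exp(-2 π² k²).
def f(k):
    return np.exp(-2.0 * np.pi**2 * k**2)

rng = np.random.default_rng(0)
k = rng.uniform(-1.0, 1.0, size=50)       # arbitrary sample points
gram = f(k[:, None] - k[None, :])         # Gram matrix f(k_j - k_l)
eigs = np.linalg.eigvalsh(gram)
print(eigs.min())                         # nonnegative up to round-off
```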

Currently, probability measures can be very well approximated by various generative models based on neural networks.

What about positive definite functions? Is there any natural approximation tool based on neural networks?

Upd: I can answer that question myself, but the following answer does not satisfy me.

Let us assume that $f(k)$ is real-valued, i.e. $\mu$ is symmetric with respect to the origin. Then it is natural to approximate $f$ by a single-layer neural network with cosine activation function:

$f(k) \approx \sum_{i=1}^N \theta_i \cos(\omega_i^T k)$

where $\theta_i\geq 0$.
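This construction can be realized by Monte Carlo (a random-feature sketch): if the $\omega_i$ are sampled from $\mu$ and $\theta_i = 1/N$, the sum converges to $f$ as $N$ grows. For illustration I assume $\mu = \mathcal{N}(0, I)$ in $\mathbb{R}^2$ and the convention $f(k) = \mathbb{E}\,e^{i\omega^T k} = e^{-\|k\|^2/2}$ (no $2\pi$ factor, unlike the formula at the top):

```python
import numpy as np

# Random-feature approximation of a real positive definite function:
# f(k) ≈ sum_i theta_i cos(omega_i^T k), with omega_i ~ mu, theta_i = 1/N.
rng = np.random.default_rng(1)
n, N = 2, 20000
omega = rng.standard_normal((N, n))   # omega_i ~ mu = N(0, I), symmetric
theta = np.full(N, 1.0 / N)           # nonnegative weights

def f_approx(k):
    return theta @ np.cos(omega @ k)

k = np.array([0.5, -0.3])
exact = np.exp(-0.5 * k @ k)          # exact f(k) for the Gaussian mu
print(abs(f_approx(k) - exact))       # Monte Carlo error, O(1/sqrt(N))
```

Note that $f_{\text{approx}}(0) = \sum_i \theta_i = 1$ exactly, matching $f(0) = \mu(\mathbb{R}^n) = 1$.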

My question is: are there any more powerful (multi-layer) network architectures for this kind of function?