Universal Approximation Theorems for Sparse Networks


It is well known, as shown in Hornik's papers, that single-hidden-layer feed-forward neural networks are dense in the space of continuous functions (uniformly on compact sets) and in $L^p$ spaces (for suitable measures and $p$).
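For concreteness, the classical statement (in the spirit of Hornik's results, stated here informally) says that for a compact set $K \subset \mathbb{R}^d$ and a suitable non-polynomial activation $\sigma$, the span of ridge functions is dense:

```latex
% Classical universal approximation (single hidden layer), stated informally:
% for every f in C(K) and every epsilon > 0, there exist N, c_j, b_j in R
% and a_j in R^d such that
\sup_{x \in K} \left| f(x) - \sum_{j=1}^{N} c_j \,\sigma\!\left(a_j^{\top} x + b_j\right) \right| < \varepsilon .
```

The question below asks whether an analogous density result survives when the weight vectors $a_j$ (or the network's weight matrices) are constrained to be sparse.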

However, are there any proven universal approximation results for sparse neural network architectures, i.e., networks in which most of the weights are constrained to be zero?