I think neural networks can be viewed as merely a large parameterized family of functions. There are enough parameters that neural networks are "universal approximators", and yet neural networks are simple enough that they can be trained efficiently.
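To make the "parameterized family of functions" view concrete, here is a minimal sketch of a one-hidden-layer network $f(x;\theta)$; the layer sizes and the tanh nonlinearity are illustrative choices of mine, not anything specified above.

```python
import numpy as np

def init_params(rng, in_dim=2, hidden=16, out_dim=1):
    # theta: one particular member of the family is picked by these arrays
    return {
        "W1": rng.normal(0, 1.0 / np.sqrt(in_dim), (in_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0, 1.0 / np.sqrt(hidden), (hidden, out_dim)),
        "b2": np.zeros(out_dim),
    }

def f(x, theta):
    # f(x; theta): the same fixed functional form, varying only in theta
    h = np.tanh(x @ theta["W1"] + theta["b1"])  # hidden layer
    return h @ theta["W2"] + theta["b2"]        # linear output layer

rng = np.random.default_rng(0)
theta = init_params(rng)
x = rng.normal(size=(5, 2))
y = f(x, theta)  # shape (5, 1)
```

Training then amounts to searching over $\theta$ within this fixed family.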
Are there any other parameterized families of functions, besides the neural network family, which have similar expressiveness and simplicity and so could be useful in machine learning as a substitute or alternative for neural networks?
An important family of distributions in machine learning is the exponential family. A pdf or pmf $p(x|\theta)$ for $x=(x_1,\ldots,x_m) \in \mathcal{X}^{m}$ and $\theta \in \Theta \subseteq \mathbb{R}^{d}$ is in the exponential family if it can be written in the form: \begin{equation} p(x|\theta) = \frac{1}{Z(\theta)}h(x)\exp[\theta^{T}\phi(x)] = h(x)\exp[\theta^{T}\phi(x) - A(\theta)] \end{equation} where $\theta$ is the vector of natural parameters, $\phi(x)$ is a vector of sufficient statistics, $h(x)$ is a base measure, $Z(\theta)$ is the partition function, and $A(\theta) = \log Z(\theta)$ is the log-partition function. The exponential family is important for many reasons:
Examples of exponential-family distributions include the Bernoulli, Dirichlet, Gaussian, and exponential distributions. Non-examples include the uniform distribution (when its support depends on the parameters) and Student's t distribution.
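As a worked instance of the definition (my own illustration), the Bernoulli pmf $p^x(1-p)^{1-x}$ fits the exponential-family form with $h(x)=1$, $\phi(x)=x$, natural parameter $\theta=\log\frac{p}{1-p}$, and log-partition $A(\theta)=\log(1+e^{\theta})$. The check below verifies the two forms agree numerically:

```python
import math

def bernoulli_pmf(x, p):
    # standard parameterization
    return p**x * (1 - p)**(1 - x)

def bernoulli_exp_family(x, p):
    # exponential-family form: h(x) * exp(theta * phi(x) - A(theta)), h(x) = 1
    theta = math.log(p / (1 - p))       # natural parameter (log-odds)
    A = math.log(1 + math.exp(theta))   # log-partition function
    return math.exp(theta * x - A)

for p in (0.2, 0.5, 0.9):
    for x in (0, 1):
        assert abs(bernoulli_pmf(x, p) - bernoulli_exp_family(x, p)) < 1e-12
```

The same exercise works for the Gaussian and exponential distributions, with $\phi(x)$ collecting the relevant statistics (e.g. $(x, x^2)$ for the Gaussian).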