There are many guides covering current best practices for using neural networks on classification tasks. However, these guides don't always carry over to function approximation (regression).
What are some of the current best practices for network design and training for function approximation?
There's a good book by Kevin Murphy on ML techniques: Machine Learning: A Probabilistic Perspective.
There's a trade-off between computational complexity, training time, and accuracy for neural networks and deep neural networks. They are good at approximating functions over high-dimensional input spaces, and the achievable accuracy depends on the class of function you need to approximate.

If you can sample the function at discrete points, a Gaussian Process (GP) model gives an approximation with uncertainty bounds. The GP posterior mean is a linear combination of kernel evaluations at the sample points, and a GP corresponds to a single-hidden-layer neural network in the limit of infinitely many hidden nodes; with a suitable (universal) kernel it can approximate any continuous function over a bounded input range.

Otherwise, if you know which features of the function are likely to be most useful for approximating it, you can choose those as inputs to a lower-dimensional network.

See also Reproducing Kernel Hilbert Space (RKHS) methods, which approximate a function as a linear combination of kernel basis functions centered on the sample points, via the Representer Theorem.
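As a concrete illustration of the GP approach above, here is a minimal sketch using scikit-learn's `GaussianProcessRegressor`. It fits a GP to noiseless samples of `sin(x)` (a stand-in target; the true function and the sampling scheme are assumptions for the example) and returns both a mean prediction and a per-point uncertainty estimate:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sample the target function at discrete points (sin is a stand-in target)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X).ravel()

# GP with an RBF kernel; the fitted posterior mean is a linear
# combination of kernel evaluations at the training points,
# i.e. exactly the Representer Theorem form
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X, y)

# Predict on a grid inside the sampled input range,
# with uncertainty bounds (one standard deviation)
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
```

Note that the uncertainty `std` shrinks toward zero near the training points and grows between and beyond them, which is the main practical advantage of a GP over a plain network fit when samples are expensive.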