Taylor expansion of a Neural Network


Let $f(\mathbf{x})$ represent a trained neural network with ReLU activation functions and input $\mathbf{x} \in \mathbb{R}^d$ ($\mathbf{x}$ could be, for example, an image with dimensionality $d$). Let's assume that if $f(\mathbf{x}) > 0$, the network recognizes the input, and if $f(\mathbf{x}) < 0$, it does not.

The series expansion of the network $f(\mathbf{x})$ can be written as:

$$f(\mathbf{x}) = f(\mathbf{x_0})+(\mathbf{x}-\mathbf{x_0})^\top \nabla f(\mathbf{x_0}) + \frac{1}{2}(\mathbf{x}-\mathbf{x_0})^\top \nabla^2 f(\mathbf{x_0})\, (\mathbf{x}-\mathbf{x_0}) + \dots$$
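One concrete property of this expansion for ReLU networks: the network is piecewise linear, so within a region where the ReLU on/off pattern does not change, the first-order term is exact and the Hessian term vanishes. A minimal sketch in NumPy illustrating this (the toy weights, biases, and inputs are illustrative, not from the question):

```python
import numpy as np

# Hypothetical tiny ReLU network f: R^2 -> R (one hidden layer of width 3)
W1 = np.array([[1.0, -1.0], [2.0, 0.5], [-1.0, 1.0]])
b1 = np.array([0.5, -1.0, 0.2])
w2 = np.array([1.0, -2.0, 0.5])

def f(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

# Gradient at x0: ReLU's derivative is the on/off indicator of each unit,
# so nabla f(x0) = W1^T (w2 * [pre-activation > 0]).
x0 = np.array([1.0, 0.5])
active = (W1 @ x0 + b1 > 0).astype(float)
g = W1.T @ (w2 * active)

# A nearby point with the same activation pattern: the first-order
# Taylor expansion f(x0) + (x - x0).g reproduces f(x) exactly there,
# since f is linear on that region.
x = x0 + np.array([0.01, -0.02])
taylor1 = f(x0) + (x - x0) @ g
print(f(x), taylor1)  # both -1.49: the linear term captures f exactly
```

Away from the kinks of the ReLUs, all higher-order terms of the expansion are identically zero; the expansion only fails to be globally valid because the linear region, and hence $\nabla f(\mathbf{x_0})$, changes as $\mathbf{x}$ crosses an activation boundary.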

I would like to know what the terms of the Taylor series (especially the first three) say about the network's behavior, and what $\mathbf{x_0}$ would represent in such a case. What, for example, would be a good choice for $\mathbf{x_0}$?