How to check the stability of a neural network?

Is there any way or theory to prove that a neural network is stable? For instance, when I use an NN to learn from a dataset, how can I prove that the NN's learning will converge and give an adequate result? Thanks in advance for your help!
1.7k views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 answers below.
In control theory, Lyapunov stability is used to show that a neural network's weights remain bounded, typically by means of robust weight-update laws. So yes, in a control system you can in fact show that a neural network is stable in the sense of Lyapunov. Usually you assume there exist "ideal" weights the neural network could take on, which are bounded although not necessarily known, and you also assume that when these ideal weights are used, there is a bounded difference between the actual function and the network's approximation of it.
This field has been around for years. I suggest you read any neural-network paper by Frank Lewis. Another resource is the book "Neural Network Control of Robot Manipulators and Nonlinear Systems", which Lewis co-authored. It details how to use neural networks to approximate the nonlinear functions that appear in control systems, with everything based on Lyapunov stability techniques.
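To make the "robust weight update" idea concrete, here is a minimal sketch of one standard such law, sigma-modification, for a single-layer approximator. Everything in it (the basis functions, the target function, the gains) is an illustrative assumption, not something from the answer above; the point is only that the leakage term keeps the weight norm bounded even though the approximation error never vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # assumed choice of basis: simple radial-basis features
    centers = np.linspace(-2, 2, 5)
    return np.exp(-(x - centers) ** 2)

f_true = lambda x: np.sin(x)       # "unknown" function being approximated
W = np.zeros(5)                    # adaptive weights
gamma, sigma, dt = 2.0, 0.1, 0.01  # adaptation gain, leakage gain, step size

for _ in range(20000):
    x = rng.uniform(-2, 2)
    e = f_true(x) - W @ phi(x)     # approximation error at this sample
    # sigma-modification: W_dot = gamma * (phi * e - sigma * W)
    # the -sigma*W leakage term prevents the weights from drifting unboundedly
    W += dt * gamma * (phi(x) * e - sigma * W)

print(np.linalg.norm(W))           # finite and bounded, by construction
```

The leakage term trades a small bias in the steady-state approximation for a guarantee that the weights cannot drift to infinity, which is exactly the kind of boundedness a Lyapunov argument certifies.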
There is no general way of knowing that. Most likely the cost function has many different local minima, so guaranteeing convergence is difficult, unless of course you are already close to one of them, in which case it is just a matter of using a small enough learning rate.
The typical argument authors use to show this is to take the neurons in a hidden layer and permute them. This leaves the cost function unchanged but moves you to a different point in parameter space, which implies that the optimization problem has several distinct minima.
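The permutation argument is easy to check numerically. This sketch (with a made-up one-hidden-layer network) swaps hidden units, i.e. permutes the rows of the first weight matrix and the matching columns of the second, and confirms the output, and hence any cost, is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # 3 inputs, 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # 1 output

def net(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

perm = np.array([2, 0, 3, 1])  # any permutation of the hidden units
x = rng.normal(size=3)

y1 = net(x, W1, b1, W2, b2)
# permute hidden units: rows of W1/b1, matching columns of W2
y2 = net(x, W1[perm], b1[perm], W2[:, perm], b2)

print(np.allclose(y1, y2))  # True: same function, different parameter point
```

Since the permuted parameters are a different point in parameter space with identical cost everywhere, every minimum comes in multiple symmetric copies.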
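The point about small learning rates near a minimum can also be illustrated on a toy nonconvex cost. This one-dimensional example (invented for the demo) has two minima; plain gradient descent started close to either one converges to it, and which one you reach depends entirely on the initialization:

```python
import numpy as np

cost = lambda w: (w**2 - 1) ** 2   # nonconvex: minima at w = -1 and w = +1
grad = lambda w: 4 * w * (w**2 - 1)

def descend(w, lr=0.01, steps=5000):
    # plain gradient descent with a small, fixed learning rate
    for _ in range(steps):
        w -= lr * grad(w)
    return w

print(descend(0.8))    # initialized near +1, converges to +1
print(descend(-1.3))   # initialized near -1, converges to -1
```

With a learning rate small relative to the local curvature, convergence to the nearby minimum is guaranteed; nothing in the method steers it toward any particular minimum globally.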