Could anybody define the term "numerical experiment" and provide some examples?
Does solving a problem with AI/machine learning methods such as Artificial Neural Networks (ANN), or with a heuristic algorithm such as a Genetic Algorithm (GA), count as a numerical experiment?
Thanks
In the scientific sense, as I understand it, you make an experiment for a numerical method if you have a test problem with a known exact solution and an a-priori, theory-based expectation of how the errors should behave.
Then you carry out the numerical method on the test problem and examine whether the actual errors behave according to the a-priori expectations. If that is not the case, you can investigate the cause. Usually this only counts if it is carried out over a larger selection of test problems (and related parameters).
Numerical solution of ODE
Let's start with a topic where these ideas are quite clear-cut: numerical methods with a fixed step size (Runge-Kutta and others) applied to scalar or low-dimensional ODE.
There are ODE classes with known exact solutions, like linear ODE with constant coefficients (which also lead the theory) or any other named class of first- or second-order ODE. For other classes of ODE one can ensure the knowledge of at least one exact solution via the method of manufactured solutions. This gives test problems with exact solutions across a wide range of ODE "structures".
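As a minimal sketch of a manufactured solution (the target u(t) = sin(t) and the ODE form y' = -y + g(t) are my own illustrative choices, not from any standard test set): pick the solution first, then derive the forcing term so that the solution is exact by construction, and finally check a solver against it.

```python
import math

# Method of manufactured solutions (sketch): pick the target solution
# u(t) = sin(t), then force the ODE y' = -y + g(t) to have u as its
# exact solution by choosing g(t) = u'(t) + u(t).
def u(t):               # manufactured exact solution
    return math.sin(t)

def g(t):               # forcing term derived from u
    return math.cos(t) + math.sin(t)

def f(t, y):            # right-hand side of the manufactured ODE
    return -y + g(t)

def euler(f, t0, y0, h, n):
    # explicit Euler with n fixed steps of size h
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

h, n = 1e-3, 1000                       # integrate to t = 1
err = abs(euler(f, 0.0, u(0.0), h, n) - u(1.0))
print(err)                              # small, O(h) for Euler
```

The point is that the exact solution is known for free, so the measured `err` is a genuine observable of the experiment rather than a comparison against another numerical result.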
The local method error of a single step can be computed to any desired precision. The propagation of the local errors into the global error can be simulated, at least in its leading terms, via an auxiliary ODE system (see https://scicomp.stackexchange.com/a/36629/6839).
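One simple testable a-priori expectation of this kind is the convergence order: for a method of order p, halving the step size should reduce the global error by roughly 2^p. A sketch (explicit Euler on y' = y, an assumed test problem, not the auxiliary-ODE simulation from the linked answer):

```python
import math

# Empirically estimate the convergence order p of explicit Euler on
# y' = y, y(0) = 1, exact solution exp(t): halving h should roughly
# halve the global error for a first-order method.
def euler_final(h, T=1.0):
    n = round(T / h)
    y = 1.0
    for _ in range(n):
        y += h * y
    return y

exact = math.e
e1 = abs(euler_final(0.01) - exact)
e2 = abs(euler_final(0.005) - exact)
p = math.log2(e1 / e2)          # observed order, close to 1
print(p)
```

If the measured p deviates systematically from the theoretical order, that discrepancy is exactly the kind of observation that starts the "research what the cause is" phase.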
The influence of floating point errors can be estimated, highlighting critical places where computations in higher precision can give a measurable improvement (see https://scicomp.stackexchange.com/a/37427/6839).
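The classic textbook illustration of such a critical place (my choice here, not the example from the linked answer) is long summation: compensated (Kahan) summation mimics carrying extra precision at the accumulation step and gives a measurable improvement over naive summation.

```python
# Floating-point sketch: naive summation of many inexact terms loses
# precision; Kahan (compensated) summation recovers most of it,
# mimicking the effect of higher precision at the critical spot.
def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    s, c = 0.0, 0.0             # running sum and error compensation
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y         # recovers low-order bits lost in t
        s = t
    return s

xs = [0.1] * 10**6              # 0.1 is not exactly representable
err_naive = abs(naive_sum(xs) - 1e5)
err_kahan = abs(kahan_sum(xs) - 1e5)
print(err_naive, err_kahan)     # compensated error is far smaller
```

Comparing the two error sizes against their theoretical bounds (roughly n·eps for naive, O(eps) for compensated summation) is again a small numerical experiment.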
Switching to PDE and the method of lines introduces a choice of what one would like to consider the exact solution: the solution of the PDE, or the exact solution of the space-discretized ODE system. The model for the method error needs to be adapted accordingly.
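A minimal method-of-lines sketch, assuming the heat equation u_t = u_xx on [0, pi] with homogeneous Dirichlet boundaries as the test problem (my choice of example): measuring against the PDE-exact solution mixes the space-discretization error and the time-stepping error, which is exactly the ambiguity mentioned above.

```python
import math

# Method of lines (sketch): semi-discretize u_t = u_xx on [0, pi] with
# u(0) = u(pi) = 0 via central differences, then step the resulting
# ODE system with explicit Euler. For the initial condition sin(x) the
# PDE-exact solution is exp(-t) * sin(x).
n = 50                                  # interior grid points
dx = math.pi / (n + 1)
xs = [(i + 1) * dx for i in range(n)]
u = [math.sin(x) for x in xs]           # initial condition

dt = 0.2 * dx * dx                      # stable for explicit Euler
t, T = 0.0, 0.5
while t < T:
    lap = [(u[i-1] if i > 0 else 0.0) - 2*u[i] + (u[i+1] if i < n-1 else 0.0)
           for i in range(n)]           # discrete Laplacian, zero boundaries
    u = [u[i] + dt * lap[i] / dx**2 for i in range(n)]
    t += dt

# error against the PDE-exact solution: contains both the O(dx^2)
# space error and the time-stepping error
err = max(abs(u[i] - math.exp(-t) * math.sin(xs[i])) for i in range(n))
print(err)
```

Measuring instead against the exact solution of the semi-discrete ODE system would isolate the time-stepping error alone; the error model has to state which of the two is meant.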
The picture becomes less clear once heuristics enter in the form of adaptive step sizes and the corresponding step-size controllers. For instance, only in the early embedded methods such as the Fehlberg methods was the error estimate really an error estimate; in more modern methods it is more of a guide to the error size, and at times it can be quantitatively very wrong.
The challenge is then to find useful observables, like the error density or the error per unit step, step rejection rates, etc., to develop testable hypotheses about them, and to measure them.
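A toy adaptive integrator makes these observables concrete. The sketch below (an embedded Euler/Heun pair with a standard step-size controller, my own minimal construction rather than any production method) tracks the step-rejection rate as one such measurable quantity:

```python
import math

# Adaptive stepping (sketch): an embedded Euler/Heun pair on y' = -y.
# The difference between the order-1 and order-2 results serves as the
# per-step error estimate; rejections are counted as an observable.
def heun_euler_adaptive(f, t0, y0, T, tol):
    t, y, h = t0, y0, 0.1
    accepted = rejected = 0
    while t < T:
        h = min(h, T - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                 # order 1 (Euler)
        y_high = y + h * (k1 + k2) / 2     # order 2 (Heun)
        est = abs(y_high - y_low)          # local error estimate
        if est <= tol:
            t, y = t + h, y_high           # accept, advance with Heun
            accepted += 1
        else:
            rejected += 1
        # standard controller: est ~ h^2, so rescale by (tol/est)^(1/2)
        h *= min(2.0, max(0.2, 0.9 * (tol / max(est, 1e-16)) ** 0.5))
    return y, accepted, rejected

y, acc, rej = heun_euler_adaptive(lambda t, y: -y, 0.0, 1.0, 5.0, 1e-6)
rate = rej / (acc + rej)
print(y, math.exp(-5.0), rate)
```

A hypothesis one could now test is how the rejection rate and the achieved global error respond as `tol` is varied over several orders of magnitude, and whether the estimate stays quantitatively honest.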
Machine learning
One can construct partial analogues of the above observations. For simple neural networks (few layers, few nodes) and standardized test patterns like the checkerboard or clearly separated clusters, one can calculate, or at least reliably compute, the exact solution. One can then test a specific learning algorithm against it.
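A sketch of the clearly-separated-clusters case, reduced to its simplest instance (a single perceptron in one dimension, my own minimal example): the exact answer is known up front, namely that the threshold x = 0 separates the clusters, so perfect classification is a testable prediction for the learning algorithm.

```python
import random

# Known-answer test (sketch): two clearly separated 1-D clusters are
# separable by the threshold x = 0, so a trained perceptron must
# classify both clusters perfectly.
random.seed(0)
data = [(random.uniform(-2.0, -1.0), 0) for _ in range(50)] + \
       [(random.uniform(1.0, 2.0), 1) for _ in range(50)]

w, b = 0.0, 0.0                       # perceptron parameters
for _ in range(20):                   # a few epochs suffice here
    for x, label in data:
        pred = 1 if w * x + b > 0 else 0
        w += (label - pred) * x       # classic perceptron update
        b += (label - pred)

errors = sum(1 for x, label in data
             if (1 if w * x + b > 0 else 0) != label)
print(errors)   # 0: the algorithm recovers the known separation
```

Here the perceptron convergence theorem even supplies the a-priori expectation (convergence in finitely many updates on separable data), so the experiment checks theory against measurement exactly as in the ODE case.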
As you advance from that stage to more complex networks, you will again lose control of the details. So again the task is to search for useful observables whose qualitative, or better quantitative, behavior one can predict, to form hypotheses about them, and then to actually measure them.
Per the scientific method, one would prefer that the behavior of complex objects can be inferred from the behavior of their simpler parts, but such isolation might be difficult. As I recently learned, this approach has so far been a dead end: any structural approach was, and is, significantly surpassed by using larger general networks and more computing power.