I am trying to choose a direction for study and work within the following topic: neural networks as universal approximators for solutions of partial differential equations.
At present, neural networks are typically used to obtain numerical approximations that are more accurate and stable than those produced by classical difference schemes, and there are many libraries for training such models and predicting solutions. My problem is different: starting from already computed numerical approximations, I want to use neural networks to obtain analytical solutions.
The question is: is this possible in principle, and are there practical problems described in scientific articles in precisely this context (not refining numerical solutions with neural networks, but approximating already obtained solutions to produce an analytical expression)? That is, perhaps there is some small problem of real practical significance whose solution can be presented in this form?
In fact, my question is quite targeted: I am not asking for an overview of the available methods for solving partial differential equations with neural networks, but about this inverse problem.
I am mainly interested in the mathematical formulation of the problem and the mathematical side of its solution. If possible, please include links to similar topics and relevant literature in your answer.
Thank you very much in advance!
Solving Partial Differential Equations (PDEs) with Neural Networks is often referred to as Physics-Informed Neural Networks (PINNs) or Operator Learning. Conceptually, solving a PDE can be expressed as: $$ \textbf{Solution} = \textbf{Equation}\big(\textbf{Initial Condition}\big), $$ where the bracket can also encompass boundary conditions or any other conditions relevant to learning.
PINNs focus on learning either the solution of the equation (referred to as the "forward problem") or a component of the governing equation, such as a hidden parameter (mass, viscosity, etc.) (referred to as the "backward problem"), assuming that the remaining components (initial/boundary conditions and the solution or equation) are known.
That is, in this framework, one could either train a neural network with prior knowledge of the governing equation and a dataset of initial conditions and corresponding solutions (the forward problem), or infer an unknown governing equation from the data (the backward problem). Your inquiry appears to pertain more closely to the backward problem.
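To make the forward problem concrete, here is a minimal NumPy-only sketch with a made-up toy equation, $u''(x) = -\sin(x)$ on $[0, \pi]$ with $u(0) = u(\pi) = 0$ (exact solution $u = \sin x$). A tiny network is trained so that the PDE residual vanishes at collocation points; real PINN codes use automatic differentiation (e.g. PyTorch or JAX) rather than the finite differences and numerical gradients used here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 32)   # collocation points
h = 1e-3                          # step for finite-difference derivatives

# Tiny one-hidden-layer tanh network N(x; p), parameters flattened as [w, b, a].
n_hidden = 4
params = rng.normal(scale=0.5, size=3 * n_hidden)

def net(p, xs):
    w, b, a = p[:n_hidden], p[n_hidden:2 * n_hidden], p[2 * n_hidden:]
    return np.tanh(np.outer(xs, w) + b) @ a

def trial(p, xs):
    # Trial solution that satisfies the boundary conditions by construction.
    return xs * (np.pi - xs) * net(p, xs)

def residual_loss(p):
    # PDE residual u'' + sin(x) = 0, with u'' via central differences.
    u_xx = (trial(p, x + h) - 2 * trial(p, x) + trial(p, x - h)) / h**2
    return np.mean((u_xx + np.sin(x)) ** 2)

# Plain gradient descent with a numerical gradient (fine for ~12 parameters).
lr, eps = 1e-3, 1e-5
loss0 = residual_loss(params)
for _ in range(1000):
    grad = np.array([
        (residual_loss(params + eps * np.eye(len(params))[i]) -
         residual_loss(params - eps * np.eye(len(params))[i])) / (2 * eps)
        for i in range(len(params))
    ])
    params -= lr * grad

print(loss0, residual_loss(params))  # the residual shrinks during training
```

The backward problem has the same structure, except that unknown equation parameters (e.g. a viscosity coefficient inside the residual) are optimized alongside the network weights.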
Deriving an analytical equation in its purest form is generally not feasible, to the best of my understanding. While an analytical formulation of a neural network does exist (one can trace the computational process and write it out systematically), such a formulation may not be the primary objective: it is not simple, and it may not convey valuable insight due to its inherent complexity.
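To illustrate why, here is a sketch (with made-up weights) of writing out the closed form of a tiny tanh network using SymPy. Even two hidden units already give a nested transcendental expression that does not simplify, and realistic networks have thousands of such terms composed over many layers:

```python
import sympy as sp

x = sp.symbols('x')
# Hypothetical weights of a 1-input, 2-hidden-unit, 1-output tanh network.
w1, b1 = [1.3, -0.7], [0.2, 0.5]
w2, b2 = [0.9, -1.1], 0.1

# The network's exact "analytical formula": a weighted sum of tanh terms.
expr = sum(w2[i] * sp.tanh(w1[i] * x + b1[i]) for i in range(2)) + b2
print(expr)
print(sp.count_ops(expr))  # operation count grows quickly with width/depth
```

Stacking a second layer means feeding `expr` back through `tanh` again, so the expression size compounds with depth, which is why this "analytical form" conveys little insight.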
There are, however, lines of work that aim to approximate equations through symbolic representation.
You might want to see:
https://www.nature.com/articles/s41467-021-26434-1
https://arxiv.org/abs/2207.00529