Can we use numerical methods to get a symbolic/analytical solution of a PDE?


I know the basic differences between numerical and analytical (symbolic) solutions to differential equations and complicated algebraic equations. Everyone knows that numerical solutions can often be obtained even when an analytical solution cannot. However, analytical solutions, when they can be found, are preferred in many engineering and scientific applications, as they often give physical insight into the mathematical description of a system that is not easy to get from a numerical solution. One example where this would be useful is seeing which inputs/parameters influence the output the most in a model of a multi-input multi-output (MIMO) system. (See When are analytical solutions preferred over numerical solutions in practical problems?)

If an analytical solution were required, would it be possible to use numerical techniques to help obtain one? Is this even remotely possible, or is what I am asking meaningless? Keep in mind that my knowledge of math is not advanced by any means, so am I missing something important here?


2 Answers


Became a bit too large for a comment.

  1. We can use numerical methods to find a representation from which we can guess the function, provided we have some way to test the candidates we find. For example, we can compute a numerical approximation on a grid and then try a regression against previously known functions, or against power or Fourier series expansions.
  2. We also have the matrix representation theory of groups, which allows us to represent functions as matrices. If a function is stored as a vector of coefficients (for a polynomial, say, or for sines and cosines), the differential operator becomes a matrix acting on those coefficients. A differential equation of this kind can then be solved exactly to get a polynomial or a truncated power series.
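The first idea can be sketched as follows (my own illustration, assuming NumPy; the test equation $y' = y$, the Euler solver, and the candidate set are made up for demonstration): solve numerically on a grid, then least-squares fit the samples against candidate closed forms to "guess" which analytic function matches.

```python
# Sketch: numerically solve y' = y, y(0) = 1, then regress the samples
# against candidate analytic functions to guess the closed form.
import numpy as np

# Numerical solution by forward Euler.
n, h = 1000, 0.001
xs = np.linspace(0.0, (n - 1) * h, n)
ys = np.empty(n)
ys[0] = 1.0
for i in range(1, n):
    ys[i] = ys[i - 1] + h * ys[i - 1]

# Candidate analytic forms to test against the numerical data.
candidates = {
    "exp(x)": np.exp(xs),
    "1 + x": 1.0 + xs,
    "cos(x)": np.cos(xs),
}
for name, fx in candidates.items():
    # Best scalar multiple a*fx in the least-squares sense.
    a = np.dot(fx, ys) / np.dot(fx, fx)
    resid = np.linalg.norm(ys - a * fx) / np.linalg.norm(ys)
    print(f"{name}: coefficient {a:.4f}, relative residual {resid:.2e}")
```

Here `exp(x)` comes out with coefficient close to 1 and by far the smallest residual, which is the hint that the exact solution is $e^x$; one would then verify that guess by substituting it back into the equation.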

EDIT, Example:

For the set of functions $\{\sin(kx),\cos(kx)\}$, the derivatives are $\{k\cos(kx),-k\sin(kx)\}$, so differentiation can be expressed as:

$${\bf Dc} = \left[\begin{array}{cc}0&-k\\k&0\end{array}\right]{\bf c},$$ where $\bf c$ is the coefficient vector with respect to $[\sin(kx),\cos(kx)]^T$ (so $\sin(kx)\mapsto k\cos(kx)$ and $\cos(kx)\mapsto -k\sin(kx)$).

An example:

$$\frac{\partial^3 f}{\partial x^3} + f = g \Leftrightarrow ({\bf D}^3 + {\bf I}){\bf v} = {\bf d}$$

$$\cases{{\bf v }\text{ is the vector containing coefficients for }f \\{\bf d}\text{ is the vector containing coefficients for }g\\ {\bf D}\text{ is the differentiation matrix } \\ {\bf I} \text{ is the unit matrix}}$$
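A minimal numerical sketch of this setup (my own illustration, assuming NumPy; the choice $k = 2$ and the right-hand side $g(x) = 3\sin(kx) + \cos(kx)$ are made up): build $\bf D$ on coefficients of $[\sin(kx),\cos(kx)]^T$ and solve $({\bf D}^3 + {\bf I}){\bf v} = {\bf d}$ as an ordinary linear system.

```python
# Sketch: represent d/dx as a matrix on coefficient vectors and solve
# f''' + f = g exactly in the {sin(kx), cos(kx)} basis.
import numpy as np

k = 2.0
# Derivative matrix acting on [a, b] for f = a*sin(kx) + b*cos(kx):
# sin(kx) -> k*cos(kx), cos(kx) -> -k*sin(kx).
D = np.array([[0.0, -k],
              [k, 0.0]])
I = np.eye(2)

# Right-hand side g(x) = 3*sin(kx) + cos(kx)  ->  d = [3, 1].
d = np.array([3.0, 1.0])
v = np.linalg.solve(np.linalg.matrix_power(D, 3) + I, d)

# Check on a grid that f''' + f really reproduces g.
xs = np.linspace(0.0, 2.0 * np.pi, 200)
f = v[0] * np.sin(k * xs) + v[1] * np.cos(k * xs)
w = np.linalg.matrix_power(D, 3) @ v          # coefficients of f'''
f3 = w[0] * np.sin(k * xs) + w[1] * np.cos(k * xs)
g = 3.0 * np.sin(k * xs) + np.cos(k * xs)
print(np.max(np.abs(f3 + f - g)))             # ~0 up to rounding
```

The solution $\bf v$ is a pair of exact (up to floating point) coefficients, i.e. a symbolic answer of the form $v_1\sin(kx) + v_2\cos(kx)$ recovered by a finite linear solve rather than by discretizing $x$.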


I'm not aware of any way to infer an exact analytic solution from a numerical solution. If an approximate analytic solution is sufficient, there are many ways, especially if one allows a piecewise solution (i.e. a collection of analytic functions, each restricted to representing a distinct portion of the whole solution, assembled like a quilt). This is where you start encountering terms like 'spline', 'NURBS', 'Chebyshev polynomial', 'response-surface model', etc. [The devising of approximation techniques is a pretty big research area.]
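As a small sketch of the approximate-analytic route (my own example, assuming NumPy; the sampled function here merely stands in for numerically computed solution data): fit a single Chebyshev polynomial to gridded samples, yielding a closed-form approximant.

```python
# Sketch: turn gridded "numerical solution" samples into an approximate
# analytic form via a least-squares Chebyshev polynomial fit.
import numpy as np

xs = np.linspace(0.0, 2.0, 200)
ys = np.exp(-xs) * np.sin(3.0 * xs)   # stand-in for numerical solution data

# Degree-10 Chebyshev fit: one explicit polynomial formula for the data.
cheb = np.polynomial.Chebyshev.fit(xs, ys, deg=10)
err = np.max(np.abs(cheb(xs) - ys))
print(f"max fit error on the grid: {err:.2e}")
```

The result is not the exact solution, but it is an explicit formula one can differentiate, integrate, and inspect, which is often what "analytic insight" requires in practice; splines do the same job piecewise.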