Factorio is a game where you construct a factory. Pipes are used to transport various liquids. The developers have had long-standing issues with the simulation of fluid in pipes. For instance, flow in some directions is preferred over others.

They detail their efforts in this blog post: https://factorio.com/blog/post/fff-260
I gave some thought to how I would solve this sort of system in a way that's fast and fit for gameplay purposes (so modelling Navier–Stokes, or even momentum, isn't necessary). I figured it would be best described as a system of linear equations, where the volume of each pipe segment in the level is its own variable, and its rate of change of volume depends on the volume of the segment and its neighbours at the previous timestep. This ought to solve any directionality problems.
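As a minimal sketch of the idea (all names and the coefficient `k` are illustrative assumptions, not Factorio's actual model): each segment holds a volume, and each tick the flow across a connection is proportional to the volume difference between its two endpoints. Because the rule is symmetric, no direction is preferred.

```python
import numpy as np

def step(volumes: np.ndarray, links: list[tuple[int, int]], k: float) -> np.ndarray:
    """One explicit-Euler tick of dV_i/dt = k * sum over neighbours j of (V_j - V_i)."""
    dv = np.zeros_like(volumes)
    for i, j in links:
        flow = k * (volumes[j] - volumes[i])  # positive means fluid moves j -> i
        dv[i] += flow
        dv[j] -= flow
    return volumes + dv

# Three segments in a row: 0 - 1 - 2, with all the fluid starting in segment 0.
v = np.array([10.0, 0.0, 0.0])
links = [(0, 1), (1, 2)]
for _ in range(200):
    v = step(v, links, k=0.1)
# Total volume is conserved, and the volumes relax toward a uniform steady state.
```

Each tick costs time proportional to the number of connections, and the same update rule works for any network topology, since the links list is just the connection graph.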
Then I remembered that this sort of problem has analytical solutions. Without outside perturbation, I believe it is a non-homogeneous linear system where the sources and sinks contribute at a constant rate $k$. I may be wrong here, but from a few searches and scribblings, the solutions would be a sum of $e^{-\lambda t}$ and $t e^{-\lambda t}$ type terms.
My question is:
In a simulation such as this, is there any benefit to using an analytical solution instead of a numerical one?
Further information:
- There is no upper limit to the number of pipes, indeed some megabases can have more than 10,000.
- Flow usually operates at a sort of steady state, but pipes (and other sources and sinks) can be added and destroyed at any time.
- Fluids need to be updated multiple times per second.
My hunch is that despite the existence of an analytical solution, there is still some very heavy computation involved (i.e. finding the eigenvalues of a 10,000 × 10,000 sparse matrix), which would presumably have to be repeated whenever the system changed. However, I suspect that this type of large linear system pops up all the time in real-life physics/engineering, and that there is a multitude of well-understood approaches to solving it.
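To make that hunch concrete, here is a sketch under the assumed model $dV/dt = -LV$, where $L$ is the graph Laplacian of the pipe network (unit conductances; all names are illustrative). The analytic route pays an $O(n^3)$ eigendecomposition up front, after which the state at any time $t$ is cheap to evaluate; the numerical route just takes many cheap steps.

```python
import numpy as np

def laplacian(edges: list[tuple[int, int]], n: int) -> np.ndarray:
    """Graph Laplacian of the pipe network: L = D - A."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

n = 4
L = laplacian([(0, 1), (1, 2), (2, 3)], n)
lam, Q = np.linalg.eigh(L)  # the expensive O(n^3) step; must be redone if pipes change

def analytic(v0: np.ndarray, t: float) -> np.ndarray:
    """V(t) = Q exp(-diag(lam) t) Q^T V(0), evaluated directly at any t."""
    return Q @ (np.exp(-lam * t) * (Q.T @ v0))

v0 = np.array([4.0, 0.0, 0.0, 0.0])
v_exact = analytic(v0, 5.0)

# Cross-check against many small explicit-Euler steps of dV/dt = -L V:
v = v0.copy()
dt = 1e-3
for _ in range(5000):
    v = v - dt * (L @ v)
# v should closely match v_exact
```

The per-tick numerical update is a single (sparse) matrix–vector product, so its cost scales with the number of pipe connections rather than $n^3$, which is why re-running it every tick can beat re-diagonalising after every change to the network.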
Also, I am no mathematician, so please correct anything here that is obvious nonsense.
It comes down to the computational complexity of the simulation versus that of the analytic solution. In a computer, there is no "$e^x$ chip" that can compute exponentials of any size in one step. Assuming that $x$ here is a double, calculating $e^x$ essentially boils down to evaluating a series approximation until some stopping condition is met (production math libraries first reduce the argument to a small range, but work is still proportional to the accuracy demanded). Bigger numbers require more terms in the series to reach the same accuracy, and we represent that growth of computational time with respect to the size of the input as $O(f(x))$ for some function $f$.
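To make the "more work for bigger arguments" intuition concrete, here is an illustrative Taylor-series evaluation of $e^x$ with a relative-tolerance stopping condition. (Real math libraries use argument reduction plus a fixed polynomial instead; this is only a sketch of the point above.)

```python
import math

def exp_series(x: float, tol: float = 1e-15) -> tuple[float, int]:
    """Sum the Taylor series of e^x until the next term is negligible.

    Returns the approximation and the number of terms used.
    """
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= x / n  # next term x^n / n! from the previous one
        total += term
    return total, n

v1, n1 = exp_series(1.0)
v2, n2 = exp_series(10.0)
# The larger argument needs more terms to hit the same relative tolerance.
```

For a fixed-width type like a double, the argument range is bounded, which is why library `exp` runs in effectively constant time; the complexity question bites when precision or argument size is allowed to grow.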
Similarly, numerical methods of approximation take a certain amount of time per step, also represented by some $O(g(x))$. To decide which approach would be better, you compare the time complexity $O(f(x))$ against $O(g(x))$.
This is an oversimplification, as there are multiple algorithms both for computing exponentials and for approximating the simulation's values at each step, each with different time and space complexity. Space complexity also matters on memory-constrained systems, among other factors.
In short, you compare these two approaches (and their various algorithms) the same way you compare any other algorithms in computational science.