How to build a mathematical formula

Mathematical formulas and graphs appear together in theses and papers, often to support some claim or theorem. The graphs are typically generated by running some test or experiment using those formulas.

My question is: how are those formulas built? Is there a scientific procedure for constructing a mathematical formula for an experiment?

Let's say I claim that (a+b)/2 gives a good result for an experiment. Another person shows, using a graph, that (a+root(b))/2 gives a better result than mine. How does he know that root(b) will give a better result? Why not 2*b or 2*root(b)? Has he checked 2*b and 2*root(b) too?

Best answer:

Even with the edits, it's a little difficult to know how deep or complete an answer to give, because this is a very open-ended question. It's also more of a question about science than mathematics, but I'll give my two cents anyway.

First, it's best if the model originates from first principles, with the experiment used only for verification. At the very least, one would want to use some known details about the structure of the phenomenon being modeled in order to constrain the types of allowable mathematical models. Is there a natural periodicity to the phenomenon? Should we require or expect it to be "continuous" in some sense? How continuous? Would it be more helpful to allow "jumps" in the model, even if not fully accurate? Without constraints like these, there are simply too many mathematical models that would be equally compatible with the data you collect (even if you collect a lot of it). If you really only have a few pieces of data and you still want to choose between models, the only criteria you really have to go on are simplicity and computability.
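To make that concrete, here is a small sketch (the measurements are made up for illustration) of two structurally different three-parameter models that both pass exactly through the same three data points, so the data alone cannot distinguish between them:

```python
import numpy as np

# Three hypothetical measurements (x, y) -- invented for illustration.
x = np.array([1.0, 4.0, 9.0])
y = np.array([2.0, 3.5, 5.0])

# Model 1: quadratic polynomial y = c2*x^2 + c1*x + c0 (3 parameters).
coeffs = np.polyfit(x, y, 2)
resid_poly = y - np.polyval(coeffs, x)

# Model 2: y = a + b*sqrt(x) + c*x (also 3 parameters).
A = np.column_stack([np.ones_like(x), np.sqrt(x), x])
params, *_ = np.linalg.lstsq(A, y, rcond=None)
resid_sqrt = y - A @ params

# Both families interpolate the three points exactly (residuals ~ 0),
# yet they make very different predictions outside the sampled range.
print(np.max(np.abs(resid_poly)), np.max(np.abs(resid_sqrt)))
```

Both residuals come out at essentially zero, which is exactly the problem: with only a few data points, many incompatible models fit equally well, and only outside knowledge about the phenomenon can break the tie.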

I'm worried that I'm not quite answering your question, though, because there are reasons to select $2b$ instead of $2\sqrt{b}$ in some cases. With $2b$ the growth is linear: doubling $b$ should double the result, and similarly for tripling it. Presumably you could see that from, for example, a graph. Other simple functions have recognizable shapes and features as well, and we can learn to piece them together to produce other shapes and features. However, without any basis in the fundamentals of the problem, it would be quite a logical leap to extrapolate predictions from the fitted data.
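That "doubling" diagnostic can be sketched in a few lines of Python; the two candidate functions here are just the examples from the question, not anything canonical:

```python
import math

def ratio_on_doubling(f, b):
    """How much f's output grows when its input doubles."""
    return f(2 * b) / f(b)

linear = lambda b: 2 * b               # y = 2b
sqrt_law = lambda b: 2 * math.sqrt(b)  # y = 2*sqrt(b)

print(ratio_on_doubling(linear, 5.0))    # 2.0 -> doubling the input doubles the output
print(ratio_on_doubling(sqrt_law, 5.0))  # ~1.414 (sqrt(2)) -> square-root growth
```

Computing this ratio at several input values from experimental data is one crude way to tell a linear law from a square-root law before committing to a functional form.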

Something else worth mentioning is that there are out-of-the-box methodologies for determining the "best" model. For deterministic models this would be something like Fourier analysis, polynomial fitting, or regression analysis, where the choice of method (and the choices within that method) should be informed by the known properties. For stochastic phenomena the Box-Jenkins method is quite popular, and one would use a heuristic measure like the Akaike information criterion to avoid "over-determining" the system: since with enough free parameters I can fit any data set perfectly, how do I determine the right number of parameters to use? These represent "classic" or "established" methodologies for approaching a general modelling problem; they are certainly not silver bullets that work everywhere, and there are likely better methods.
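As a rough illustration of that trade-off, here is a sketch using NumPy's `polyfit` and a textbook least-squares form of the Akaike information criterion to pick a polynomial degree; the data-generating law and noise level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy measurements generated from a quadratic law.
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + 0.2 * x**2 + rng.normal(0, 1.0, x.size)

def aic(y, y_hat, k):
    """Least-squares form of the Akaike information criterion with k parameters."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Higher-degree polynomials always reduce the residual, but the AIC
# penalizes the extra parameters, discouraging overfitting.
scores = {}
for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = aic(y, np.polyval(coeffs, x), degree + 1)

best = min(scores, key=scores.get)
print(best)  # typically 2 for data generated from a quadratic
```

The point is not this particular criterion; it is that any such score trades goodness of fit against model complexity, which is exactly the question of "why not check every formula" raised above.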