Numerical Function Fitting


I have a program that numerically integrates a function of several parameters, and I would like to fit it to some data. If I had the function in analytic form I could of course minimize my fit criterion (say, chi-square) directly. For the 3 parameters I care about, I could simply try a grid of values for each parameter, but that could take forever: a single integration takes at least 1.2 s, and trying even 10 values per parameter means $10^3$ evaluations. More importantly, a grid search isn't smart! Is there a robust way to look for minima when the function can only be evaluated numerically?

Edit: I really liked and accepted @RossMillikan's answer, but felt I should expand on this a bit, since the question was poorly written and the problem and response may prove useful to others.

The issue I was having was two-fold: I was hung up on not having a "mathematical" formula for a fitting routine to evaluate, and the function I had constructed took too long to be run hundreds of times over.

For the first: how a function produces its results doesn't matter. Whether your fit function performs a numerical integration, approximates a derivative, or does any computation more involved than, say, evaluating a polynomial, it is just as viable as the trivial case. What matters is that it takes inputs and returns outputs; in mathematics, and (for the most part) in programming, that's what a function is.
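To make that concrete, here is a small sketch using SciPy (the model, data, and parameter value are made up for illustration): a model defined only through numerical integration is handed to a fitting routine exactly like a closed-form one would be.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Model defined only through numerical integration -- no closed form needed.
def model(x, a):
    # Integral of exp(-a*t^2) from 0 to x, evaluated numerically.
    val, _ = quad(lambda t: np.exp(-a * t**2), 0, x)
    return val

# Synthetic "data" generated with a known parameter value, a = 2.0.
xs = np.linspace(0.1, 3.0, 15)
data = np.array([model(x, 2.0) for x in xs])

# Chi-square criterion: the minimizer only ever calls model() for outputs.
def chi_square(a):
    return sum((model(x, a) - d)**2 for x, d in zip(xs, data))

res = minimize_scalar(chi_square, bounds=(0.5, 5.0), method='bounded')
print(res.x)  # recovers a value close to 2.0
```

The minimizer never sees a formula; it only observes inputs and outputs of `chi_square`, which is exactly the point.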

For the second: if it takes long, well, then it takes long. That's not to say it can't be improved, though. Depending on the problem, a rewrite may let you crunch something cheaper. E.g. evaluating $\frac{\tan\alpha+\tan\beta}{1-\tan\alpha\tan\beta}$ doesn't take long on its own, but if you need to run it $10^9$ times, that can take an appreciable amount of time; the equivalent expression $\tan(\alpha+\beta)$ might take an order of magnitude less to run that many times. Approximations can help too. Say you have some function $ax+(bx)^{234}$ where $5<x<10$ and $b=0.1$. For those values you can probably drop the second term at no cost: it's an expensive computation whose result is effectively zero anyway.
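A quick numerical sanity check of both claims (a small sketch; the particular angle values, $x=7$, and $a=1$ are arbitrary choices not fixed by the text):

```python
import math

alpha, beta = 0.3, 0.4

# Identity check: tan(a+b) == (tan a + tan b) / (1 - tan a * tan b),
# but the left side is one tan() call instead of three.
lhs = math.tan(alpha + beta)
rhs = (math.tan(alpha) + math.tan(beta)) / (1 - math.tan(alpha) * math.tan(beta))
print(abs(lhs - rhs))  # agreement to floating-point precision

# Dropping a negligible term: with b = 0.1 and x well inside (5, 10),
# (b*x)^234 is astronomically small compared to a*x.
a, b, x = 1.0, 0.1, 7.0   # a is unspecified in the text; 1.0 assumed here
full = a * x + (b * x) ** 234
approx = a * x
print(full == approx)  # the dropped term (0.7**234 ~ 1e-37) vanishes in floats
```

The second `print` shows the approximation is not just close but bit-identical here, since the dropped term is far below double-precision resolution of `a*x`.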


Accepted answer

Any numerical analysis book will have a section on multidimensional minimization, and there are a number of methods. The fact that you don't have a functional form is not important: these methods simply call your routine to evaluate the function (and the gradient, if you can supply it) at various points, and they have clever ways of choosing which points to evaluate. Sections 10.4 through 10.7 of Numerical Recipes have routines in C; other books will have other favorites.
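A minimal illustration of that interface (using SciPy rather than the Numerical Recipes C routines; the objective here is just the standard Rosenbrock test function, not the asker's integral): the minimizer only calls the black-box function, plus an optional gradient.

```python
import numpy as np
from scipy.optimize import minimize

# Black-box objective: the minimizer never sees a formula, it only calls this.
def f(p):
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2   # Rosenbrock test function

# Optional gradient; supplying it lets gradient-based methods converge faster.
def grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

res = minimize(f, x0=[-1.2, 1.0], jac=grad, method='BFGS')
print(res.x)  # converges to the minimum at (1, 1)
```

If the gradient is unavailable (as with a numerically integrated objective), omit `jac=` and the routine will estimate it by finite differences, or use a derivative-free method.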

Answer

Have you heard of the Nelder–Mead simplex method for function minimization? Give it a try; it seems like exactly what you are looking for. Here is a Python implementation, but you can also find it in C++, Fortran, and other languages if you care to Google around.
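For instance, SciPy exposes Nelder–Mead as a method of `scipy.optimize.minimize`, so a slow, simulation-backed objective can be plugged in directly (a sketch; the simple quadratic below is a stand-in for an expensive function, and the tolerances are arbitrary):

```python
from scipy.optimize import minimize

# Stand-in for an expensive, non-analytic objective (e.g. one that
# performs a numerical integration per call). Minimum at (1, -2, 0.5).
def objective(p):
    a, b, c = p
    return (a - 1.0)**2 + (b + 2.0)**2 + (c - 0.5)**2

# Nelder-Mead needs no derivatives: it only compares function values
# at the vertices of a moving simplex.
res = minimize(objective, x0=[0.0, 0.0, 0.0], method='Nelder-Mead',
               options={'xatol': 1e-6, 'fatol': 1e-6})
print(res.x)  # approaches (1, -2, 0.5)
```

Because it never needs gradients, Nelder–Mead is a common default when each function evaluation is a black-box computation like the numerical integration in the question.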