I have a function that I'd like to model with an exponential, and I need to determine the constants of that exponential. I know I could fit it by trial and error in R or another language, but I'd like to learn an analytic solution.
I figured that minimizing mean squared error would be the way to go about this, so I have:
$$\underset{r,k}{\operatorname{argmin}}\sum_{t=0}^T (s(t)-\hat{s}(t\mid r,k))^2$$
where $s(t)$ is the function I'm trying to model and $\hat{s}(t\mid r,k) = (1+r)^{t+k}$.
The only constraint I could come up with was $r>0$. I suppose I could also assume $k>0$ for now.
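For concreteness, here is the objective written out in code (a quick Python sketch; the function name and the made-up sample data are mine):

```python
import numpy as np

def sse(params, t, s):
    """Sum of squared errors between observed s(t) and the model (1+r)^(t+k)."""
    r, k = params
    s_hat = (1.0 + r) ** (t + k)
    return np.sum((s - s_hat) ** 2)

# Toy data generated from the model itself (r = 0.05, k = 2),
# so the objective is exactly zero at the true parameters.
t = np.arange(10.0)
s = (1.0 + 0.05) ** (t + 2.0)

print(sse([0.05, 2.0], t, s))  # 0.0
```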
I learned about the Karush-Kuhn-Tucker (KKT) conditions as an extension of Lagrange multipliers (which I already know), but I wasn't able to solve the problem with them.
Am I even going about this the right way? If I am, how can I solve this problem?
Thanks in advance!
Your setup is fine. Problems like this do not (usually) have an analytic solution: you have a two-dimensional non-linear minimization problem. Many numerical routines available in standard libraries can solve it, and they are discussed in any numerical analysis text. At heart they are informed trial and error, where the "informed" part comes from using past trials to build up information about the error surface.
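As one example of such a routine (just one of many, and the starting point, bounds, and synthetic data below are my choices): SciPy's `minimize` with the L-BFGS-B method handles the simple bound constraints $r>0$ and $k>0$ directly, with no KKT machinery needed on your part.

```python
import numpy as np
from scipy.optimize import minimize

def sse(params, t, s):
    """Sum of squared errors for the model s_hat(t) = (1+r)^(t+k)."""
    r, k = params
    return np.sum((s - (1.0 + r) ** (t + k)) ** 2)

# Synthetic data from the model with r = 0.05, k = 2.
t = np.arange(20.0)
s = (1.0 + 0.05) ** (t + 2.0)

# L-BFGS-B supports box bounds, which is all the constraints here require.
res = minimize(sse, x0=[0.1, 1.0], args=(t, s),
               method="L-BFGS-B", bounds=[(1e-9, None), (0.0, None)])
r_hat, k_hat = res.x
print(r_hat, k_hat)  # should recover roughly 0.05 and 2
```

Since the data here are noiseless, the minimizer can drive the error essentially to zero; on real data you would instead stop at the best achievable fit.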