After my semester at Umich, my mathematics professor gave me a stack of problems to keep my head in the game over the summer. One of the questions that threw me off was finding the normal equations that minimize the approximation error. I must admit he never taught normal equations, or least squares for that matter. I have a basic understanding of least squares and normal equations; my issue is the theoretical side of applying the formula to variables instead of numbers. Can someone explain to me, in layman's terms, how I solve the normal equations for $$g^* = \left[g_1^*,g_2^*\right]$$ that minimize the approximation error in the least-squares sense,
$$g^* = \arg\min_g \left(\sum_{n=0}^N [f_n -F(x_n,g)]^2\right)$$ in the case of:
$$F(x,g) = g_1 e^{g_2x} $$
We want to find $g_1$ and $g_2$ that minimize
$$\sum_{n=0}^N \left[f_n-F(x_n,g)\right]^2=\sum_{n=0}^N \left[f_n-g_1\exp(g_2x_n)\right]^2.$$
To do so, I take the partial derivative with respect to $g_1$ and the partial derivative with respect to $g_2$, and set both equal to zero.
Dropping the common factor of $-2$, I obtain
$$\sum_{n=0}^N \left[f_n-g_1\exp(g_2x_n)\right] \exp(g_2x_n)=0$$
and
$$\sum_{n=0}^N \left[f_n-g_1\exp(g_2x_n)\right]g_1x_n \exp(g_2x_n)=0$$
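As a sanity check on these two partial derivatives, here is a short sketch (with illustrative data and parameter values of my own choosing, not from the problem) that compares the analytic formulas against central finite differences of the sum of squares:

```python
import math

# Illustrative data and a test point (g1, g2); any reasonable values work here
xs = [0.0, 0.5, 1.0, 1.5]
fs = [1.0, 2.0, 4.5, 9.0]
g1, g2 = 1.2, 1.3

def sse(g1, g2):
    # S(g) = sum_n [f_n - g1 e^{g2 x_n}]^2
    return sum((f - g1 * math.exp(g2 * x)) ** 2 for f, x in zip(fs, xs))

# Analytic partials; the factor -2 is what gets dropped when setting them to zero
dS_dg1 = -2 * sum((f - g1 * math.exp(g2 * x)) * math.exp(g2 * x)
                  for f, x in zip(fs, xs))
dS_dg2 = -2 * sum((f - g1 * math.exp(g2 * x)) * g1 * x * math.exp(g2 * x)
                  for f, x in zip(fs, xs))

# Central finite differences of S for comparison
h = 1e-6
fd_g1 = (sse(g1 + h, g2) - sse(g1 - h, g2)) / (2 * h)
fd_g2 = (sse(g1, g2 + h) - sse(g1, g2 - h)) / (2 * h)
```

If the two formulas are right, `dS_dg1` and `dS_dg2` should match `fd_g1` and `fd_g2` to several decimal places.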
In particular, from the first equation, we have
$$\sum_{n=0}^N f_n \exp(g_2x_n)=\sum_{n=0}^Ng_1\exp(2g_2x_n)$$
and $$g_1=\frac{\sum_{n=0}^N f_n \exp(g_2x_n)}{\sum_{n=0}^N\exp(2g_2x_n)}.$$
Similarly, assuming that $g_1 \neq 0$, from the second equation, we have
$$g_1=\frac{\sum_{n=0}^N f_nx_n \exp(g_2x_n)}{\sum_{n=0}^Nx_n\exp(2g_2x_n)}.$$
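Equating the two expressions for $g_1$ leaves a single nonlinear equation in $g_2$ alone, which generally has no closed form but is easy to solve numerically. A minimal sketch, using synthetic noiseless data generated from $g_1 = 1.5$, $g_2 = 0.8$ (my own illustrative choice) so the fit should recover those values, and plain bisection on the mismatch between the two expressions:

```python
import math

# Synthetic data from a known model: f_n = 1.5 * exp(0.8 * x_n), no noise
xs = [0.1 * k for k in range(21)]              # x_0, ..., x_N with N = 20
fs = [1.5 * math.exp(0.8 * x) for x in xs]

def g1_from_eq1(g2):
    # g1 = sum f_n e^{g2 x_n} / sum e^{2 g2 x_n}        (first normal equation)
    num = sum(f * math.exp(g2 * x) for f, x in zip(fs, xs))
    den = sum(math.exp(2 * g2 * x) for x in xs)
    return num / den

def g1_from_eq2(g2):
    # g1 = sum f_n x_n e^{g2 x_n} / sum x_n e^{2 g2 x_n} (second normal equation)
    num = sum(f * x * math.exp(g2 * x) for f, x in zip(fs, xs))
    den = sum(x * math.exp(2 * g2 * x) for x in xs)
    return num / den

def mismatch(g2):
    # the optimal g2 is where both expressions for g1 agree
    return g1_from_eq1(g2) - g1_from_eq2(g2)

# Bisection over a bracket [0.1, 2.0] where mismatch changes sign
lo, hi = 0.1, 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
g2_star = 0.5 * (lo + hi)
g1_star = g1_from_eq1(g2_star)
```

With real (noisy) data the same approach applies; only the bracket for the root search needs to be chosen from a rough look at the data.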