I have data from a simulation of a certain function $F(x)$ for a discrete set of values $\{x_i\}$.
I know that
$$F(x) = A \ f(x) + B \ g(x)$$
where $A,B$ are (real, positive) constants and $f,g$ are unknown functions.
I also have the data for $f$ and $g$ for the same set $\{x_i\}$ of values.
Usually, when fitting a set of points, I know the functional form and want to find the best values of the parameters. In my case, I only have a set of points for $f$ and $g$, and the functional form is unknown.
How can I find the best values of $A$ and $B$?
You know all the values $F_i = F(x_i)$, $f_i = f(x_i)$ and $g_i = g(x_i)$ in the model $F_i = Af_i + Bg_i$, so we can find estimates for $A$ and $B$ by least-squares fitting. That is, we try to find the values of $A,B$ that minimize the sum of squared errors
$$S =\sum_i (F_i - A f_i - Bg_i)^2$$
The relevant equations, and the way to derive them, can be found on the web or in any decent textbook, but I'll add a derivation below for completeness. $S$ is minimized when $\frac{\partial S}{\partial A} = 0$ and $\frac{\partial S}{\partial B} = 0$. This gives us the equations
$$-2\sum_i(F_i - Af_i - Bg_i)f_i = 0~~~\text{and}~~~-2\sum_i(F_i - Af_i - Bg_i)g_i = 0$$
Expanding the sums, this gives
$$\sum F_i f_i = A\sum f_i^2 + B\sum f_ig_i~~~\text{and}~~~\sum F_ig_i = A\sum f_ig_i + B\sum g_i^2$$
which can be written in matrix form as $$\pmatrix{\sum f_i^2 & \sum f_i g_i \\\sum f_i g_i & \sum g_i^2} \pmatrix{A\\B} = \pmatrix{\sum F_i f_i\\\sum F_i g_i}$$
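In practice you can hand this $2\times 2$ system straight to a linear solver. Here is a minimal NumPy sketch; the sample arrays and the true values $A=2$, $B=3$ are made up purely for illustration, so replace `F`, `f`, `g` with your own simulated data:

```python
import numpy as np

# Synthetic stand-in data -- replace F, f, g with your simulated arrays.
x = np.linspace(0.0, 1.0, 50)
f = np.sin(x)            # sampled f(x_i)
g = np.exp(-x)           # sampled g(x_i)
F = 2.0 * f + 3.0 * g    # sampled F(x_i); here the true A = 2, B = 3

# Build and solve the normal equations  M [A, B]^T = rhs
M = np.array([[f @ f, f @ g],
              [f @ g, g @ g]])
rhs = np.array([F @ f, F @ g])
A, B = np.linalg.solve(M, rhs)
print(A, B)  # should recover A and B up to floating-point error
```

This generalizes directly to more than two basis functions: just add a row and column per function.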
The matrix form is not really needed here, but it's quite convenient when we have more than $2$ variables. The solution can be found by applying your favorite method and reads
$$A = \frac{(\sum F_i g_i)(\sum f_i g_i) - (\sum F_i f_i)(\sum g_i^2)}{(\sum f_ig_i)^2-(\sum f_i^2)(\sum g_i^2)}$$ $$B = \frac{(\sum F_i f_i)(\sum f_i g_i) - (\sum F_i g_i)(\sum f_i^2)}{(\sum f_ig_i)^2-(\sum f_i^2)(\sum g_i^2)}$$
All the sums above extend over all your data points, so if you have $n$ $x_i$-values $\{x_1,x_2,\ldots,x_n\}$ then the sums extend over $i=1,2,\ldots,n$. Another way to write the solution, which is convenient for a numerical implementation, is to introduce the vectors $\vec{F} = (F_1,F_2,\ldots,F_n)$ and similarly for $f$ and $g$. We can then write the solution using only vector operations like the dot product as
$$A = \frac{(\vec{F}\cdot \vec{g})(\vec{f}\cdot \vec{g}) - (\vec{F}\cdot\vec{f})(\vec{g}\cdot \vec{g})}{(\vec{f}\cdot\vec{g})^2-(\vec{f}\cdot\vec{f})(\vec{g}\cdot\vec{g})}$$ $$B = \frac{(\vec{F}\cdot \vec{f})(\vec{f}\cdot \vec{g}) - (\vec{F}\cdot\vec{g})(\vec{f}\cdot \vec{f})}{(\vec{f}\cdot\vec{g})^2-(\vec{f}\cdot\vec{f})(\vec{g}\cdot\vec{g})}$$
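The closed-form expressions translate line by line into code. A small NumPy sketch (the function name `fit_AB` is mine, and the sample data in the usage below is invented just to check it):

```python
import numpy as np

def fit_AB(F, f, g):
    """Closed-form least-squares estimates of A and B in F = A*f + B*g,
    where F, f, g are 1-D NumPy arrays sampled at the same x_i."""
    denom = (f @ g) ** 2 - (f @ f) * (g @ g)
    A = ((F @ g) * (f @ g) - (F @ f) * (g @ g)) / denom
    B = ((F @ f) * (f @ g) - (F @ g) * (f @ f)) / denom
    return A, B
```

For example, with `f = x**2`, `g = np.cos(x)` and `F = 1.5*f + 0.5*g`, the call `fit_AB(F, f, g)` recovers $A = 1.5$ and $B = 0.5$ up to rounding. Note that the denominator vanishes when $\vec{f}$ and $\vec{g}$ are proportional (by Cauchy-Schwarz), in which case $A$ and $B$ cannot be separated.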