Need a method to fit a specific function to measured data


I have a function

$$ f(x, a, b, c, d)=\frac{a+bx}{c+dx} $$

where, for a set of different values of $x$, I have measurement results corresponding to $f(x, a, b, c, d)$. I would like to fit the parameters $a, b, c,$ and $d$.

There seem to be ways of solving this problem in MATLAB, GNU Octave, Python, and Julia, for example, but I would like to implement the code myself.
This is to understand how such fitting is done and also to be able to do it without having to run any other software. If anyone can suggest a method for doing this in code (e.g., C, Python, or BASIC) or point out where such a method is discussed, I would be very thankful.

The function is to be used for finding correction coefficients of a microwave vector network analyzer. I have the analyzer, but not the math...

Regards,
Staffan

Best answer:

We can "linearize" this fitting problem. The parameters are only determined up to a common scale factor, so, assuming $a \neq 0$ and writing $b = \lambda a$, $c = \mu a$, $d=\eta a$, we have

$$ y = \frac{1+\lambda x}{\mu + \eta x} $$

so the deviation can be computed as

$$ \delta_k = \mu y_k + \eta y_k x_k -\lambda x_k-1 $$

and thus the problem is

$$ \min_{\lambda,\mu,\eta}\sum_k \delta_k^2 = \min_{\lambda,\mu,\eta}\sum_k\left(\mu y_k + \eta y_k x_k -\lambda x_k-1\right)^2 $$

so the minimum is attained at the solution for the linear system

$$ \cases{ -2\sum_k\left(\mu y_k + \eta y_k x_k -\lambda x_k-1\right)x_k = 0\\ 2\sum_k\left(\mu y_k + \eta y_k x_k -\lambda x_k-1\right)y_k = 0\\ 2\sum_k\left(\mu y_k + \eta y_k x_k -\lambda x_k-1\right)x_ky_k = 0\\ } $$

or

$$ \left(\matrix{\sum_k x_k y_k&\sum_k x_k^2y_k&-\sum_k x_k^2\\ \sum_k y_k^2&\sum_kx_ky_k^2&-\sum_kx_ky_k\\ \sum_k x_ky_k^2&\sum_kx_k^2y_k^2&-\sum_kx_k^2y_k}\right)\left(\matrix{\mu \\ \eta\\ \lambda}\right)=\left(\matrix{\sum_k x_k \\ \sum_k y_k\\ \sum_k x_k y_k}\right) $$
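Since the question asks for plain code, here is a minimal sketch in Python of this exact system (the function name `fit_rational` is my own, not from the answer). It builds the sums above and solves the 3×3 system with Cramer's rule, so no linear-algebra library is required:

```python
def fit_rational(xs, ys):
    """Fit y = (1 + lam*x) / (mu + eta*x) via the linearized
    least-squares system above. Returns (lam, mu, eta)."""
    pairs = list(zip(xs, ys))
    Sx    = sum(x           for x, y in pairs)
    Sy    = sum(y           for x, y in pairs)
    Sxy   = sum(x * y       for x, y in pairs)
    Sx2   = sum(x * x       for x, y in pairs)
    Sy2   = sum(y * y       for x, y in pairs)
    Sx2y  = sum(x * x * y   for x, y in pairs)
    Sxy2  = sum(x * y * y   for x, y in pairs)
    Sx2y2 = sum(x * x * y * y for x, y in pairs)

    # M * (mu, eta, lam)^T = B -- the matrix written above
    M = [[Sxy,  Sx2y,  -Sx2 ],
         [Sy2,  Sxy2,  -Sxy ],
         [Sxy2, Sx2y2, -Sx2y]]
    B = [Sx, Sy, Sxy]

    def det3(A):
        # Determinant of a 3x3 matrix by cofactor expansion
        return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
              - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
              + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

    # Cramer's rule: replace each column of M by B in turn
    D = det3(M)
    sol = []
    for j in range(3):
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = B[i]
        sol.append(det3(Mj) / D)
    mu, eta, lam = sol
    return lam, mu, eta
```

Given $\lambda, \mu, \eta$, one may set $a = 1$ and recover $b = \lambda$, $c = \mu$, $d = \eta$, since the original parameters only enter through these ratios.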

Attached is a Mathematica script showing the calculation steps:

(** Data generation **)
a = 1;
b = 2;
c = 3;
d = 4;
xmax = 3;
f[x_] := 10 (a + b x)/(c + d x)
gr1 = Plot[f[x], {x, 0, xmax}, PlotRange -> All, PlotStyle -> Dashed];
data = Table[{x, (f[x] + 0.05 RandomReal[{-1, 1}])}, {x, 0, xmax, 0.1}];
xk = Transpose[data][[1]];
yk = Transpose[data][[2]];
n = Length[data];

(** Linear system solving **)

M = {
    {Sum[xk[[k]] yk[[k]], {k, 1, n}], Sum[xk[[k]]^2 yk[[k]], {k, 1, n}], -Sum[xk[[k]]^2, {k, 1, n}]},      
    {Sum[yk[[k]]^2, {k, 1, n}], Sum[xk[[k]] yk[[k]]^2, {k, 1, n}], -Sum[xk[[k]] yk[[k]], {k, 1, n}]}, 
    {Sum[xk[[k]] yk[[k]]^2, {k, 1, n}], Sum[xk[[k]]^2 yk[[k]]^2, {k, 1, n}], -Sum[xk[[k]]^2 yk[[k]], {k, 1, n}]}};
B = {Sum[xk[[k]], {k, 1, n}], Sum[yk[[k]], {k, 1, n}], Sum[xk[[k]] yk[[k]], {k, 1, n}]};
sol = LinearSolve[M, B];

(** Final results **)

g[x_] := (1 + lambda x)/(mu + eta x) /. Thread[{mu, eta, lambda} -> sol]
gr2 = ListPlot[data];
gr3 = Plot[g[x], {x, 0, xmax}, PlotStyle -> Red];
Show[gr1, gr2, gr3]

(Plot showing the generated data points together with the true function, dashed, and the fitted curve, red.)