If I have a set of known values, e.g.
X        Y
0.81300  4.9900
0.84500  3.6900
0.86400  3.0700
0.94000  1.5000
0.94300  1.4600
How would I make as accurate a curve as possible to fit all these points?
I know that the equation should be of the form
y=e^((x-c1)/c2)
or
x=c1+(c2 * ln(y))
So what I am really trying to do, I suppose, is find the constants C1 and C2.
So for the above C1 = 0.98291 and C2 = -0.10574.
The issue here is that I need to be able to compute C1 and C2 on the fly, in a program, where the X values of the above set can change; the Y values, however, will remain constant and pre-determined.
It has been suggested that I use Levenberg–Marquardt, and also that maybe I could do it with matrices and linear algebra. However, I do not pretend to know where to start with Levenberg–Marquardt (how it works, what it does, or anything), and as for the linear algebra I am extremely rusty.
Anyone who could explain this for me and how I might compute it would have my gratitude and appreciation.
Also, I should mention that the number of data points can vary, but that shouldn't matter, right? The more data points, the more accurate the fit. (Unless I am wrong; boy, am I out of my element.)
For matrices I was thinking of setting it up like this (again, I do not know if I am doing this wrong):
       A          X        B
| 1  ln(x1) |            | y1 |
| 1  ln(x2) |  | C1 |  = | y2 |
| 1  ln(x3) |  | C2 |    | y3 |
And then solving with something like X = (A^T * A)^-1 * (A^T * B).
Only I also don't know exactly what I am doing there.
(Also I do not know if my tags are correct.)
Let's assume that your data is of the form
$$x = c_1+c_2 \ln{y}$$
Then we may apply linear least-squares. If we have $N$ measurements, then we have
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} 1 & \ln{y_1} \\ 1 & \ln{y_2} \\ \vdots & \vdots \\ 1 & \ln{y_N} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$$
We may write in matrix form:
$$\mathbf{X} = \mathbf{A} \cdot \mathbf{C}$$
The least-squares solution for $\mathbf{C}$ is as follows:
$$\mathbf{C}_{LS} = \left ( \mathbf{A}^T \mathbf{A}\right)^{-1} \mathbf{A}^T \mathbf{X}$$
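As a sketch of how this looks in practice (assuming Python with NumPy is available), you can build $\mathbf{A}$ from your data and let `np.linalg.lstsq` solve the least-squares problem; this is numerically preferable to forming $(\mathbf{A}^T \mathbf{A})^{-1}$ explicitly:

```python
import numpy as np

# Sample data from the question: the x values may change at runtime,
# the y values are fixed and pre-determined.
x = np.array([0.81300, 0.84500, 0.86400, 0.94000, 0.94300])
y = np.array([4.9900, 3.6900, 3.0700, 1.5000, 1.4600])

# Design matrix A with rows [1, ln(y_n)], so that A @ C ≈ X
# in the least-squares sense for the model x = c1 + c2*ln(y).
A = np.column_stack([np.ones_like(y), np.log(y)])

# Solve the least-squares problem for C = [c1, c2].
C, *_ = np.linalg.lstsq(A, x, rcond=None)
c1, c2 = C
print(c1, c2)  # ≈ 0.98291 and -0.10574, matching the constants in the question
```

The same `A` and call work unchanged for any number of points $N \ge 2$, so the variable number of data points mentioned in the question is not a problem.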
You can also work out the least-squares coefficients explicitly:
$$c_1 = \frac{\sum_{n=1}^N x_n \sum_{k=1}^N \ln^2{y_k} - \sum_{n=1}^N x_n \ln{y_n} \sum_{k=1}^N \ln{y_k}}{N\sum_{k=1}^N \ln^2{y_k}-\left ( \sum_{k=1}^N \ln{y_k}\right)^2} $$ $$c_2 = \frac{N\sum_{n=1}^N x_n \ln{y_n} - \left(\sum_{n=1}^N x_n\right) \left( \sum_{k=1}^N \ln{y_k}\right)}{N\sum_{k=1}^N \ln^2{y_k}-\left ( \sum_{k=1}^N \ln{y_k}\right)^2} $$
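If you would rather avoid a linear-algebra library altogether, the explicit sums above translate directly into a few lines of plain Python (a sketch; the function name `fit_log` is just illustrative):

```python
import math

def fit_log(xs, ys):
    """Closed-form least-squares fit of x = c1 + c2*ln(y).

    A direct translation of the summation formulas above;
    requires at least two points.
    """
    n = len(xs)
    s_x = sum(xs)                                      # sum of x_n
    s_l = sum(math.log(y) for y in ys)                 # sum of ln(y_k)
    s_ll = sum(math.log(y) ** 2 for y in ys)           # sum of ln^2(y_k)
    s_xl = sum(x * math.log(y) for x, y in zip(xs, ys))  # sum of x_n*ln(y_n)
    d = n * s_ll - s_l ** 2                            # common denominator
    c1 = (s_x * s_ll - s_xl * s_l) / d
    c2 = (n * s_xl - s_x * s_l) / d
    return c1, c2

xs = [0.81300, 0.84500, 0.86400, 0.94000, 0.94300]
ys = [4.9900, 3.6900, 3.0700, 1.5000, 1.4600]
c1, c2 = fit_log(xs, ys)
print(c1, c2)  # ≈ 0.98291 and -0.10574 for the question's data
```

This runs in a single pass over the data and recomputes cheaply every time the x values change, which fits the on-the-fly requirement in the question.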