My problem is the following:
A laser projects a set of points which are reflected off a metal surface and recorded by a camera attached to the side of the laser. The image the camera receives is, however, distorted.
In order to calibrate the camera I need to find a function of two variables (f(x,y)) which transforms the distorted (wrong) data points back into their originals so that the camera image can be used for accurate analysis.
I know the location (x and y values) of the original image and their corresponding camera positions (x' and y').
How can I use these to find a transfer function between the two data sets? I have already tried SVD with a 6th order polynomial merit function for multidimensional fits, which I found in "Numerical Recipes". Although I get reasonable results, they are not accurate enough.
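For reference, here is a minimal NumPy sketch of the kind of fit I mean (not my exact Numerical Recipes code): a 2D polynomial mapping from distorted coordinates (x', y') back to the true (x, y), solved by least squares, which NumPy's `lstsq` computes via SVD internally.

```python
import numpy as np

def design_matrix(xp, yp, order):
    """Monomial basis x'^i * y'^j for all i + j <= order."""
    cols = [xp**i * yp**j for i in range(order + 1)
                          for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_mapping(xp, yp, x, y, order=6):
    """Least-squares fit (SVD-based) of distorted -> true coordinates.

    xp, yp : distorted camera coordinates
    x, y   : corresponding true coordinates
    Returns one coefficient vector per output coordinate.
    """
    A = design_matrix(xp, yp, order)
    coeff_x, *_ = np.linalg.lstsq(A, x, rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeff_x, coeff_y

def apply_mapping(xp, yp, coeff_x, coeff_y, order=6):
    """Evaluate the fitted mapping on new distorted points."""
    A = design_matrix(xp, yp, order)
    return A @ coeff_x, A @ coeff_y
```

The function names and the monomial basis here are my own illustration; the idea is just a linear least-squares problem in the polynomial coefficients.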
Any help is greatly appreciated!!
This is a basic ML problem; I think you just need some guidance on how to apply standard ML techniques.
First, you want to generalize from your observations, so you need to be able to test your model. To do so, split your data set into folds of, say, 20%. Fit a model on 80% of your data and test it on the remaining 20%; this gives you an idea of its real-world performance. For a given architecture (e.g. a 6th order polynomial), train a model on each possible 80% subset, then test each one on its corresponding held-out 20%. This technique is called cross-validation.
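The procedure above can be sketched as follows. This is a self-contained illustration using a 1D polynomial fit (via `np.polyfit`) as a stand-in for your 2D calibration model; the fold logic is the same either way.

```python
import numpy as np

def cross_validate(x, y, order, k=5, seed=0):
    """k-fold cross-validation of a polynomial fit.

    Returns the average RMS error over the k held-out test folds.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))        # shuffle before splitting
    folds = np.array_split(idx, k)       # k disjoint folds
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], order)   # fit on 80%
        pred = np.polyval(coeffs, x[test])               # test on 20%
        errors.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return np.mean(errors)
```

With k=5 each fold is 20% of the data, matching the split described above.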
Beware that a high order model can lead to overfitting. Try a lower order first, then increase the order and stop as soon as the test error stops improving.