Can the Lagrange Interpolating polynomial be used in a machine learning algorithm?

My understanding of the Lagrange interpolating polynomial is that, given $n+1$ points, we can fit a polynomial of degree at most $n$ that passes through all of them, as a means of approximating the values between the points.

This polynomial is defined as:

$$L_n(x) = {(x-x_1)(x-x_2)\cdots(x-x_n) \over (x_0-x_1)(x_0-x_2)\cdots(x_0-x_n)}y_0 + \cdots + {(x-x_0)(x-x_1)\cdots(x-x_{n-1}) \over (x_n-x_0)(x_n-x_1)\cdots(x_n-x_{n-1})}y_n$$

Could this be more reliable than a neural network for a machine learning problem with one numerical input and one numerical output, where the training data is a set of points $(x, y)$, with $x$ the input and $y$ the output?
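As a concrete sketch of the formula above, here is a direct Python implementation (the function name `lagrange_eval` and the sample points are illustrative; for serious use, `scipy.interpolate.lagrange` or a barycentric form is more numerically stable):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Basis polynomial l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j);
        # it equals 1 at x_i and 0 at every other node.
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# The interpolant reproduces the training points exactly...
print(lagrange_eval([0.0, 1.0, 2.0], [1.0, 3.0, 2.0], 1.0))   # 3.0
# ...and reproduces any polynomial of degree <= n between them:
print(lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25 = 1.5**2
```

Note that this evaluation costs $O(n^2)$ per query point, and high-degree interpolation in this naive form is sensitive to rounding error.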

Best answer:

The problem with interpolation polynomials is that they "try too hard" to fit the data points: they pass through every training point exactly, noise included, and end up missing the underlying structure of the dataset. This problem is called overfitting, and you should be aware of it. It happens whenever you use a higher-degree polynomial than necessary for regression.
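A small experiment makes this concrete. Below is a sketch (NumPy, with hand-picked "noise" values so the run is reproducible): a degree-10 interpolant through 11 noisy samples of the line $y = x$ matches every sample exactly, yet between the samples its error against the true line is far larger than that of a simple degree-1 fit.

```python
import numpy as np

# 11 equally spaced training inputs and noisy outputs from the line y = x
# (fixed "noise" values chosen by hand so the example is reproducible).
x_train = np.linspace(-1.0, 1.0, 11)
noise = np.array([0.05, -0.08, 0.02, 0.09, -0.04,
                  0.07, -0.06, 0.03, -0.09, 0.01, -0.05])
y_train = x_train + noise

# Degree 10: the unique interpolating polynomial through all 11 points.
interp = np.polyfit(x_train, y_train, deg=10)
# Degree 1: a regression line that cannot chase the noise.
line = np.polyfit(x_train, y_train, deg=1)

# Evaluate both models between the training points and compare to the truth.
x_test = np.linspace(-0.95, 0.95, 201)
err_interp = np.max(np.abs(np.polyval(interp, x_test) - x_test))
err_line = np.max(np.abs(np.polyval(line, x_test) - x_test))
print(err_interp > err_line)  # True: the interpolant oscillates between the points
```

This is exactly the overfitting described above: zero training error, but much worse error everywhere else. It is closely related to the Runge phenomenon, where high-degree interpolation on equally spaced points oscillates near the interval endpoints even on noise-free data.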