Solving for the coefficients of a degree-n polynomial from (x, y) data


I want to find the coefficients of a degree-$n$ polynomial from $(x, y)$ data.

I do not really understand the Abel–Ruffini theorem. As I understand it, it says that if $n \geq 5$, you cannot algebraically solve a degree-$n$ polynomial for its roots.

Accepting that, what about finding the coefficients of a degree-$n$ polynomial when $n \geq 5$?

$$ y = a_n{x}^n+ a_{n-1}{x}^{n-1} + \cdots + a_2{x}^2+a_1x + a_0 $$

$$ a_n,a_{n-1},\cdots,a_1,a_0 \ \text{are unknown constants}, \qquad n \geq 5 $$

I want to find each coefficient: $$ a_n,a_{n-1},\cdots,a_1,a_0 $$

How many $(x, y)$ data points are needed? (The same as the number of coefficients?)

Is there a clean solution? (I should say in advance that I am not good with matrices.) Or is it impossible?

On BEST ANSWER

I think your question about solving for the coefficients given a set of $(x_i,y_i)$ data and your comment about not understanding the Abel–Ruffini theorem are not really related.

Regarding the Abel–Ruffini theorem: there simply exists no general formula in radicals for the roots of polynomials of degree $5$ and higher. There are quadratic, cubic, and quartic formulas, but that's it. The theorem says nothing about finding coefficients.


Regarding solving for the coefficients: yes, you need as many points as there are coefficients. That is, given $n$ points with distinct $x$-values, there is a unique polynomial of degree at most $n-1$ that passes through all $n$ points, for any integer $n>1$.

Suppose the data set is

$$\Big\{(x_1,f(x_1)),(x_2,f(x_2)),...,(x_n,f(x_n))\Big\}$$

There is a solution that involves matrices: the coefficient matrix is the Vandermonde matrix. Given a polynomial

$$f(x)=a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+...+a_1x+a_0$$

\begin{gather} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{bmatrix} =V^{-1}\begin{bmatrix} f(x_1) \\ f(x_2) \\ \vdots \\ f(x_n) \end{bmatrix} \end{gather}

where $V$ is the Vandermonde matrix with entries $V_{ij} = x_i^{\,j-1}$ (in the usual notation, each node $\alpha_i = x_i$).
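As a quick sketch of the matrix route in Python with NumPy (the data here is hypothetical, sampled from $y = x^5 - 2x + 3$ so the answer is easy to check):

```python
import numpy as np

# Hypothetical example: six points sampled from y = x^5 - 2x + 3,
# so we need six coefficients a_0, ..., a_5.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ys = xs**5 - 2*xs + 3

# Vandermonde matrix with V[i, j] = xs[i]**j, so that
# V @ [a_0, a_1, ..., a_5] = ys.
V = np.vander(xs, increasing=True)

# Solve the linear system (numerically preferable to forming V^{-1}).
coeffs = np.linalg.solve(V, ys)
print(coeffs)  # approximately [3, -2, 0, 0, 0, 1]
```

Note that `np.linalg.solve` is used rather than literally inverting $V$; it computes the same answer more stably.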


Without matrices, determining the coefficients boils down to slightly tedious, iterative computations. There are different methods, but deep down they are the same.

Newton's divided-difference method says that, given the same data set of $n$ points as above, the polynomial that passes through each point can be written in the form

$$f(x)=b_1+b_2(x-x_1)+b_3(x-x_1)(x-x_2)+...+b_n(x-x_1)(x-x_2)...(x-x_{n-1})$$

Where

$$b_1=f(x_1)\\b_2=f[x_2,x_1]=\frac{f(x_2)-f(x_1)}{x_2-x_1}\\b_3=f[x_3,x_2,x_1]=\frac{f[x_3,x_2]-f[x_2,x_1]}{x_3-x_1}\\.\\.\\.\\b_n=\frac{f[x_n,x_{n-1},...,x_2]-f[x_{n-1},x_{n-2},...,x_1]}{x_n-x_1}$$
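The divided-difference table above can be computed with a short loop. A minimal sketch in Python, using hypothetical data sampled from $y = x^2 + 1$ (so the result can be checked by hand):

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients b_1, ..., b_n (0-indexed here)."""
    b = list(ys)
    n = len(xs)
    for level in range(1, n):
        # Update from the end so lower-order differences are still available.
        for i in range(n - 1, level - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (xs[i] - xs[i - level])
    return b

def newton_eval(xs, b, x):
    """Evaluate the Newton form at x by Horner-like nesting."""
    result = b[-1]
    for i in range(len(b) - 2, -1, -1):
        result = result * (x - xs[i]) + b[i]
    return result

# Hypothetical data from y = x^2 + 1: (1, 2), (2, 5), (4, 17)
xs = [1.0, 2.0, 4.0]
b = newton_coeffs(xs, [2.0, 5.0, 17.0])
print(b)                         # [2.0, 3.0, 1.0]
print(newton_eval(xs, b, 3.0))   # 10.0, matching 3^2 + 1
```

Here $b_1 = 2$, $b_2 = (5-2)/(2-1) = 3$, and $b_3 = (6-3)/(4-1) = 1$, matching the formulas above.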


Equivalently, Lagrange's method for determining the coefficients of the polynomial that passes through all of the same $n$ points as previous is

$$f(x)=\sum_{i=1}^nL_i(x)f(x_i)$$

Where

$$L_i(x)=\prod_{j=1}^n\frac{x-x_j}{x_i-x_j},\space\space\space j\neq i$$

This looks admittedly complicated, but each $x_i$, $x_j$, and $f(x_i)$ is known, since these are the given data points.
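The Lagrange form translates directly into two nested loops. A sketch, reusing the same hypothetical $y = x^2 + 1$ data:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Basis polynomial L_i(x): equals 1 at xs[i], 0 at every other node.
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - xs[j]) / (xs[i] - xs[j])
        total += L * ys[i]
    return total

# Hypothetical data from y = x^2 + 1: (1, 2), (2, 5), (4, 17)
xs = [1.0, 2.0, 4.0]
ys = [2.0, 5.0, 17.0]
print(lagrange_eval(xs, ys, 3.0))  # approximately 10.0
```

Each $L_i$ is built so it vanishes at every node except $x_i$, which is why the sum reproduces all the data points exactly.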

After computing the coefficients of either form, then expanding and collecting terms into standard form, you will find the coefficients of interest.

That is, given $n$ points and $f(x)$ referring to either Newton's or Lagrange's polynomial shown above,

$$f(x)=a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+...+a_1x+a_0$$
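In practice, libraries perform this expansion for you. For instance, NumPy's polynomial `polyfit` (shown here as a sketch on the same hypothetical $y = x^2 + 1$ data) returns the standard-form coefficients $a_0, \dots, a_{n-1}$ directly; with degree $n-1$ and $n$ points, the "fit" is exact interpolation:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Hypothetical data from y = x^2 + 1
xs = np.array([1.0, 2.0, 4.0])
ys = xs**2 + 1

# A degree n-1 fit through n points interpolates exactly.
# Coefficients are returned in increasing order: [a_0, a_1, a_2].
coeffs = P.polyfit(xs, ys, deg=len(xs) - 1)
print(coeffs)  # approximately [1, 0, 1]
```

This solves essentially the same Vandermonde system as the matrix method above, just packaged behind one call.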