Factorisation of large polynomials and Galois theory


As I understand it, one of the consequences of Galois theory is that there is no way of expressing the solutions to a general polynomial of degree 5 or higher in terms of radicals. Would a theory that can produce a way of expressing the solutions to a polynomial under specific restrictive conditions contradict this theory, or would that be a loophole of some kind?

I have thought of a method of factoring large polynomials that requires the roots to have certain properties. It makes sense on a basic level, but I would like to know whether it can actually work for large polynomials.

EDIT 1:

$f(x) = ax^2 + bx + c$

$f(x) = [a, b, c].[x^2, x, 1]$

The two vectors are orthogonal when $f(x) = 0$. There are two vectors where they are orthogonal (one for each root); the cross product of those two vectors gives a vector perpendicular to $[a, b, c]$. This gives 3 equations in 3 variables to solve.
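A quick way to sanity-check this (my own sketch using SymPy, not part of the original method) is to take a monic quadratic with symbolic roots $s$ and $t$ and verify that the cross product of the two root vectors is a scalar multiple of the coefficient vector:

```python
# Sketch: for a monic quadratic with roots s and t, the cross product of
# the root vectors [s^2, s, 1] and [t^2, t, 1] is parallel to the
# coefficient vector [1, -(s+t), s*t] (Vieta's formulas).
import sympy as sp

s, t = sp.symbols('s t')
u = sp.Matrix([s**2, s, 1])
v = sp.Matrix([t**2, t, 1])
w = u.cross(v)  # perpendicular to both root vectors

# Monic quadratic with roots s, t: coefficients [1, -(s+t), s*t].
coeffs = sp.Matrix([1, -(s + t), s*t])

# w turns out to equal (s - t) * coeffs, so the difference simplifies away.
print(sp.simplify(w - (s - t)*coeffs))  # zero vector
```

So the scalar multiple here is the factor $(s - t)$, which is nonzero exactly when the roots are distinct.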

The concept is that for a degree-$n$ polynomial it is possible to create $n$ root vectors in $\mathbb{R}^{n+1}$ that can be placed in a matrix whose first row contains the unit vectors. Expanding the determinant of that matrix produces $n+1$ equations in $n+1$ variables: the $n$ roots plus the scaling variable $\lambda$. The variables can then be solved for as simultaneous equations, provided there are no repeated roots and no root at zero.

If there is a solution at zero then the determinant has a zero column and is therefore zero and the set of solutions cannot be found with this method.

If there is a repeated root of the polynomial then two rows of the matrix are equal, so subtracting one from the other gives a zero row; the determinant of the system is therefore zero and the set of solutions cannot be found with this method.
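The repeated-root failure can be illustrated directly (again a SymPy sketch of my own): with a repeated root two root-vector rows coincide, so every $3 \times 3$ minor of the system, and hence every component of the determinant expansion, vanishes.

```python
# Sketch: repeat the root s in a cubic's root-vector rows. Each component
# of the determinant expansion is a 3x3 minor with two equal rows, so all
# of them are identically zero and the method yields no information.
import sympy as sp

s, t = sp.symbols('s t')
rows = sp.Matrix([[x**3, x**2, x, 1] for x in (s, s, t)])  # root s repeated

# Minors obtained by deleting each column j of the 3x4 row matrix.
minors = [rows[:, [k for k in range(4) if k != j]].det() for j in range(4)]
print([sp.simplify(m) for m in minors])  # [0, 0, 0, 0]
```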

EDIT 2:

A cubic function would look something like this:

$$f(x) = ax^3 + bx^2 + cx + d$$ $$f(x) = [a,b,c,d].[x^3,x^2,x,1]$$

If it has roots, say $s$, $t$ and $r$, then we can write three equations:

$$0 = [a,b,c,d].[s^3,s^2,s,1]$$ $$0 = [a,b,c,d].[t^3,t^2,t,1]$$ $$0 = [a,b,c,d].[r^3,r^2,r,1]$$

Finding a vector orthogonal to these three vectors can be done by generalizing the cross product to higher dimensions and expanding the formal determinant:

$$\begin{vmatrix} i & j & k & l \\ s^3 & s^2 & s & 1 \\ t^3 & t^2 & t & 1 \\ r^3 & r^2 & r & 1 \end{vmatrix}$$

The expansion of this determinant is parallel to the vector that all three row vectors are orthogonal to, i.e. the vector of coefficients $[a,b,c,d]$, so we can set it equal to $\lambda [a,b,c,d]$. Expanding the determinant componentwise then gives four equations in four variables: $s$, $t$, $r$ and $\lambda$.
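For what it's worth, the cubic construction can be checked symbolically (a SymPy sketch of my own; `gcross` is a helper name I made up for the generalized cross product, computed as the cofactor expansion along the unit-vector row):

```python
# Sketch: build the three root-vector rows for a cubic with symbolic
# roots s, t, r, compute the generalized cross product via signed 3x3
# minors, and check it is a scalar multiple of the monic cubic's
# coefficient vector [1, -(s+t+r), st+sr+tr, -str].
import sympy as sp

s, t, r = sp.symbols('s t r')
rows = sp.Matrix([[x**3, x**2, x, 1] for x in (s, t, r)])  # 3x4

def gcross(M):
    # Component j is (-1)^j times the minor with column j deleted,
    # i.e. the cofactor expansion of the determinant along the unit row.
    n = M.cols
    return sp.Matrix([(-1)**j * M[:, [k for k in range(n) if k != j]].det()
                      for j in range(n)])

w = gcross(rows)

# Monic cubic with roots s, t, r: coefficients from Vieta's formulas.
coeffs = sp.Matrix([1, -(s + t + r), s*t + s*r + t*r, -s*t*r])

# Since coeffs[0] = 1, the scalar lambda is just the first component of w
# (a Vandermonde-type factor, nonzero for distinct roots).
lam = w[0]
print((w - lam*coeffs).expand())  # zero vector
```

The check works because both $w$ and $[a,b,c,d]$ lie in the one-dimensional orthogonal complement of the three root vectors, so for distinct roots they must be scalar multiples of each other.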