Does every polynomial divide its (Galois-theoretic) norm?


In Hermann Weyl's The Algebraic Theory of Numbers (1940; reprinted 1951), section 1.9 discusses change of base field (I believe).  I'm having a bit of trouble following it.

Let $\kappa$ be a degree-$n$ field extension of $k$.  Then $\kappa[\vec{x}]$ is a free module of rank $n$ over $k[\vec{x}]$ — in particular, it shares a basis with $\kappa/k$.  (Left-)multiplication by an element $\varphi(\vec{x}) \in \kappa[\vec{x}]$ is thus a $k[\vec{x}]$-linear map, with a characteristic polynomial, its field equation, and a determinant, its norm.  Weyl writes:

The field equation of the polynomial $\varphi(\vec{x})$ in $\kappa[\vec{x}]$ over $k[\vec{x}]$ will be $$N_{\kappa[\vec{x}]/k[\vec{x}]}(t-\varphi(\vec{x}))=t^n-f_1t^{n-1}\pm\cdots\pm f_n$$ where the coefficients $f_i\in k[\vec{x}]$. By expressing the fact that this polynomial of $t$ vanishes after the substitution $t=\varphi(\vec{x})$ we get the equation \begin{equation} f_n=N_{\kappa[\vec{x}]/k[\vec{x}]}(\varphi(\vec{x})) \\ =\varphi(\vec{x})\cdot\{f_{n-1}-f_{n-2}t\pm\dots\pm t^{n-1}\}_t=\varphi(\vec{x})' \tag{*} \end{equation} proving the important principle that the norm of $\phi(\vec{x})$ contains $\varphi(\vec{x})$ itself as a factor.

I've tried to have a light touch with the notation. I think the $\phi$ is just a typo for $\varphi$. The prime doesn't seem to mean differentiation, and might be just a comma. If more context is necessary, the text is available via Google books preview.

I can follow him up to the second equals sign in (*). What is the $\{\dots\}_t$ construction? How does the second term arise? It would seem as though it somehow represents the product of all other Galois conjugates of $\varphi$ under the automorphisms of $\kappa/k$, but that such a product could be expressed in terms of the $\{f_j\}_{j=1}^n$ is not at all obvious to me.
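For what it's worth, the divisibility claim does check out in the simplest case I could try: a sympy sketch of my own (not from Weyl), assuming $k=\mathbb{Q}$, $\kappa=\mathbb{Q}(i)$, one variable, where the norm is just $\varphi\bar\varphi$ with $\bar\varphi$ the conjugate under $i \mapsto -i$:

```python
import sympy as sp

x = sp.symbols('x')
# phi in Q(i)[x]; here N_{Q(i)[x]/Q[x]}(phi) = phi * conj(phi),
# where conj is the nontrivial automorphism i -> -i
phi = x**2 + sp.I*x + 3
phi_bar = phi.subs(sp.I, -sp.I)
norm = sp.expand(phi * phi_bar)   # lands in Q[x]: x**4 + 7*x**2 + 9
q, r = sp.div(norm, phi, x)       # divide back in Q(i)[x]: r == 0, q == phi_bar
```

So the norm has coefficients in $k[x]$ yet is divisible by $\varphi$ in $\kappa[x]$, as claimed; what I'm missing is how Weyl's $\{\dots\}$ expression produces the cofactor.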

On BEST ANSWER

Let $L/k$ be a finite field extension of degree $N$, let $A = k[x]$ be the polynomial ring, and let $B = A \otimes_k L = L[x]$.

Choosing a basis $L = \sum_{j=1}^N k\,b_j$ we get a basis $B = \sum_{j=1}^N A\,b_j$ and an isomorphism of $A$-modules $B \to A^N$. For $u \in B$, find the coefficients $m(u)_{i,j} \in A$ such that $u\,b_j = \sum_{i=1}^N m(u)_{i,j}\, b_i$; then $m(u)$ is a matrix in $M_N(A)$ and $u \mapsto m(u)$ is a ring embedding $B \to M_N(A)$. Let

$$N_{B/A}(u) = \det(m(u)) \in A.$$ Note that $m(u)$ depends on the chosen basis $(b_j)$ but $\det(m(u))$ does not. Let $I$ be the identity matrix of $M_N(A)$ and view $t$ as the unknown of $A[t]$, so that $tI - m(u) \in M_N(A[t])$ (with $t$ appearing only on the diagonal), and let $$P(u)(t) = \det(t I - m(u)) = \sum_{l=0}^N (-1)^{N-l} f_{N-l}\, t^l \in A[t],$$ where $f_0 = 1$.

Then $f_N = (-1)^N P(u)(0) = \det(m(u)) = N_{B/A}(u)$.
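These steps can be sketched in sympy for the concrete case $L = \mathbb{Q}(i)$, $k = \mathbb{Q}$, basis $(b_1, b_2) = (1, i)$, $N = 2$; the helper `m` below is my own illustration of the embedding, not something from Weyl:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

def m(u):
    """Matrix in M_2(Q[x]) of multiplication by u on B = Q(i)[x], basis (1, i)."""
    cols = []
    for b in (sp.S.One, sp.I):
        re, im = sp.expand(u * b).as_real_imag()  # coordinates of u*b_j in (1, i)
        cols.append([re, im])
    return sp.Matrix(cols).T                      # j-th column = coords of u*b_j

u = x + sp.I
M = m(u)                                  # [[x, -1], [1, x]]
P = sp.expand((t*sp.eye(2) - M).det())    # field equation: t**2 - 2*x*t + x**2 + 1
fN = sp.expand((-1)**2 * P.subs(t, 0))    # f_N = (-1)^N P(u)(0) = det(m(u))
```

Here `m` is multiplicative, $m(uv) = m(u)m(v)$, which is exactly the ring-embedding property used below.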

What Weyl is saying is that, $P(u)(t)$ being a polynomial, we can evaluate it at $t = m(u)$,

and (Cayley–Hamilton theorem) we obtain $$P(u)(m(u)) = 0 \implies \sum_{l=0}^N (-1)^{N-l} f_{N-l}\, m(u)^l = 0 \in M_N(A),$$ exactly as if we had $P(u)(m(u)) = \det(m(u) I - m(u)) = \det(0) = 0$ (which we don't: $P(u)(a) = \det(a I - m(u))$ holds only for scalars $a \in A$, not for matrices in $M_N(A)$).

Then the point is that, since $f_{N-l} \in A$ and $B \to M_N(A)$ is an embedding, we obtain $$\sum_{l=0}^N (-1)^{N-l} f_{N-l}\, m(u)^l = 0 \in M_N(A) \implies \sum_{l=0}^N (-1)^{N-l} f_{N-l}\, u^l = 0 \in B.$$
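In the running $\mathbb{Q}(i)$ illustration ($u = x + i$, $N = 2$), the field equation works out by hand to $t^2 - 2xt + (x^2+1)$, and the right-hand identity can be checked directly in $B = \mathbb{Q}(i)[x]$:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = x + sp.I
# field equation of u = x + i, i.e. det(t*I - m(u)), worked out by hand
P = t**2 - 2*x*t + (x**2 + 1)
# Cayley-Hamilton transported through the embedding B -> M_N(A): P(u) = 0 in B
vanishes = sp.expand(P.subs(t, u))   # collapses to 0 in Q(i)[x]
```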

And hence, isolating the $l = 0$ term and multiplying by $(-1)^N$, $$N_{B/A}(u) = f_N = -u \sum_{l=1}^N (-1)^{l} f_{N-l}\, u^{l-1} \in B,$$ which is exactly Weyl's principle: the norm of a polynomial $u \in L[x]$, though it lies in $k[x]$, contains $u$ itself as a factor in $L[x]$.
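Closing the loop on the same example: $N_{B/A}(x+i) = x^2 + 1$ (worked out by hand), and dividing it by $u = x+i$ in $\mathbb{Q}(i)[x]$ leaves remainder zero, with quotient the conjugate $x - i$:

```python
import sympy as sp

x = sp.symbols('x')
u = x + sp.I
norm_u = x**2 + 1            # N_{B/A}(u) for u = x + i
q, r = sp.div(norm_u, u, x)  # polynomial division in Q(i)[x]
# r == 0 and q == x - I: the norm contains u as a factor, as Weyl claims
```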