A property of fields


I was studying about Fields (specifically about finite field extensions) and I found a statement that I'm trying to prove.

For every field $K$ and every finite subgroup $G\subset \text{Aut}(K)$, there are elements $x_1,x_2,\dots ,x_n, y_1,y_2, \dots, y_n\in K$ such that $$\sum\limits_{i=1}^nx_i\sigma(y_i)=\delta_{\sigma,id_K}$$ for every $\sigma\in G$, where $id_K$ is the identity and $\delta_{\sigma,id_K}$ is the Kronecker delta.

I've tried to find the text where I read it but I can't find it anymore. I am assuming that $n=|G|,$ but I really don't know how to approach this, or if it's even true.

Any help would be very appreciated.

On BEST ANSWER

I assume $n=|G|$.

Use the following lemma:

Lemma 1. Let $M$ be a monoid and $f_i: M\to K^\times$ a family of distinct monoid morphisms. Then the $(f_i)$ form a linearly independent family in $\mathcal{F}(M,K)$ (the $K$-vector space of functions $M\to K$).

This lemma is very instructive to prove, so I'll leave it to you (it's fundamental in Galois theory); it's essentially a generalization of the fact that eigenvectors for distinct eigenvalues are automatically linearly independent.

How does that help us? The automorphisms $\sigma\in G$ restrict to monoid morphisms $K^\times\to K^\times$, and they are pairwise distinct, so by Lemma 1 they form a linearly independent family in $\mathcal{F}(K^\times, K)$.
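As a sanity check (my own illustration, not part of the argument), here is Lemma 1 in a tiny concrete case: the maps $n\mapsto a^n$ for distinct $a$ are distinct monoid morphisms $(\mathbb{N},+)\to\mathbb{Q}^\times$, and a brute-force search finds no small nonzero combination of them that vanishes everywhere.

```python
from fractions import Fraction
from itertools import product

# Monoid M = (N, +); for each base a, n -> a**n is a monoid morphism into Q^x.
# Lemma 1 predicts f_2, f_3, f_5 are linearly independent in F(M, Q).
bases = [2, 3, 5]
fs = [lambda n, a=a: Fraction(a) ** n for a in bases]

# Brute-force check: no nonzero (c1, c2, c3) with small integer entries
# makes c1*2^n + c2*3^n + c3*5^n vanish for all n = 0..5 simultaneously.
for coeffs in product(range(-3, 4), repeat=3):
    if coeffs == (0, 0, 0):
        continue
    vanishes = all(sum(c * f(n) for c, f in zip(coeffs, fs)) == 0
                   for n in range(6))
    assert not vanishes
```

Of course this only probes small coefficients; the lemma itself rules out every nontrivial combination.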

We then use another lemma:

Lemma 2. Let $K$ be a field, $X$ a set and $f_1,...,f_n : X\to K$ be a family of linearly independent functions. Then there are $x_1,...,x_n \in X$ such that the matrix $(f_i(x_j))_{i,j}$ is invertible.

This lemma is a bit more involved, but not by much; here's a proof:

We prove the lemma by induction on $n$. For $n=1$ it's clear: if $f_1:X\to K$ took only the value $0$, it wouldn't be linearly independent, so there is $x_1\in X$ such that $f_1(x_1)\neq 0$.

Going from $n$ to $n+1$: assume the result holds for any family of $n$ linearly independent functions, and let $f_1,\dots,f_{n+1} : X\to K$ be linearly independent. In particular $f_1,\dots,f_n$ are linearly independent, so we find $x_1,\dots,x_n$ as in the lemma. Now consider the map $F:X\to K$ sending $x$ to $\det (f_i(x_j))_{1\leq i,j \leq n+1}$, where we set $x_{n+1} = x$. Let's prove that $F$ takes a nonzero value: if it does, we take $x_{n+1}$ to be a point where it does, and we're through.

As it turns out, $F(x) = \displaystyle\sum_{i=1}^{n+1} M_i f_i(x)$, where $M_i$ is a signed minor (expand the determinant along the last column, the one containing the $f_i(x)$). Note that $M_{n+1} = \det (f_i(x_j))_{1\leq i,j\leq n}$ up to a sign, so $M_{n+1}\neq 0$. Therefore $F$, being a nonzero linear combination of the linearly independent $f_i$, can't be the zero function: it takes a nonzero value, and we are done with the induction.
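The induction above is effectively a greedy search: extend the list of points one at a time, keeping the matrix invertible at each step. Here is a small Python sketch of that search (the names `det` and `find_points` are mine, and exact rational arithmetic stands in for a general field $K$):

```python
from fractions import Fraction

def det(m):
    """Determinant by fraction-exact Gaussian elimination."""
    m = [row[:] for row in m]
    n = len(m)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            factor = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= factor * m[c][k]
    return d

def find_points(fs, X):
    """Greedy version of the induction in Lemma 2: extend the point
    list one x at a time, keeping (f_i(x_j)) invertible."""
    pts = []
    for k in range(1, len(fs) + 1):
        for x in X:
            trial = pts + [x]
            mat = [[Fraction(fs[i](t)) for t in trial] for i in range(k)]
            if det(mat) != 0:
                pts = trial
                break
        else:
            raise ValueError("functions not independent on X")
    return pts

# f_i(x) = x**i for i = 0, 1, 2 on X = {0,...,9}: independent, so points exist
# (here the greedy search recovers a Vandermonde-type matrix).
fs = [lambda x, i=i: x ** i for i in range(3)]
pts = find_points(fs, range(10))
```

The proof of the inductive step guarantees the greedy search never gets stuck: whatever $x_1,\dots,x_k$ were chosen, some $x_{k+1}$ extends them.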

Where does that leave us? Apply Lemma 1 to the $(\sigma)_{\sigma\in G}$ to get that they are linearly independent in $\mathcal{F}(K^\times, K)$, and then apply Lemma 2 to get $x_\tau, \tau \in G$ such that the matrix $M=(\sigma(x_\tau))_{\sigma, \tau \in G}$ is invertible. Then find a column vector $Y=(y_\tau)_{\tau \in G}$ such that $MY = (\delta_{\sigma, id_K})_{\sigma\in G}$ (such a $Y$ exists because $M$ is invertible).

Writing out what this means: for all $\sigma \in G$, $\displaystyle\sum_{\tau \in G}\sigma(x_\tau)y_\tau = \delta_{\sigma, id_K}$. This is the desired identity with the roles of the two families swapped: order $G$ as $\tau_1,\dots,\tau_n$ and set $x_i := y_{\tau_i}$, $y_i := x_{\tau_i}$ to get exactly $\sum_i x_i\sigma(y_i)=\delta_{\sigma,id_K}$.
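To see the statement in action (again my own illustration), take the smallest nontrivial case: $K=\mathbb{F}_9$, modeled as $\mathbb{F}_3[t]/(t^2+1)$, and $G=\{id, \mathrm{Frob}\}$ where $\mathrm{Frob}(z)=z^3$. A brute-force search over all choices finds elements satisfying $\sum_i x_i\sigma(y_i)=\delta_{\sigma,id}$:

```python
from itertools import product

# K = GF(9) = F_3[t]/(t^2 + 1); an element a + b*t is stored as the pair (a, b).
def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def mul(u, v):  # uses t^2 = -1
    a, b = u
    c, d = v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def frob(u):    # z -> z^3 is the nontrivial automorphism: a + b*t -> a - b*t
    return (u[0], (-u[1]) % 3)

K = [(a, b) for a in range(3) for b in range(3)]
ONE, ZERO = (1, 0), (0, 0)
G = [lambda z: z, frob]    # G = {id, Frobenius}
targets = [ONE, ZERO]      # delta_{sigma, id} for sigma = id, Frobenius

# Brute-force search for x_1, x_2, y_1, y_2 in K with
# x_1*sigma(y_1) + x_2*sigma(y_2) = delta_{sigma, id} for every sigma in G.
solution = None
for x1, x2, y1, y2 in product(K, repeat=4):
    if all(add(mul(x1, s(y1)), mul(x2, s(y2))) == tgt
           for s, tgt in zip(G, targets)):
        solution = (x1, x2, y1, y2)
        break
```

The theorem guarantees the search succeeds; in practice one would of course solve the linear system $MY=\delta$ from the proof instead of brute-forcing.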

Bonus: a second proof of Lemma 2 not involving determinants. Let $V$ be the (finite-dimensional) sub-vector space of $\mathcal{F}(X,K)$ generated by the $f_i$. For $x\in X$ consider $ev_x : V\to K, f\mapsto f(x)$. Then, by definition of the zero map, $\displaystyle\bigcap_{x\in X}\ker (ev_x) = \{0\}$. Now argue with dimensions: as long as the intersection of finitely many kernels is nonzero, some $ev_x$ cuts its dimension down, so there are $x_1,\dots,x_n$ with $\displaystyle\bigcap_{i=1}^n\ker (ev_{x_i}) = \{0\}$. For those points the map $V\to K^n, f\mapsto (f(x_1),\dots,f(x_n))$ is injective, hence bijective by dimension count, which is exactly the invertibility of $(f_i(x_j))$.