What is going on in this proof about finitely-generated modules?


So this is proposition $4.8$ in Lorenzini's "An invitation to arithmetic geometry":

Let $A$ be a domain, integrally closed in its field of fractions $K$. Let $L/K$ be a separable extension of degree $n$. Let $\{ e_1, \dots, e_n \} \subset B$ be a basis for $L$ over $K$, where $B$ is the integral closure of $A$ in $L$. Then there exists a nonzero element $d \in A$ such that the $A$-module $B$ is contained in the free $A$-module generated by $e_1/d, \dots, e_n/d$.

So in the proof they begin taking $\alpha \in L$ and representing it as a sum

$\alpha = x_1 e_1 + \dotsb + x_n e_n$ with $x_i \in K$, which can be done uniquely.

They then say, for some nonzero $d \in A$, that

$\alpha = d x_1 (e_1 / d) + \dotsb + d x_n (e_n /d)$ - so far so good.

The next claim they make I do not understand:

"We need to show the existence of a nonzero element $d$ such that $d x_i \in A$ for all $i = 1, \dots, n$, whenever $\alpha = \sum_{i=1}^n x_i e_i$ is integral over $A$."

$Q1.$ Why does this imply the Proposition?

Moving forward, they fix an algebraic closure $\overline{K}$ of $K$, let $\sigma_1, \dots, \sigma_n \colon L \to \overline{K}$ denote the $n$ distinct embeddings of $L$, and define $M := (\sigma_i (e_j))_{1 \leq i, j \leq n}$ with adjugate (classical adjoint) matrix $M^*$. They then make two claims.

Claim $(i)$: Since the entries of $M^*$ are $(n-1) \times (n-1)$ minors of $M$, they are integral over $A$.

Claim $(ii)$: Since $\alpha \in B$, each $\sigma_i(\alpha) ,$ $ i = 1,...,n$, is integral over $A$.

$Q2.$ The entries of $M^*$ are sums of products of the $\sigma_i(e_j)$, which, by definition of $\sigma_i \colon L \to \overline{K}$, land in the algebraically closed field $\overline{K}$. What guarantees that any of these sums of products will actually be integral over $A$, seeing as they are not necessarily in $K$?

$Q3.$ Why would an arbitrary element $\alpha \in L$ be in $B$? If this statement is incorrect, what would make it correct, so that $\sigma_i(\alpha)$ is actually integral over $A$?

After this, they conclude that $\operatorname{det}(M)\, x_i$ is integral over $A$, but that it may not lie in $K$.
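For context, here is my reconstruction of the step that yields this conclusion, using the adjugate identity $M^* M = \operatorname{det}(M) I_n$ (notation as above; the $x_j \in K$ are fixed by each embedding):

```latex
% Applying each embedding \sigma_i to \alpha = \sum_j x_j e_j:
\sigma_i(\alpha) = \sum_{j=1}^n x_j \,\sigma_i(e_j),
\qquad\text{i.e.}\qquad
M \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}
 = \begin{pmatrix} \sigma_1(\alpha) \\ \vdots \\ \sigma_n(\alpha) \end{pmatrix}.
% Multiplying on the left by the adjugate M^* and using M^* M = det(M) I_n:
\operatorname{det}(M)\, x_i = \sum_{j=1}^n (M^*)_{ij}\, \sigma_j(\alpha),
% a sum of products of elements integral over A, hence itself integral over A.
```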

$Q4.$ How could this be possible? Could someone give me an example? It would have to involve some complex roots perhaps...
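A small concrete instance (my own hypothetical example, not from the book, and real rather than complex): take $A = \mathbb{Z}$, $K = \mathbb{Q}$, $L = \mathbb{Q}(\sqrt{2})$ with basis $e_1 = 1$, $e_2 = \sqrt{2}$. The sketch below builds $M$ numerically and shows $\operatorname{det}(M) = -2\sqrt{2}$, which is integral over $\mathbb{Z}$ (it satisfies $x^2 - 8 = 0$) yet irrational, so not in $K$:

```python
import math

# Hypothetical instance: A = Z, K = Q, L = Q(sqrt(2)), basis e1 = 1, e2 = sqrt(2).
# The two K-embeddings send sqrt(2) to +sqrt(2) and -sqrt(2) respectively.
s = math.sqrt(2)
M = [[1.0,  s],   # row i = sigma_i applied to (e_1, e_2)
     [1.0, -s]]

# det(M) = 1*(-s) - s*1 = -2*sqrt(2): integral over Z, but irrational,
# hence outside K = Q.
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det_M)       # approximately -2.8284271
print(det_M ** 2)  # approximately 8.0: det(M)^2 is rational, back in K
```

Note that $\operatorname{det}(M)^2 = 8$ does land in $K$, which is why the proof can later work with the square of the determinant.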

The rest of the proof looks tame enough, provided the above can be made sense of. Thank you. Let me know if a picture of the proof's text would help, and I can provide it.

EDIT: I realize that if $\alpha$ is integral over $A$, then it definitely lies in the integral closure of $A$ in $L$, which here is $B$; so $Q3$ should instead ask why this implies that $\sigma_i(\alpha)$ is integral over $A$.

EDIT2: Ok, I understand now that $\sigma_i(\alpha)$ is integral over $A$ essentially because ring homomorphisms commute with polynomials: $\sigma(f(\beta)) = f(\sigma(\beta))$ whenever $\sigma$ fixes the coefficients of $f$, so if $\beta$ is integral over the coefficient ring of $f$, then $f(\sigma(\beta)) = 0$ witnesses the integrality of $\sigma(\beta)$ over that same ring. So I am mostly just wondering about $Q4$ now.
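Written out in symbols (my paraphrase of the standard argument, with $\beta \in B$ and $\sigma$ one of the embeddings, which fixes $K$ and hence $A$ pointwise):

```latex
% beta satisfies a monic equation with coefficients in A:
\beta^m + a_{m-1}\beta^{m-1} + \dots + a_1\beta + a_0 = 0, \qquad a_i \in A.
% Applying the ring homomorphism sigma, which fixes each a_i:
0 = \sigma(0) = \sigma(\beta)^m + a_{m-1}\sigma(\beta)^{m-1} + \dots + a_1\sigma(\beta) + a_0.
% Hence sigma(beta) satisfies the same monic polynomial, so it is integral over A.
```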

Also, I get that $Q1$ holds because then $\alpha = \sum_i d x_i (e_i/d)$, so it's an $A$-linear combination of the $e_i/d$ and hence is contained in the free $A$-module generated by these elements.