'Classical' proof of Main Theorem of elimination theory by Mumford


I am stuck on an argument in the proof of the Main Theorem of elimination theory (pages 33-35) in Mumford's Algebraic Geometry I: Complex Projective Varieties. The theorem states that the projection $p_2: \mathbb{P}^n \times \mathbb{P}^m \to \mathbb{P}^m$ is closed, i.e., if $Z \subset \mathbb{P}^n \times \mathbb{P}^m$ is a closed algebraic set, then so is $p_2(Z)$.

Mumford gives two proofs: a modern one using the Nullstellensatz, and a classical approach based on resultants. There is a step in the second proof (pages 34-35) that I do not understand. Mumford makes some reduction steps, after which the task is to show that if $S \subset \mathbb{P}^1 \times \mathbb{C}^m$ is closed, then $p_2(S)$ is closed too.

Assume $S$ is defined by $f_i(Z_1,...,Z_m, X,Y)=0$, $1 \le i \le l$, where the $f_i \in \mathbb{C}[Z_1,...,Z_m, X,Y]$ are homogeneous in $X$ and $Y$ of degree $d$, and the $Z_i$ are coordinates on $\mathbb{C}^m$. Look at the resultant of the following two polynomials in $X, Y$:

$$R(\sum t_i f_i(Z; X, Y), \sum s_i f_i(Z; X,Y))$$

and expand it as a polynomial in the $t_i$ and $s_i$:

$$R= \sum R_{\alpha \beta}(Z) t^{\alpha} s^{\beta}$$

Mumford claims that the equations $R_{\alpha \beta}(Z)=0$ for all $\alpha, \beta$ define the image $p_2(S)$.
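To see this claim in a small concrete case (my own toy example, not Mumford's), take $m = 1$, $l = 2$, $f_1 = X^2 - Z Y^2$, $f_2 = XY$, and dehomogenize at $Y = 1$ (harmless here, since no point of $S$ lies over $Y = 0$). A quick check with sympy:

```python
from sympy import symbols, resultant, expand, factor

X, Z, t1, t2, s1, s2 = symbols('X Z t1 t2 s1 s2')

# f_i(Z; X, Y) dehomogenized at Y = 1
f1 = X**2 - Z    # from X^2 - Z*Y^2
f2 = X           # from X*Y

F = t1*f1 + t2*f2
G = s1*f1 + s2*f2

# Resultant in X; its coefficients in t, s are the R_{alpha beta}(Z)
R = expand(resultant(F, G, X))
print(factor(R))  # equals -Z*(t1*s2 - t2*s1)**2 (up to sign/ordering)
```

So the common zero locus of the $R_{\alpha \beta}$ is $\{Z = 0\}$, which is exactly $p_2(S)$ here: a common zero of $f_1, f_2$ with $(X,Y) \neq (0,0)$ forces $X = 0$ and then $Z = 0$.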

In general, if $k$ is an algebraically closed field (here $k= \mathbb{C}$) and $f(X,Y)= \sum_{i=0}^n a_i X^{n-i} Y^i$, $g(X,Y)= \sum_{j=0}^m b_j X^{m-j} Y^j$ are homogeneous polynomials, then the resultant

$$R(f,g):= R(a_0,...,a_n; b_0,...,b_m)$$

has the property of being zero iff $f$ and $g$ have a common root $(x,y) \neq (0,0)$. Back to our problem: assume $Z_0=(z_1,...,z_m) \in \mathbb{C}^m$ annihilates the resultant $R$, that is, $R_{\alpha \beta}(Z_0)=0$ for all $\alpha, \beta$. Why does this imply that the polynomials $f_i(Z_0, X,Y)$ have a common zero $\neq (0,0)$ in $X,Y$?
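The basic property is easy to check computationally; here is a toy example of my own (dehomogenized at $Y = 1$, which is fine when no root lies at $[1:0]$):

```python
from sympy import symbols, resultant

X = symbols('X')

f = (X - 1)*(X - 2)   # roots 1 and 2
g = (X - 1)*(X - 3)   # shares the root 1 with f
h = (X - 3)*(X - 4)   # shares no root with f

print(resultant(f, g, X))  # 0: common root
print(resultant(f, h, X))  # 12 = h(1)*h(2), nonzero: no common root
```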

This all boils down to the following question:

Assume $g_i(X,Y)\in \mathbb{C}[X,Y]$, $1 \le i \le l$, are homogeneous polynomials of degree $d$, and that for every pair of tuples $(t_1,...,t_l), (s_1,...,s_l) \in \mathbb{C}^l$ the resultant $R(\sum t_i g_i(X, Y), \sum s_i g_i(X,Y))$ is zero, i.e. $\sum t_i g_i(X, Y)$ and $\sum s_i g_i(X,Y)$ have a common root.

Why do the $g_i(X,Y)$ have a common zero $(x,y) \neq (0,0)$?
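Note the easy direction: if the $g_i$ do have a common zero, then every member of the pencil vanishes there, so all of these resultants vanish identically. A sketch with a made-up example, $g_1 = X(X-Y)$, $g_2 = X(X+Y)$, which share the zero $(0,1)$:

```python
from sympy import symbols, resultant, expand

X, t1, t2, s1, s2 = symbols('X t1 t2 s1 s2')

# Dehomogenized at Y = 1; both share the root X = 0, i.e. (x, y) = (0, 1)
g1 = X*(X - 1)   # from X*(X - Y)
g2 = X*(X + 1)   # from X*(X + Y)

# Every combination has the common factor X, so the resultant vanishes
# identically in t1, t2, s1, s2
R = resultant(expand(t1*g1 + t2*g2), expand(s1*g1 + s2*g2), X)
print(expand(R))  # 0
```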

Best answer:

Let's do the case of three polynomials $f,g,h$, and suppose they have no common root. Your hypothesis implies in particular that for any $b,c \in \mathbb C$, the polynomials $bg+ch$ and $f$ have a common root, and this root has to be one of the roots of $f$.

But for every root $\alpha$ of $f$, the set of pairs $(b,c)$ such that $bg(\alpha) + ch(\alpha) \neq 0$ is a nonempty Zariski open set $U_\alpha$ of $\mathbb C^2$ (since either $g(\alpha) \neq 0$ or $h(\alpha) \neq 0$).

If we take the intersection of these $U_\alpha$ over the finitely many roots $\alpha$ of $f$, we get a nonempty set, and for any $(b,c)$ in this set $bg+ch$ does not vanish at any root of $f$.

Contradiction!
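To make the answer's argument concrete (a made-up example, not from the answer): take $f = X^2 - Y^2$, $g = X^2$, $h = Y^2$, which have no common root. Dehomogenizing at $Y = 1$, both roots $\alpha = \pm 1$ of $f$ give $bg(\alpha) + ch(\alpha) = b + c$, so the intersection of the $U_\alpha$ is $\{b + c \neq 0\}$, and there the resultant is nonzero:

```python
from sympy import symbols, resultant, expand

X, b, c = symbols('X b c')

# Dehomogenized at Y = 1: f = X^2 - Y^2, g = X^2, h = Y^2
f = X**2 - 1
g = X**2
h = 1

# The resultant of f and b*g + c*h, as a polynomial in b and c
R = expand(resultant(f, b*g + c*h, X))
print(R)  # b**2 + 2*b*c + c**2, i.e. (b + c)**2: nonzero iff b + c != 0
```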