Zariski closure of $\{ (z, \bar z) : z\in\Bbb C \}$


The question I am interested in is: If a polynomial $p\in\Bbb C[x,y]$ has the property $\forall z\in\Bbb C: p(z,\bar z)=0$, is $p$ automatically the zero polynomial? Equivalent formulations: Is every nonzero polynomial nonzero at some point $(z,\bar z)$? Is the Zariski closure of $\{(z,\bar z):z\in\Bbb C\}$ the entire space $\Bbb C^2$?

I am almost certain that the answer must be yes, but I can't think of a proof.

On BEST ANSWER

Set $q(x,y) = p(x+iy,x-iy)$, so $p(x,y) =q\left(\frac{1}{2}(x+y),\frac{1}{2i}(x-y)\right)$ in $\mathbf C[x,y]$. Therefore $p = 0$ if and only if $q = 0$ in $\mathbf C[x,y]$.
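As a quick numerical sanity check (not part of the proof), the two substitutions really do invert each other: for any sample polynomial $p$, composing $q(x,y) = p(x+iy,x-iy)$ with $\left(\frac{1}{2}(x+y),\frac{1}{2i}(x-y)\right)$ recovers $p$. The particular $p$ below is an arbitrary choice for illustration.

```python
# Sanity check: q(x, y) = p(x + iy, x - iy) composed with the inverse
# substitution recovers p.  The polynomial p here is an arbitrary example.

def p(x, y):
    return x**2 * y + 3*x - y

def q(x, y):
    return p(x + 1j*y, x - 1j*y)

# p(x, y) == q((x + y)/2, (x - y)/(2i)) at sample complex points
for x, y in [(1 + 2j, -0.5j), (3.0, 4.0), (-2j, 1 + 1j)]:
    assert abs(p(x, y) - q((x + y)/2, (x - y)/(2j))) < 1e-9
```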

For $(a,b) \in \mathbf R^2$, let $z = a+bi$. Then $q(a,b) = p(a+bi,a-bi) = p(z,\overline{z}) = 0$, so $q$ vanishes on all of $\mathbf R^2$. Break up $q(x,y)$ into real and imaginary parts as a polynomial: $q(x,y) = u(x,y) + iv(x,y)$ where $u(x,y)$ and $v(x,y)$ are in $\mathbf R[x,y]$. Then for all $(a,b) \in \mathbf R^2$, $$ 0 = q(a,b) = u(a,b) + iv(a,b), $$ so $u(a,b) = 0$ and $v(a,b) = 0$. Thus $u(x,y)$ and $v(x,y)$ in $\mathbf R[x,y]$ each vanish on all of $\mathbf R^2$.

It is a standard result for an infinite field $F$ and positive integer $n$ that the only polynomial in $F[x_1,\ldots,x_n]$ that vanishes on all of $F^n$ is the zero polynomial (proof uses induction on $n$). Taking $F = \mathbf R$ and $n = 2$, we conclude that $u(x,y) = 0$ and $v(x,y) = 0$. Thus $q(x,y) = u(x,y) + iv(x,y) = 0$, so $p(x,y) = 0$. That answers the original question. And a similar argument (with one additional part) yields the following more general result, whose proof takes up the rest of this answer.
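The base case $n = 1$ of that standard result is just "a nonzero polynomial of degree $d$ has at most $d$ roots", which admits a finite spot-check: a nonzero polynomial of degree at most $3$ cannot vanish at all four of the points $0,1,2,3$. The sketch below (with helper names of my own choosing) verifies this exhaustively for small integer coefficients.

```python
# Finite illustration of the lemma for n = 1: a nonzero polynomial of
# degree <= 3 has at most 3 roots, so it cannot vanish at 0, 1, 2, and 3.
from itertools import product

def eval_poly(coeffs, x):
    """Horner evaluation of sum(coeffs[i] * x**i)."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Exhaustively check all nonzero polynomials of degree <= 3 with
# coefficients in {-2, ..., 2}: each is nonzero somewhere on {0, 1, 2, 3}.
for coeffs in product(range(-2, 3), repeat=4):
    if any(coeffs):
        assert any(eval_poly(coeffs, x) != 0 for x in range(4))
```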

Theorem. For an infinite field $K$ and Galois extension $L/K$ with degree $n$, if we enumerate the elements of ${\rm Gal}(L/K)$ as $\{\sigma_1,\ldots,\sigma_n\}$ and a polynomial $p(x_1,\ldots,x_n) \in L[x_1,\ldots,x_n]$ satisfies $p(\sigma_1(\alpha),\ldots,\sigma_n(\alpha)) = 0$ for all $\alpha \in L$, then $p = 0$ as a polynomial.

The original question is the special case $L = \mathbf C$ and $K = \mathbf R$, where the Galois group consists of the identity and complex conjugation on $\mathbf C$.

The proof of the special case above guides us to a proof of the more general case. Pick a $K$-basis $e_1,\ldots,e_n$ of $L$. Each $\alpha$ in $L$ has the form $\sum_{j=1}^n c_je_j$, so $\sigma_i(\alpha) = \sum_{j=1}^n \sigma_i(e_j)c_j$. That suggests introducing the new polynomial $$ q(x_1,\ldots,x_n) = p\left(\sum_{j=1}^n \sigma_1(e_j)x_j, \ldots, \sum_{j=1}^n \sigma_n(e_j)x_j\right) \in L[x_1,\ldots,x_n], $$ so $q(c_1,\ldots,c_n) = p(\ldots,\sigma_j(\alpha),\ldots) = 0$ for all $(c_1,\ldots,c_n) \in K^n$.
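To see the construction of $q$ in action beyond $\mathbf C/\mathbf R$, here is a numerical check in the quadratic example $L = \mathbf Q(\sqrt 2)$, $K = \mathbf Q$, with basis $e_1 = 1$, $e_2 = \sqrt 2$, $\sigma_1$ the identity, and $\sigma_2$ sending $\sqrt 2$ to $-\sqrt 2$ (this example, and the test polynomial, are my own choices for illustration):

```python
# For L = Q(sqrt(2)), K = Q, basis {1, sqrt(2)}: q(c1, c2) as defined in the
# answer equals p(sigma1(alpha), sigma2(alpha)) for alpha = c1 + c2*sqrt(2).
import math

S = math.sqrt(2)

def p(x1, x2):
    # arbitrary test polynomial in L[x1, x2]
    return x1 * x2 + 2 * x1 - x2 ** 2

def q(c1, c2):
    # q(x1, x2) = p(sigma1(e1) x1 + sigma1(e2) x2, sigma2(e1) x1 + sigma2(e2) x2)
    return p(c1 + S * c2, c1 - S * c2)

for c1, c2 in [(1, 0), (0, 1), (3, -2), (-1, 5)]:
    alpha, conj = c1 + c2 * S, c1 - c2 * S   # alpha and sigma2(alpha)
    assert abs(q(c1, c2) - p(alpha, conj)) < 1e-9
```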

Note. For $L = \mathbf C$, $K = \mathbf R$, $\sigma_1$ being the identity, $\sigma_2$ being complex conjugation, and using the $\mathbf R$-basis $\{1,i\}$, this construction of $q(x_1,x_2)$ from $p(x_1,x_2)$ is $q(x_1,x_2) = p(x_1+x_2i,x_1-x_2i)$, so we have recovered the construction of $q(x,y)$ from $p(x,y)$ in the original setting of the problem.

Since $L = \bigoplus_{j=1}^n Ke_j$, we also have $$ L[x_1,\ldots,x_n] = \bigoplus_{j=1}^n K[x_1,\ldots,x_n]e_j. $$ Breaking up every coefficient of $q(x_1,\ldots,x_n)$ into a $K$-linear combination of $e_1,\ldots,e_n$, we break up $q$ into a $K[x_1,\ldots,x_n]$-linear combination of $e_1,\ldots,e_n$: $$ q(x_1,\ldots,x_n) = u_1(x_1,\ldots,x_n)e_1 + \cdots + u_n(x_1,\ldots,x_n)e_n $$ where $u_1,\ldots,u_n \in K[x_1,\ldots,x_n]$. Then for all $(c_1,\ldots,c_n) \in K^n$, $$ 0 = q(c_1,\ldots,c_n) = \sum_{j=1}^n u_j(c_1,\ldots,c_n)e_j $$ in $L$, so $u_j(c_1,\ldots,c_n) = 0$ for $j=1,\ldots,n$. Thus each $u_j$ in $K[x_1,\ldots,x_n]$ vanishes on all of $K^n$. Since $K$ is an infinite field, we conclude that each $u_j$ is the zero polynomial in $K[x_1,\ldots,x_n]$. Thus $q(x_1,\ldots,x_n) = \sum_{j=1}^n u_j(x_1,\ldots,x_n)e_j = 0$, so $$ p\left(\sum_{j=1}^n \sigma_1(e_j)x_j, \ldots, \sum_{j=1}^n \sigma_n(e_j)x_j\right) = q(x_1,\ldots,x_n) = 0. $$ We want to derive from this that $p(x_1,\ldots,x_n) = 0$.

Note. For $L = \mathbf C$, $K = \mathbf R$, $\sigma_1$ being the identity, $\sigma_2$ being complex conjugation, and using the $\mathbf R$-basis $\{1,i\}$, we'd have $q(x_1,x_2) = p(x_1+x_2i,x_1-x_2i) = 0$ and want to conclude that $p(x_1,x_2) = 0$, which is easy since $p(x_1,x_2) = q\left(\frac{1}{2}(x_1+x_2),\frac{1}{2i}(x_1-x_2)\right)$. Instead we could argue that $\binom{x_1+x_2i}{x_1-x_2i} = (\begin{smallmatrix}1&i\\1&-i\end{smallmatrix})\binom{x_1}{x_2}$ and that $2 \times 2$ matrix has nonzero determinant, so an invertible linear change of variables turns $p(x_1+x_2i,x_1-x_2i) = 0$ into $p(x_1,x_2) = 0$. That linear algebra method is the approach we'll take in the general case. This is the "one additional part" referred to before the statement of the theorem.
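The linear-algebra step in that note can be checked directly: the matrix $\left(\begin{smallmatrix}1&i\\1&-i\end{smallmatrix}\right)$ has determinant $-2i \ne 0$, and the corresponding change of variables round-trips exactly.

```python
# The 2x2 change of variables from the note: M = [[1, i], [1, -i]] has
# det(M) = -2i != 0, so (x1, x2) -> (x1 + i*x2, x1 - i*x2) is invertible.
M = [[1, 1j], [1, -1j]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert det == -2j  # nonzero, so the substitution is invertible

def forward(x1, x2):
    return x1 + 1j * x2, x1 - 1j * x2

def inverse(y1, y2):
    return (y1 + y2) / 2, (y1 - y2) / (2j)

# Round-trip at sample points:
for x1, x2 in [(1.0, 2.0), (0.5 - 1j, 3j)]:
    y1, y2 = forward(x1, x2)
    z1, z2 = inverse(y1, y2)
    assert abs(z1 - x1) < 1e-12 and abs(z2 - x2) < 1e-12
```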

We have $$ \begin{pmatrix} \sum_{j=1}^n\sigma_1(e_j)x_j\\ \vdots \\ \sum_{j=1}^n\sigma_n(e_j)x_j \end{pmatrix} = (\sigma_i(e_j)) \begin{pmatrix} x_1\\ \vdots \\ x_n \end{pmatrix}, $$ so it suffices to show the $n \times n$ matrix $(\sigma_i(e_j))$ has a nonzero determinant. We'll present this part in two steps.

Step 1: if that determinant is nonzero for one $K$-basis of $L$ then it is nonzero for all $K$-bases of $L$.

For two $K$-bases $e_1,\ldots,e_n$ and $e_1',\ldots,e_n'$ of $L$, let $e_k' = \sum_{j=1}^n a_{jk}e_j$ where $a_{jk} \in K$. The change-of-basis matrix $(a_{jk})$ has nonzero determinant and we have the equation of $n \times n$ matrices $$ (\sigma_i(e_k')) = (\sigma_i(e_j))(a_{jk}), $$ so $\det(\sigma_i(e_k')) \not= 0$ if and only if $\det(\sigma_i(e_j)) \not= 0$.
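Here is Step 1 checked numerically for $\mathbf C/\mathbf R$ with the bases $\{1,i\}$ and $\{1,1+i\}$ (my choice of second basis): the matrices $(\sigma_i(e_j))$ and $(\sigma_i(e_k'))$ satisfy the displayed identity, so their determinants are nonzero together.

```python
# Step 1 on C/R: with bases {1, i} and {1, 1+i}, verify
# (sigma_i(e'_k)) = (sigma_i(e_j)) (a_jk), where (a_jk) is the
# change-of-basis matrix with nonzero determinant.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

sigmas = [lambda z: z, lambda z: z.conjugate()]  # identity, conjugation
e  = [1, 1j]          # first basis
ep = [1, 1 + 1j]      # second basis; e'_1 = e_1, e'_2 = e_1 + e_2
A  = [[1, 1], [0, 1]] # change-of-basis matrix (a_jk), det = 1 != 0

Me  = [[s(b) for b in e]  for s in sigmas]   # (sigma_i(e_j))
Mep = [[s(b) for b in ep] for s in sigmas]   # (sigma_i(e'_k))

assert Mep == matmul(Me, A)                  # the matrix identity from Step 1
assert det2(Mep) == det2(Me) * det2(A)       # so the determinants vanish together
```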

Step 2: the determinant of $(\sigma_i(e_j))$ is nonzero for some $K$-basis of $L$.

Since $L/K$ is separable, by the primitive element theorem $L = K(\gamma)$ for some $\gamma$. Use the power basis $1,\gamma, \ldots, \gamma^{n-1}$, so $e_j = \gamma^{j-1}$ for $j=1,\ldots,n$. Then $$ (\sigma_i(e_j)) = (\sigma_i(\gamma)^{j-1}), $$ which is a Vandermonde matrix (or the transpose of a Vandermonde matrix, depending on how you define such matrices). So the formula for determinants of Vandermonde matrices tells us $$ \det(\sigma_i(\gamma)^{j-1}) = \prod_{i < j} (\sigma_j(\gamma) - \sigma_i(\gamma)), $$ which is nonzero since $\sigma_i(\gamma) \not= \sigma_j(\gamma)$ for distinct $i$ and $j$ thanks to the separability of $L/K$. This completes the proof that $p(x_1,\ldots,x_n) = 0$.
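As a final numerical check of Step 2, for any distinct values $t_1,\ldots,t_n$ (standing in for the $\sigma_i(\gamma)$) the matrix $(t_i^{\,j-1})$ has determinant $\prod_{i<j}(t_j - t_i) \ne 0$. Below I use the three cube roots of $2$ purely as a convenient set of distinct complex numbers to test the formula (the extension $\mathbf Q(2^{1/3})/\mathbf Q$ is not itself Galois; the values only exercise the determinant identity).

```python
# Vandermonde check: det(t_i ** (j-1)) == prod_{i<j} (t_j - t_i),
# nonzero whenever the t_i are distinct.
from itertools import combinations, permutations

def det(M):
    """Leibniz-formula determinant (fine for small n)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for a, b in combinations(range(n), 2) if perm[a] > perm[b])
        sign = -1 if inv % 2 else 1
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

# three distinct complex numbers: the cube roots of 2
ts = [2 ** (1/3),
      2 ** (1/3) * complex(-0.5, 3 ** 0.5 / 2),
      2 ** (1/3) * complex(-0.5, -(3 ** 0.5) / 2)]
V = [[t ** j for j in range(len(ts))] for t in ts]   # Vandermonde matrix

vandermonde = 1
for i, j in combinations(range(len(ts)), 2):
    vandermonde *= ts[j] - ts[i]

assert abs(det(V) - vandermonde) < 1e-9   # the Vandermonde formula
assert abs(vandermonde) > 0.1             # nonzero: the values are distinct
```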