How do we know that the only polynomials $f$ satisfying $f(\lambda x_0,\ldots,\lambda x_n) = f( x_0,\ldots,x_n) $ are the constant polynomials?


In algebraic geometry one proves that the affine coordinate ring of $\mathbb{P}^n$ is trivial by using that the only polynomials $f$ satisfying $f(\lambda x_0,\ldots,\lambda x_n) = f(x_0,\ldots,x_n)$ are the constant polynomials. Geometrically this is obvious, at least for polynomials over $\mathbb{R}$, but even in that case I'm not sure how to prove it.

Question. How do we know that the only polynomials $f$ satisfying $f(\lambda x_0,\ldots,\lambda x_n) = f( x_0,\ldots,x_n) $ are the constant polynomials?

Remark. Robert Lewis solved this by assuming the identity holds for all $\lambda$. However, AFAIK, in the context of algebraic geometry we only know that it holds for all non-zero $\lambda$. Perhaps it's possible to adapt his proof somehow?

BEST ANSWER

I am suspicious about the premise of your question: one cannot prove that the coordinate ring of $\mathbb{P}^{n}_{k}$ is $k$ this way without further assumptions on $k$, since otherwise it is not true that the only polynomials $f \in k[X_{0}, \ldots, X_{n}]$ satisfying $f(\lambda X_{0}, \ldots, \lambda X_{n}) = f(X_{0}, \ldots, X_{n})$ for all $\lambda \in k^{\times}$ are the constant polynomials. Indeed, suppose $k$ is the finite field with $p^{n}$ elements, and take (e.g.) $f = 1+X_{0}^{p^{n}-1}$. In this case, $$f(\lambda X_{0}, \ldots, \lambda X_{n}) = 1+(\lambda X_{0})^{p^{n}-1} = 1+\lambda^{p^{n}-1}X_{0}^{p^{n}-1} = 1 + X_{0}^{p^{n}-1},$$ for all $\lambda \in k^{\times}$, since every such element satisfies $\lambda^{p^{n}-1} = 1$.
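As a quick sanity check of this counterexample, here is a sketch in Python over the prime field $\mathrm{GF}(5)$ (the case $p^{n} = 5$ of the construction above, with two variables for concreteness), where $f = 1 + X_{0}^{4}$ and every nonzero $\lambda$ satisfies $\lambda^{4} = 1$:

```python
# Sketch: verify that f = 1 + x0^(p-1) over GF(p), p = 5, is invariant
# under scaling by every nonzero lambda, even though f is nonconstant.
p = 5

def f(x0, x1):
    # f = 1 + x0^(p-1), computed mod p; the second variable is unused,
    # matching the answer's example, which involves only X0.
    return (1 + pow(x0, p - 1, p)) % p

# Check invariance for every nonzero lambda and every point of GF(5)^2.
assert all(
    f((lam * x0) % p, (lam * x1) % p) == f(x0, x1)
    for lam in range(1, p)
    for x0 in range(p)
    for x1 in range(p)
)

# Yet f is not constant: f(0, 0) = 1 while f(1, 0) = 2.
print(f(0, 0), f(1, 0))
```

The check succeeds precisely because $\lambda^{p-1} = 1$ for all $\lambda \in \mathrm{GF}(p)^{\times}$, by Fermat's little theorem.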

If $k$ is not finite, then your claim is true. The way to see it is this: suppose $f$ is nonconstant. The ring $k[X_{0}, \ldots, X_{n}]$ has a natural grading on it, namely that of monomial degree; let $g$ be the homogeneous summand of $f$ of greatest degree $d > 0$. It is not hard to see that if $g$ is homogeneous of degree $d$, then $g(\lambda X_{0}, \ldots, \lambda X_{n}) = \lambda^{d}g(X_{0}, \ldots, X_{n})$. Hence, since multiplying each variable argument by $\lambda$ does not change the degree of any monomial term, we see that $f(\lambda X_{0}, \ldots, \lambda X_{n}) = f(X_{0}, \ldots, X_{n})$ can hold only if $\lambda^{d} = 1$. Since there are only finitely many $d$th roots of unity in $k$, and $k$ is infinite, $f(\lambda X_{0}, \ldots, \lambda X_{n}) = f(X_{0}, \ldots, X_{n})$ can hold for at most finitely many nonzero elements of $k$. (You can see that there is not necessarily any contradiction if $k$ is finite via the example above!)
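The key scaling property of a homogeneous summand can be illustrated numerically. Below is a sketch with a hypothetical homogeneous $g(x_0, x_1) = x_0^2 x_1 + 3x_1^3$ of degree $d = 3$ (my own example, not from the answer), checking $g(\lambda x_0, \lambda x_1) = \lambda^d\, g(x_0, x_1)$ at a few integer points:

```python
# Sketch: a homogeneous polynomial g of degree d satisfies
# g(lam * x0, lam * x1) == lam**d * g(x0, x1), since every monomial
# x0^a * x1^b with a + b = d picks up exactly lam**d under scaling.
def g(x0, x1):
    # Homogeneous of degree 3: both monomials have total degree 3.
    return x0**2 * x1 + 3 * x1**3

d = 3
for lam in (2, -1, 5):
    for x0, x1 in [(1, 2), (3, -4), (0, 7)]:
        assert g(lam * x0, lam * x1) == lam**d * g(x0, x1)
```

Comparing top-degree summands on both sides of $f(\lambda X) = f(X)$ then forces $\lambda^d g = g$, i.e. $\lambda^d = 1$, which is the step used above.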

SECOND ANSWER

If

$f(\lambda x_0, \lambda x_1, \ldots, \lambda x_n) = f(x_0, x_1, \ldots, x_n) \tag 1$

for all $\lambda$ in some (base) field, then taking $\lambda = 0$ with $(x_0, x_1, \ldots, x_n)$ arbitrary yields

$f(x_0, x_1, \ldots, x_n) = f(0 \cdot x_0, 0 \cdot x_1, \ldots, 0 \cdot x_n) = f(0, 0, \ldots, 0), \tag 2$

which looks pretty constant to me.
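This argument really does use $\lambda = 0$, which connects back to the remark in the question. A sketch in Python: over $\mathrm{GF}(5)$, the polynomial $f = 1 + x_0^{4}$ from the accepted answer is invariant under every nonzero $\lambda$ yet nonconstant, and invariance visibly fails at $\lambda = 0$:

```python
# Sketch: the lambda = 0 step is essential. Over GF(5), f = 1 + x0^4
# is invariant under every nonzero lambda, yet f is nonconstant --
# so the identity f(lam * x) == f(x) cannot extend to lam = 0.
p = 5

def f(x0):
    return (1 + pow(x0, p - 1, p)) % p

# Invariant under all nonzero scalars:
assert all(f((lam * x0) % p) == f(x0) for lam in range(1, p) for x0 in range(p))

# But f(0 * 1) = f(0) = 1, while f(1) = 2: the lambda = 0 identity fails.
assert f(0 * 1) != f(1)
```

So over a finite field, restricting to nonzero $\lambda$ genuinely loses the conclusion, whereas over an infinite field the accepted answer's degree argument recovers it without ever using $\lambda = 0$.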