Roots to power series in $\mathbb{Z}_p$


The background of this problem stems from the fact that I am currently learning about Chabauty's Method but for the purpose of this question I do not think it is particularly important.

(A special case of Strassman's Theorem.) Suppose $\Lambda(X)=a_0+a_1X+a_2X^2+\cdots \in\mathbb{Z}_p[[X]]$ with $a_j\to 0$ as $j\to\infty$. If there exists $N$ such that $|a_N|_p\geq |a_i|_p$ for all $i$ and $|a_N|_p>|a_i|_p$ for all $i>N$, then $\Lambda(X)=0$ has at most $N$ solutions in $\mathbb{Z}_p$.
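To make the statement concrete, here is a small illustrative computation (the function name and the example series are mine, not part of the theorem). Since $|a|_p = p^{-v_p(a)}$, the index $N$ in the theorem is the largest index at which the $p$-adic valuation of the coefficients attains its minimum:

```python
def strassman_N(vals):
    """Given the p-adic valuations v_p(a_0), v_p(a_1), ... of the coefficients
    (assumed to eventually increase, since a_j -> 0), return the largest index N
    at which the minimal valuation (i.e. the maximal |a_i|_p) is attained.
    This N is the Strassman bound on the number of roots in Z_p."""
    vmin = min(vals)
    return max(i for i, v in enumerate(vals) if v == vmin)

# Lambda(X) = 5 + X + 5 X^2 over Z_5 has coefficient valuations [1, 0, 1],
# so N = 1 and Lambda has at most one root in Z_5.
print(strassman_N([1, 0, 1]))  # -> 1
```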

I want to confirm what we mean by a root of $\Lambda(X)$ in this case. If $\alpha$ is a root, do we mean that the $p$-adic norm of the partial sums of $\Lambda(\alpha)$ tends to $0$?

Moreover, while it is good to know an upper bound on the number of roots, I was wondering whether there is an algorithmic way of actually computing those roots.

Best answer

Since $a_j \to 0$ as $j \to \infty$ and $|x|_p \le 1$ for $x \in \newcommand{\ZZ}{\mathbb{Z}}\ZZ_p$, we have $a_j x^j \to 0$, so $\Lambda$ converges on all of $\ZZ_p$, defining a function $\ZZ_p \to \ZZ_p$. A root of $\Lambda$ is simply an element $x \in \ZZ_p$ such that $\Lambda(x) = 0$, the same way roots are defined for any function.
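Equivalently, $\Lambda(\alpha) = 0$ means the $p$-adic valuation of the (truncated) evaluation grows without bound as the approximation of $\alpha$ improves. A quick sketch with a hypothetical example of mine, $\Lambda(X) = 1 + X^2$ over $\ZZ_5$ (a power series whose coefficients are eventually zero), whose roots are the square roots of $-1$:

```python
def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 5
# Successive approximations of a square root of -1 mod 5, 25, 125, 625.
approx = [2, 7, 57, 182]
for a in approx:
    # The valuation of Lambda(a) = 1 + a^2 grows with the precision of a,
    # i.e. |Lambda(a)|_5 -> 0: this is what "a is a root" means in the limit.
    print(a, vp(1 + a * a, p))
```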

To approximate roots of $p$-adic power series to arbitrary precision, we can iteratively lift using Hensel's lemma, which can be thought of as a $p$-adic analogue to Newton's method. Keith Conrad's notes on Hensel's lemma are a good reference; Theorem 8.2 in these notes gives a suitable version for power series:

Theorem 8.2. Let $f(X)$ be a power series with coefficients in $\ZZ_p$ that converges on $\ZZ_p$. If an $a \in \ZZ_p$ satisfies $\lvert f(a) \rvert_p < \lvert f'(a) \rvert_p^2$ then there is a unique $\alpha \in \ZZ_p$ such that $f(\alpha) = 0$ in $\ZZ_p$ and $\lvert \alpha - a \rvert_p < \lvert f'(a) \rvert_p$.

The proof is iterative (lifting inductively to construct solutions modulo higher and higher powers of $p$), so it translates directly into an algorithm for finding simple roots of $p$-adic power series.
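A minimal sketch of this lifting procedure, assuming $f'(a)$ is a unit in $\ZZ_p$ (the general case with $|f(a)|_p < |f'(a)|_p^2 < 1$ needs more careful precision tracking); the function `hensel_lift` and the example $f(X) = X^2 + 1$ over $\ZZ_5$ are my own illustration:

```python
def hensel_lift(f, df, a, p, k):
    """Lift an approximate root a of f modulo p to a root modulo p^k
    via the p-adic Newton iteration x -> x - f(x)/f'(x).
    Assumes df(a) is a unit mod p, so the modular inverse below exists."""
    m = p
    x = a % p
    while m < p ** k:
        m = m * m  # precision doubles each step (quadratic convergence)
        # pow(n, -1, m) (Python 3.8+) computes the modular inverse of n mod m.
        x = (x - f(x) * pow(df(x), -1, m)) % m
    return x % (p ** k)

# Example: lift the root 2 of X^2 + 1 modulo 5 to a square root of -1
# in Z_5, correct modulo 5^4 = 625.
root = hensel_lift(lambda x: x * x + 1, lambda x: 2 * x, 2, 5, 4)
print(root)                        # -> 182
print((root * root + 1) % 5 ** 4)  # -> 0
```

Each pass through the loop roughly doubles the number of correct $p$-adic digits, mirroring the quadratic convergence of Newton's method over $\mathbb{R}$.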

However, just as with Newton's method, this runs into problems if the power series appears to have a multiple root, because then arbitrarily small perturbations in the power series (possibly outside whatever level of precision we're currently working with) could change whether there's a root. An analogous situation over the real numbers is if we have a function that looks like $f(x) = x^2$ and we want to determine the roots numerically: maybe it really is $x^2$ and there's a double root, but maybe it's actually $x^2 \pm 10^{-1000000}$ and there are either no roots or two distinct but very close together roots. How one handles such numerically unstable situations generally depends on the specifics of what function you're working with and exactly what information about it you know.