When does the sequence $a_{n+2}=a_{n+1}+\frac{a_n}{n^2}$ converge in the p-adics?


For what $p$-adic numbers $a_1,a_2$ does the sequence defined by the recurrence $a_{n+2}=a_{n+1}+\frac{a_n}{n^2}$ converge? This is inspired by a question originally asked about real numbers; this is just for fun, to see "what if we change the field?".

If it does converge, then because the Cauchy criterion in a complete ultrametric space is $|a_{n+2}-a_{n+1}|_p\to 0$, this means $\left|\frac{a_n}{n^2}\right|_p\to 0$; and since $|n^2|_p\le 1$, we get $|a_n|_p = |n^2|_p\left|\frac{a_n}{n^2}\right|_p \le \left|\frac{a_n}{n^2}\right|_p$, so $a_n \to 0$ as well. As a trivial case, $a_1=a_2=0$ works, and I suspect this is the only time it converges.
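Since every term of the sequence is an explicit rational number, the criterion can be explored exactly with Python's `fractions.Fraction`. This is just a sketch for experimenting; the helper names `vp` and `sequence` are mine, not from the question.

```python
from fractions import Fraction

def vp(x: Fraction, p: int):
    """p-adic valuation v_p(x) of a rational x; by convention v_p(0) = +infinity."""
    if x == 0:
        return float("inf")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def sequence(a1, a2, N):
    """First N terms of a_{n+2} = a_{n+1} + a_n / n^2, computed exactly in Q."""
    a = [Fraction(a1), Fraction(a2)]
    for n in range(1, N - 1):
        a.append(a[n] + a[n - 1] / Fraction(n * n))
    return a

# Convergence in Q_p forces v_p(a_{n+2} - a_{n+1}) = v_p(a_n / n^2) -> +infinity;
# print the valuations of the increments for a sample nonzero start and p = 5.
a = sequence(1, 1, 12)
for n in range(1, 10):
    print(n, vp(a[n - 1] / Fraction(n * n), 5))
```

Running this for various starts and primes is how I'd look for the "miraculous" fast decay the question asks about.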

Since $\left|\frac{1}{n^2}\right|_p$ can become arbitrarily large, it is competing against the $a_n$ terms, which get arbitrarily small, but I can't seem to find a way to force a contradiction, i.e. to rule out that the $a_n$ terms miraculously get small fast enough for the sequence to converge.

Some partial results: if two consecutive terms are $0$, then all the terms are $0$ (going forward directly, and going backward because $a_n = n^2(a_{n+2}-a_{n+1})$). So there is no alternative way to end up with an eventually all-zero sequence. We also have $-a_2 = \sum_{n=1}^\infty \frac{a_n}{n^2}$, which I unfortunately don't see a way to make useful.
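For completeness, the identity for $a_2$ is just telescoping:

$$a_{N+2} - a_2 = \sum_{n=1}^{N}\left(a_{n+2} - a_{n+1}\right) = \sum_{n=1}^{N} \frac{a_n}{n^2}$$

If the sequence converges, then $a_{N+2}\to 0$ as shown above, so letting $N\to\infty$ gives $-a_2 = \sum_{n=1}^\infty \frac{a_n}{n^2}$; note the series automatically converges in $\mathbb{Q}_p$ because its terms tend to $0$.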


On BEST ANSWER

Since $|a_{n+2}|_p\leq\max\left(|a_{n+1}|_p,\left|\frac{a_n}{n^2}\right|_p\right)$, with equality when the two norms differ, either $|a_n|_p$ or $|a_{n+1}|_p$ must be at least $|a_1|_p$. Notice that the factor $\frac1{n^2}$ can only increase the norm, since $\left|\frac1{n^2}\right|_p\geq 1$.

Suppose the sequence converges; this means that $\left|\frac{a_n}{n^2}\right|_p$ approaches $0$. But since $\left|\frac{a_n}{n^2}\right|_p=\left|a_n\right|_p\left|\frac1{n^2}\right|_p\geq|a_n|_p$, with equality whenever $(n,p)=1$, this means $|a_n|_p$ approaches $0$ along the subsequence of $n$ coprime to $p$, which contradicts the first statement.
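The subsequence step can be sanity-checked exactly over $\mathbb{Q}$: when $p\nmid n$ we have $v_p(1/n^2)=0$, so dividing by $n^2$ doesn't change the valuation at all. A small Python sketch (the helper name `vp` is mine):

```python
from fractions import Fraction
from math import gcd

def vp(x: Fraction, p: int):
    """p-adic valuation of a rational; v_p(0) = +infinity."""
    if x == 0:
        return float("inf")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# Run the recurrence exactly and confirm v_p(a_n / n^2) = v_p(a_n) for p coprime to n.
p = 5
a = [Fraction(1), Fraction(1)]
for n in range(1, 30):
    a.append(a[n] + a[n - 1] / Fraction(n * n))
for n in range(1, 30):
    if gcd(n, p) == 1:
        assert vp(a[n - 1] / Fraction(n * n), p) == vp(a[n - 1], p)
```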


Unfortunately I wasn't able to get a complete answer, but I was able to show that the solutions span at most a one-dimensional linear space.

First let's show that any $\mathbb{Q}_p$-linear combination of solutions is also a solution. Suppose $(a_n)_{n \ge 1}$ and $(b_n)_{n \ge 1}$ are solutions and $k \in \mathbb{Q}_p$; then $c_n=k a_n+b_n$ is a solution,

$$c_{n+2} = k a_{n+2}+b_{n+2} = k(a_{n+1}+\frac{a_n}{n^2})+ b_{n+1}+\frac{b_n}{n^2} = ka_{n+1}+b_{n+1} + \frac{ka_n+b_n}{n^2} = c_{n+1}+\frac{c_n}{n^2}$$

This means that whenever we have a nontrivial solution, we have an entire subspace of solutions, and since the first two terms determine all the others, there is at most a two-dimensional linear space of solutions. Now we can write the recurrence in terms of matrices that depend on $n$:

$$\begin{pmatrix}0 & 1\\ \frac{1}{n^2} & 1\end{pmatrix} \begin{pmatrix}a_n \\ a_{n+1}\end{pmatrix} = \begin{pmatrix}a_{n+1} \\ a_{n+2}\end{pmatrix}$$

We can then form the matrix product (new factors multiply on the left as $k$ increases),

$$A_n = \prod_{k=1}^n \begin{pmatrix}0 & 1\\ \frac{1}{k^2} & 1\end{pmatrix}$$

Which gives us,

$$A_n\begin{pmatrix}a_1 \\ a_2\end{pmatrix} = \begin{pmatrix}a_{n+1} \\ a_{n+2}\end{pmatrix}$$
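As a sanity check, the matrix form can be verified exactly over $\mathbb{Q}$ with small $2\times 2$ products. This is a sketch; the helper names `matmul` and `A` are mine.

```python
from fractions import Fraction

def matmul(A, B):
    """2x2 matrix product over Fraction."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def A(n):
    """A_n = M_n * M_{n-1} * ... * M_1 with M_k = [[0, 1], [1/k^2, 1]]."""
    P = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    for k in range(1, n + 1):
        M = [[Fraction(0), Fraction(1)], [Fraction(1, k * k), Fraction(1)]]
        P = matmul(M, P)  # new factors on the left
    return P

# Check A_n (a_1, a_2)^T = (a_{n+1}, a_{n+2})^T against the recurrence itself.
a = [Fraction(1), Fraction(2)]          # sample start a_1 = 1, a_2 = 2
for n in range(1, 8):
    a.append(a[n] + a[n - 1] / Fraction(n * n))
n = 5
An = A(n)
v = [An[0][0] * a[0] + An[0][1] * a[1],
     An[1][0] * a[0] + An[1][1] * a[1]]
assert v == [a[n], a[n + 1]]            # a_{n+1}, a_{n+2} in 1-based notation
```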

In order for $\begin{pmatrix}a_1 \\ a_2\end{pmatrix}$ to generate a solution, we need $\lim_{n \to \infty} a_n=0$, as established in the original question, and so we have

$$\lim_{n \to \infty} A_n\begin{pmatrix}a_1 \\ a_2\end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix}$$

Now let's suppose there is a two-dimensional space of solutions; then this limit is $0$ for every choice of $a_1,a_2$. That necessarily means

$$\lim_{n \to \infty}A_n = 0$$

Which further implies that,

$$\lim_{n \to \infty} \det(A_n)=0$$

Now we can look at the determinant,

$$\det(A_n) = \prod_{k=1}^n \det\begin{pmatrix}0 & 1\\ \frac{1}{k^2} & 1\end{pmatrix} = \prod_{k=1}^n \frac{-1}{k^2} = \frac{(-1)^n}{(n!)^2} \not\to 0$$

Since $\left|\frac{(-1)^n}{(n!)^2}\right|_p = p^{2v_p(n!)}$ grows without bound in every $\mathbb{Q}_p$ as $n$ increases, we have a contradiction. So there is at most a one-dimensional space of solutions.
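To see concretely how fast the determinant's norm blows up, Legendre's formula $v_p(n!) = \sum_{i\ge1}\lfloor n/p^i\rfloor$ gives $v_p(\det A_n) = -2v_p(n!) \to -\infty$. A quick exact check (the helper names are mine):

```python
from math import factorial

def vp_int(m: int, p: int) -> int:
    """p-adic valuation of a nonzero integer."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def legendre(n: int, p: int) -> int:
    """v_p(n!) = sum_{i>=1} floor(n / p^i)  (Legendre's formula)."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

p = 5
for n in (10, 50, 250):
    # det(A_n) = (-1)^n / (n!)^2, so v_p(det A_n) = -2 * v_p(n!)
    assert vp_int(factorial(n) ** 2, p) == 2 * legendre(n, p)
    print(n, 2 * legendre(n, p))  # |det(A_n)|_p = p^(this exponent), unbounded
```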


Here are some thoughts on how I'm trying to prove that the only solution is the trivial one $a_1=a_2=0$, though I have been unsuccessful so far.

I considered taking a vector $v$ from the non-solutions (we have just established one exists), projecting it out, and repeating the argument with $A_n \left(I-\frac{vv^T}{v^Tv}\right)$, but the determinant of that matrix is $0$, so the argument breaks down; maybe thinking along these lines can still work in some way.

Another approach, which could side-step most of this reasoning, is to simply find two linearly independent initial vectors for which the sequence diverges; since they span the space, this would also prove there are no solutions except for the $0$ vector. On the other hand, since we know there is a non-solution, it may be possible to show that it has a subsequence within a certain distance of a linearly independent candidate solution, dragging that candidate along to diverge as well.

I welcome any and all ideas or comments!