For which $r \in \mathbb R$ is the series $S(r)$ finite?


For each $r \in \mathbb R$ we let $$L_r := \left\{ \begin{pmatrix} a+cr \\ b+dr \\ c \\d \end{pmatrix} : a,b,c,d \in \mathbb Z \right\}, \quad W:= \left\{ \begin{pmatrix} 0 \\ 0 \\ x_3 \\ x_4 \end{pmatrix} : x_3,x_4 \in \mathbb R \right\} $$ and $$ L^*_r := L_r \setminus W.$$

Now we consider the series $$S(r):= \sum_{x \in L^*_r} E_1(2 (x_1^2+x_2^2)) \exp ( (x_1^2+x_2^2) - (x_3^2+x_4^2)).$$ Here $E_1 : (0, \infty) \to (0,\infty)$ is the exponential integral $$E_1(s):=\int_1^\infty \exp(-ts) \frac{dt}{t} = \int_s^\infty \exp(-t) \frac{dt}{t}.$$

My question: For which $r \in \mathbb R$ is $S(r)$ finite?

My partial answer: It's quite easy to see that $S(r)$ is finite for $r \in \mathbb Q$. But that's all I've found out so far.

Happy: I would already be happy if you could show for a single $r \in \mathbb R \setminus \mathbb Q$ (you pick it!) whether $S(r)$ converges or diverges.

Some facts: We have $$ S(r) = \int_1^\infty \left( \sum_{x \in L^*_r} \exp \left( (1-2t)(x_1^2+x_2^2) - (x_3^2+x_4^2) \right)\right) \frac{dt}{t}.$$ Further, we have $L_r^*=L_r \setminus \{0\}$ iff $r \in \mathbb R \setminus \mathbb Q$.

Estimates for $E_1$:

In case they help: For $s>0$ we have $$\frac{e^{-s}}{s+1} < E_1(s) < \frac{s+1}{s+2} \frac{e^{-s}}{s} < \frac{e^{-s}}{s}.$$
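A quick numerical sanity check of this sandwich may help. The sketch below evaluates $E_1$ via the standard convergent series $E_1(s) = -\gamma - \log s + \sum_{k \ge 1} (-1)^{k+1} s^k/(k \cdot k!)$, which is accurate in double precision for moderate $s$, and verifies the bounds at a few sample points.

```python
import math

def E1(s):
    """Exponential integral E_1(s) via the convergent series
    E_1(s) = -gamma - ln(s) + sum_{k>=1} (-1)^{k+1} s^k / (k * k!),
    adequate in double precision for moderate s > 0 (here s <= 5)."""
    total = -0.5772156649015329 - math.log(s)  # Euler-Mascheroni constant
    term = 1.0
    for k in range(1, 60):
        term *= -s / k      # term is now (-1)^k s^k / k!
        total -= term / k   # adds (-1)^{k+1} s^k / (k * k!)
    return total

# Check e^{-s}/(s+1) < E_1(s) < ((s+1)/(s+2)) e^{-s}/s at sample points.
for s in [0.1, 0.5, 1.0, 2.0, 5.0]:
    lo = math.exp(-s) / (s + 1)
    hi = (s + 1) / (s + 2) * math.exp(-s) / s
    assert lo < E1(s) < hi, (s, lo, E1(s), hi)
```

(For large $s$ this series suffers catastrophic cancellation; a continued-fraction evaluation would be needed there, but the bounds themselves already control that regime.)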

Proof for rational $r$: Since I was asked in the comments to provide a proof for rational $r$, here it is. If $r$ is rational, the set $$\{ a+br : a,b \in \mathbb Z \}$$ is a discrete subset of $\mathbb R$. Hence $$\varepsilon := \min(\{2(x_1^2+x_2^2) : x \in L_r^*\}) > 0$$ exists. Now we make use of $$E_1(s) \le \frac{\exp(-s)}{s}.$$ We have $$S(r) \le \sum_{x \in L^*_r} \frac{\exp(-2 (x_1^2+x_2^2))}{2 (x_1^2+x_2^2)} \exp ( (x_1^2+x_2^2) - (x_3^2+x_4^2))\\ \le \frac{1}{\varepsilon} \sum_{x \in L^*_r} \exp(-(x_1^2+x_2^2+x_3^2+x_4^2))\\ \le \frac{1}{\varepsilon} \sum_{x \in L_r} \exp(-||x||^2) < \infty. $$
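To illustrate this bound numerically (a sketch only, with the hypothetical choice $r = 1/2$ and the lattice truncated to $a,b,c,d \in [-N,N]$): $\varepsilon$ comes out as $1/2$ (attained at $x_1 = 1/2$, $x_2 = 0$), and the dominating Gaussian lattice sum stabilizes quickly as $N$ grows.

```python
import math

def bound_terms(r, N):
    """For the truncation a,b,c,d in [-N,N]: return eps = min 2(x1^2+x2^2)
    over x outside W, and the dominating Gaussian sum over the truncated L_r."""
    eps = float("inf")
    gauss = 0.0
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            for c in range(-N, N + 1):
                for d in range(-N, N + 1):
                    x1, x2 = a + c * r, b + d * r
                    u = x1 * x1 + x2 * x2
                    if u > 0:                      # x not in W
                        eps = min(eps, 2 * u)
                    gauss += math.exp(-(u + c * c + d * d))
    return eps, gauss

eps5, g5 = bound_terms(0.5, 5)
eps6, g6 = bound_terms(0.5, 6)
assert eps5 == eps6 == 0.5     # discreteness: eps > 0, here exactly 1/2
assert abs(g5 - g6) < 1e-6     # the Gaussian lattice sum has converged
```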


There is 1 solution below.


Here are some thoughts. It gets sketchy at the end. Some unimportant details are not handled carefully before then. Maybe you can fill in details and gain something even if something goes awry.

Let $r \in \mathbb{R} - \mathbb{Q}$. Write $||x||_1^2 := x_1^2 + x_2^2$ and $||x||_2^2 := x_3^2 + x_4^2$. Consider two regimes, $||x||_1 > 1$ and $||x||_1 \leq 1$. Your reasoning for $r \in \mathbb{Q}$ basically handles the first regime. It shows that the contribution of the first regime is at most $$\sum_{\substack{x \in L_r \\ ||x||_1 > 1}} \exp(-||x||_1^2 - ||x||_2^2) = \sum_{c, d \in \mathbb{Z}} \exp(-c^2 - d^2) \sum_{a, b \in \mathbb{Z}} \exp(-(a+cr)^2 - (b+dr)^2). $$ For each fixed $c, d$, the sum over $a, b$ is finite and indeed bounded independently of $c, d$. The overall contribution from this regime is hence finite.
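The uniform bound on the inner sum can be checked numerically. It factorizes, so it suffices to bound $f(t) = \sum_a \exp(-(a+t)^2)$; by Poisson summation, $f$ is maximized at integer $t$, where it equals $\sum_n e^{-n^2} \approx 1.7726$. The sketch below (hypothetical choice $r = \sqrt 2$, sum truncated to $|a| \le 20$, which is exact to machine precision for $|t| \le 1/2$) confirms this over many values of $c$.

```python
import math

r = math.sqrt(2)  # hypothetical irrational choice

def f(t):
    """Periodized Gaussian sum f(t) = sum_a exp(-(a+t)^2), truncated."""
    return sum(math.exp(-(a + t) ** 2) for a in range(-20, 21))

# Reduce c*r mod 1 to [-1/2, 1/2] (f is 1-periodic) and compare to f(0).
vals = [f(c * r - round(c * r)) for c in range(200)]
theta0 = f(0.0)                          # maximum of f, attained at t = 0
assert all(v <= theta0 + 1e-12 for v in vals)
assert theta0 < 1.78                     # sum_n exp(-n^2) ~ 1.7726
```

So the inner two-dimensional sum is at most $f(0)^2 \approx 3.14$, uniformly in $c, d$.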

Now consider the second regime. You certainly need better estimates for $E_1(s)$ for $0 < s \leq 1$, since the two you used are not asymptotically sharp. We have $-E_1(s)/\log(s) \to 1$ as $s \to 0^+$, so we may as well replace your $E_1(s)$'s with $-\log(s)$ in this regime. The contribution is hence essentially $$\sum_{\substack{x \in L_r \\ ||x||_1 \leq 1}} -\log(2||x||_1^2) \exp(||x||_1^2 - ||x||_2^2) \approx \sum_{\substack{x \in L_r \\ ||x||_1 \leq 1}} -\log(||x||_1^2) \exp(-||x||_2^2).$$

For each value of $c, d \in \mathbb{Z}$, there is going to be basically one value of $a, b \in \mathbb{Z}$ for which $||x||_1 \leq 1$, roughly $a = -\lfloor cr\rfloor$ and $b = -\lfloor dr\rfloor$. So, think of $a, b$ as functions of $c, d$. Heuristically, I'd expect $x_1, x_2$ to be uniformly distributed mod $1$. In that case, $||x||_1$ is roughly uniformly distributed. So the contribution is essentially $$-\sum_{c, d \in \mathbb{Z}} \exp(-c^2-d^2) \log(R^2)$$ where $R \sim U[0, 1]$. (This is certainly not rigorous.)

Let $s = \sqrt{c^2 + d^2}$ and consider the contributions for $n \leq s \leq n+1$. There should be roughly $\pi(n+1)^2 - \pi n^2 = \pi(2n+1)$, i.e. on the order of $n$, pairs $(c, d)$ with such an $s$. They contribute essentially $-e^{-n^2} \sum_{i=1}^n \log(R_i)$ to the sum, where each $R_i$ is sampled uniformly on $[0, 1]$. Now $-\log(R_i)$ has finite non-zero expected value and variance, so the sum should be roughly normal with mean roughly like $n$ and variance roughly like $n$. These contributions will hence not be able to beat the $e^{-n^2}$ decay rate, so the sum should converge.
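As a crude numerical companion to this heuristic (not a proof): with the hypothetical choice $r = \sqrt 2$, take for each $(c, d) \neq (0, 0)$ the $a, b$ minimizing $||x||_1$ (i.e. the nearest integers to $cr$, $dr$), sum the $-\log(2||x||_1^2)\exp(||x||_1^2 - c^2 - d^2)$ terms, and watch the partial sums stabilize, in line with the claimed convergence.

```python
import math

r = math.sqrt(2)  # hypothetical irrational choice

def regime2(N):
    """Truncated second-regime sum over |c|, |d| <= N, with a, b chosen
    to minimize ||x||_1; (c, d) = (0, 0) contributes no log singularity
    and is skipped here for simplicity."""
    total = 0.0
    for c in range(-N, N + 1):
        for d in range(-N, N + 1):
            if c == 0 and d == 0:
                continue
            x1 = c * r - round(c * r)   # distance of cr to nearest integer
            x2 = d * r - round(d * r)
            u = x1 * x1 + x2 * x2       # ||x||_1^2 > 0 since r is irrational
            total += -math.log(2 * u) * math.exp(u - c * c - d * d)
    return total

s6, s8 = regime2(6), regime2(8)
assert abs(s6 - s8) < 1e-9   # new terms carry exp(-c^2 - d^2) <= exp(-49)
```

Of course this only probes one $r$ and a fixed choice of $a, b$ per $(c, d)$; small values of $||x||_1$ along the continued-fraction denominators of $r$ are exactly where the heuristic could fail for badly approximable directions.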