How can I show that this isn't a counterexample to the Riemann Hypothesis?


I was joking with my friend about disproving the Riemann Hypothesis and I randomly gave him this number: $$ z_? = \frac13 + 5466322356764788987534453212467843688257746873357895.6356798776664333i $$

He plugged it into Mathematica and got $\zeta(z_?)=0.000000000445763 + 0.000000011155i$. This was remarkably close to zero for a completely random choice. To investigate, we ran a small program that computed $\min_{k\in S}|\zeta(z_?+ik\epsilon)|$, where $\epsilon=10^{-8}$ and $S$ was some "large" subset of $\mathbb{Z}$ (namely $[-10^{10},\ 10^{10}]\cap\mathbb{Z}$). After about 10 minutes of computation, Mathematica spat out this:

$$ \zeta(z_!) = 0.\times10^{-33}+0.\times10^{-33}i \\ z_! = \frac13 + 5466322356764788987534453212467843688237746873357395.635356798779776664433i $$

Now I know $z_!$ wouldn't be an exact value for a root, but if it isn't one, it's damn close.

I REFUSE to believe I randomly guessed a counterexample to RH. I simply REFUSE. How can I prove that this isn't a counterexample? Can I use the asymptotics of the zeta function?

Edit: Something that made me suspicious is that the computation only took 10 minutes. I don't know how Mathematica implements the zeta function, but doing 10 billion evaluations of it at an argument that large, with high enough accuracy, seems like it should take far more than 10 minutes. That made me think this is purely a floating-point error, a rounding error, or some other non-analytic artifact.
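One quick sanity check supports that suspicion (a sketch in Python, but the same holds for any IEEE 754 double arithmetic): the imaginary part of $z_?$ has about 52 significant digits, while a 64-bit float carries only 15–16, so neither the fractional digits nor the $ik\epsilon$ offsets can survive the rounding.

```python
# The imaginary part of z_? has ~52 significant digits; a 64-bit float
# (IEEE 754 double) keeps only ~15-16, so one ULP here is around 1e36.
t = 5466322356764788987534453212467843688257746873357895.6356798776664333

# The fractional digits are rounded away entirely:
print(t == 5466322356764788987534453212467843688257746873357895.0)  # True

# Offsets of 1e-8 -- or even 1e10 * 1e-8 = 100 -- vanish on addition,
# so all 2*10^10 "different" arguments collapse to the same double:
print(t + 1e-8 == t)   # True
print(t + 100.0 == t)  # True
```

In other words, unless the argument was handed to Mathematica as an exact high-precision quantity, the ten-minute search may well have been evaluating the very same point $2\times10^{10}$ times.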

Best answer

Most computational algorithms rely on series expansions, which can be implemented with conditional loops and bit-shifting operations. You can write your own program to calculate the Riemann zeta function inside the critical strip via the Dirichlet eta function. $$ \begin{align} & \eta(s)=\left(1-2^{1-s}\right)\zeta(s)=-\sum_{n=1}^{\infty}\frac{(-1)^n}{n^s} \\[4mm] & s=a-i\,b \Rightarrow \frac{1}{n^s}=\frac{n^{i\,b}}{n^a}=\frac{e^{i\,b\log n}}{e^{a\log n}}=\frac{\cos(b\log n)+i\,\sin(b\log n)}{\exp(a\log n)} \Rightarrow \\[4mm] & \color{blue}{-\eta(a-i\,b)=\sum_{n=1}^{\infty}(-1)^n\frac{\cos(b\log n)}{\exp(a\log n)}+i\,\sum_{n=1}^{\infty}(-1)^n\frac{\sin(b\log n)}{\exp(a\log n)}} \\[4mm] & \text{Where}\colon \\ & \qquad \log n=-\log(1/n)=-\log\left(1-(1-1/n)\right)=\sum_{m=1}^{\infty}\frac{(1-1/n)^m}{m}=\sum_{m=1}^{\infty}\frac{(n-1)^m}{m\,n^m} \\ & \qquad \exp(-a\log n)=\sum_{m=0}^{\infty}(-1)^m\frac{(a\log n)^m}{m!} \\ & \qquad \cos(b\log n)=\sum_{m=0}^{\infty}(-1)^m\frac{(b\log n)^{2m}}{(2m)!} \\ & \qquad \sin(b\log n)=\sum_{m=0}^{\infty}(-1)^m\frac{(b\log n)^{2m+1}}{(2m+1)!} \\ \end{align} $$ Separate functions can then be written to compute {Log(x), Exp(x), Cos(x), Sin(x)} using conditional loops that break at a desired error tolerance.
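As a minimal sketch of this approach (in Python, leaning on the built-in complex power rather than hand-rolled series, and only workable at modest heights — at the questioned $|\operatorname{Im} s|\sim 5\times10^{51}$ the alternating series would need astronomically many terms and digits), the $\eta$-to-$\zeta$ relation looks like:

```python
import math

def eta(s, terms=100000):
    # Dirichlet eta: sum_{n>=1} (-1)^(n-1) / n^s; converges for Re(s) > 0,
    # though usefully fast only for moderate s without series acceleration.
    total, sign = 0.0, 1
    for n in range(1, terms + 1):
        total += sign / n ** s   # n ** s also handles complex s
        sign = -sign
    return total

def zeta_via_eta(s, terms=100000):
    # zeta(s) = eta(s) / (1 - 2^(1-s)), valid inside the critical strip
    return eta(s, terms) / (1 - 2 ** (1 - s))

print(zeta_via_eta(2))  # ≈ 1.6449340668 = pi^2 / 6
print(zeta_via_eta(3))  # ≈ 1.2020569032 (Apery's constant)
```

For the heights in the question one would replace the floats with the kind of arbitrary-precision arithmetic described below, since $\cos(b\log n)$ needs $b\log n$ known to well under one unit, i.e. to 50+ digits.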
Likewise, if you wish to reduce floating-point error and increase accuracy, you can redefine the basic operations {Add($x + y$), Sub($x - y$), Mul($x \times y$), Div($x \div y$)} with functions of your own, using bit-shifting operations over concatenated variables. This gives you the ability to calculate with much more than 64 bits of precision.
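A toy version of that idea (a hypothetical fixed-point layer on top of Python's arbitrary-width integers — playing the role of the "concatenated variables" — not a production big-float library):

```python
# Fixed-point arithmetic over arbitrary-width integers: each value x is
# stored as the integer x * 2^SCALE_BITS, giving ~77 decimal digits here.
SCALE_BITS = 256
ONE = 1 << SCALE_BITS

def to_fix(num, den=1):
    # encode the rational num/den as a fixed-point integer
    return (num << SCALE_BITS) // den

def fmul(a, b):
    return (a * b) >> SCALE_BITS   # multiply, then drop the extra scale

def fdiv(a, b):
    return (a << SCALE_BITS) // b  # pre-scale, then integer-divide

third = fdiv(to_fix(1), to_fix(3))
# 3 * (1/3) comes back within one unit in the last (256th binary) place:
print(abs(fmul(to_fix(3), third) - ONE) <= 1)  # True
```

Exp, Log, Cos and Sin can then be built on top of these four operations with the truncated series above.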

By the way, unfortunately, calculating the real and imaginary parts of the Dirichlet eta function with the series above at the questioned value gives results that are not zero. (Sorry!)