From the identity $\ln\zeta(s) = \frac{q(1)}{1^{s}}+\frac{q(2)}{2^{s}}+\frac{q(3)}{3^{s}}+\frac{q(4)}{4^{s}}+\ldots$, where $q(n)=
\begin{cases}
\frac{1}{k} & n = p^{k},\ p \text{ prime},\ k \in \mathbb{N} \\
0 & \text{otherwise}
\end{cases}$:
$$ \begin{pmatrix}
\frac{1}{1^2} & \frac{1}{2^2} & \frac{1}{3^2} & \frac{1}{4^2} & \cdots\\
\frac{1}{1^4} & \frac{1}{2^4} & \frac{1}{3^4} & \frac{1}{4^4}& \cdots\\
\frac{1}{1^6} & \frac{1}{2^6} & \frac{1}{3^6} & \frac{1}{4^6}& \cdots\\
\frac{1}{1^8} & \frac{1}{2^8} & \frac{1}{3^8} & \frac{1}{4^8}& \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix} \cdot
\begin{pmatrix}
q(1) \\
q(2) \\
q(3) \\
q(4) \\
q(5) \\
\vdots
\end{pmatrix}
= \begin{pmatrix}
\ln\zeta(2) \\
\ln\zeta(4) \\
\ln\zeta(6) \\
\ln\zeta(8) \\
\vdots
\end{pmatrix}
$$
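A quick numerical sanity check of the underlying identity, as a small Python sketch (mpmath/sympy; the helper `q` and the cut-off are my own choices): for $s=2$ the truncated sum should agree with $\ln\zeta(2)$ to a few digits, since the tail decays only like $1/N$ up to logarithmic factors.

```python
# Check ln zeta(s) = sum_{n>=2} q(n)/n^s numerically for s = 2.
from mpmath import mp, zeta, log
from sympy import factorint

mp.dps = 30

def q(n):
    """q(n) = 1/k if n = p^k for a single prime p, else 0."""
    if n < 2:
        return 0
    f = factorint(n)
    if len(f) == 1:                 # n is a prime power p^k
        k = next(iter(f.values()))
        return mp.mpf(1) / k
    return 0

s = 2
partial = sum(q(n) / mp.mpf(n) ** s for n in range(2, 10001))
print(partial)          # truncated sum of q(n)/n^s
print(log(zeta(s)))     # should agree to a few digits
```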
Let $A= \begin{pmatrix}
\frac{1}{1^2} & \frac{1}{2^2} & \frac{1}{3^2} & \frac{1}{4^2} & \cdots\\
\frac{1}{1^4} & \frac{1}{2^4} & \frac{1}{3^4} & \frac{1}{4^4}& \cdots\\
\frac{1}{1^6} & \frac{1}{2^6} & \frac{1}{3^6} & \frac{1}{4^6}& \cdots\\
\frac{1}{1^8} & \frac{1}{2^8} & \frac{1}{3^8} & \frac{1}{4^8}& \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}, C=\begin{pmatrix}
\ln\zeta(2) \\
\ln\zeta(4) \\
\ln\zeta(6) \\
\ln\zeta(8) \\
\vdots
\end{pmatrix}$, then
$$\begin{pmatrix}
q(1) \\
q(2) \\
q(3) \\
q(4) \\
q(5) \\
\vdots
\end{pmatrix} = A^{-1} \cdot C$$
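For experimenting with this numerically, here is a hedged sketch (Python/mpmath; the truncation size $N$ and all variable names are my own): it builds the $N\times N$ truncation of $A$, the first $N$ entries of $C$, and solves the finite system. How well the solution entries approximate $q(n)$ as $N$ grows is precisely the delicate point, since the truncation ignores the tail $n>N$ of every row.

```python
# Finite-size stand-in for q = A^{-1} * C.
from mpmath import mp, matrix, zeta, log, lu_solve

mp.dps = 50        # extra precision: the truncated A is badly conditioned
N = 8

# N x N truncation of A: row i uses exponent 2(i+1), column j the integer j+1
A = matrix(N, N)
for i in range(N):
    for j in range(N):
        A[i, j] = mp.mpf(1) / mp.mpf(j + 1) ** (2 * (i + 1))

# first N entries of C
C = matrix([log(zeta(2 * (i + 1))) for i in range(N)])

qapprox = lu_solve(A, C)
for n in range(N):
    print(n + 1, qapprox[n])   # compare with q(1), q(2), ... = 0, 1, 1, 1/2, 1, 0, 1, 1/3
```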
Since $A^{-1}$ is just an inverse Vandermonde matrix, and $C$ is expressible using Bernoulli numbers ($\ln\zeta(2n)=\ln\left|B_{2n}\right|+2n\ln 2\pi-\ln 2-\ln (2n)!$), we could theoretically get $q(n)$ using only Bernoulli numbers, albeit as a more complicated expression. Maybe we have even avoided any use of the Riemann zeta zeros.
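The quoted Bernoulli-number expression for $\ln\zeta(2n)$ is easy to verify numerically; a minimal mpmath sketch:

```python
# Check ln zeta(2n) = ln|B_2n| + 2n*ln(2*pi) - ln 2 - ln((2n)!).
from mpmath import mp, zeta, log, bernoulli, factorial, pi

mp.dps = 30
for n in range(1, 6):
    lhs = log(zeta(2 * n))
    rhs = log(abs(bernoulli(2 * n))) + 2 * n * log(2 * pi) - log(2) - log(factorial(2 * n))
    print(n, lhs - rhs)   # differences should be at the level of the working precision
```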
My question is: has something similar to this idea already been proposed?
P.S. the above is similar to what I posted about a year ago: Get prime number identifying function?
Derive prime-identifying functions from inverse Vandermonde and Bernoulli numbers
I'm expanding on my comments.
Defining the inverse of a Vandermonde matrix of infinite size needs some special care; I don't think that observing the behaviour of the inverses at increasing finite sizes is reliable on its own - it might, however, give a hint about what to expect.
A better path, in my view, is to find an analytical expression for the entries of the inverse (depending on the r(ow) and c(olumn) index), if this is possible, and to use that analytical description for the dot-product with the infinite vector of logarithms of zetas.
One possibility to do this is to look at the triangular and diagonal components of the LDU-decomposition of the Vandermonde matrix; those components have exactly determinable inverses, except that in the inversion of the diagonal matrix expressions like $1/0$ might occur.
Let us take your matrix $A$ and find the three matrices $L$, $D$, $U$ by LDU-decomposition for some finite size. We find the following top-left segments of the matrices $L$, $D$, $U$, one below the other:
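A Python/sympy sketch of this step (exact rational arithmetic; the helper names `truncated_A` and `ldu` are mine), printing the top-left segments for $N=5$:

```python
import sympy as sp

def truncated_A(N):
    """N x N truncation of A: entry (i, j) = 1/(j+1)^(2(i+1))."""
    return sp.Matrix(N, N, lambda i, j: sp.Rational(1, (j + 1) ** (2 * (i + 1))))

def ldu(A):
    """A = L*D*U with L unit lower triangular, D diagonal, U unit upper
    triangular. No pivoting is needed here: the leading minors of the
    truncated A are nonzero, since the nodes 1/n^2 are distinct and nonzero."""
    N = A.rows
    M = A.as_mutable()
    L = sp.eye(N)
    for k in range(N):
        for i in range(k + 1, N):
            L[i, k] = M[i, k] / M[k, k]        # elimination multiplier
            for j in range(k, N):
                M[i, j] = M[i, j] - L[i, k] * M[k, j]
    D = sp.diag(*[M[i, i] for i in range(N)])
    return L, D, sp.Matrix(D.inv() * M)

L, D, U = ldu(truncated_A(5))
sp.pprint(L); sp.pprint(D); sp.pprint(U)
```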
There are some patterns visible here, but I have not yet really tried to find a set of formulae depending on the r(ow) and c(olumn) indexes.
In any case we have, by construction, $$ A = L \cdot D \cdot U$$ for any finite dimension. And because the entries do not change when we increase the size, we can assume that this holds for the infinite size as well.
Now we can easily find inverses for those triangular matrices; let us write $K = L^{-1}$, $T=U^{-1}$. For the inverse of $D$ we need that no zero occurs on the diagonal - the pattern of the entries suggests that none does, so we assume existence and also compute $C=D^{-1}$ at finite size. (Note that this $C$ is a different object from the vector $C$ of the question; below, the vector of log-zetas is written $\Lambda$.)
The top-left segments of those matrices ($K$, $C$, $T$, one below the other) look like this:
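Continuing the sketch above (it reuses the `truncated_A` and `ldu` helpers from it), the three inverses can be computed exactly; the diagonal inverse is called `Dinv` in the code to avoid clashing with the vector $C$ of the question:

```python
import sympy as sp

# assumes truncated_A and ldu from the previous sketch
L, D, U = ldu(truncated_A(5))
K    = L.inv()    # K = L^{-1}, lower triangular
Dinv = D.inv()    # the diagonal inverse, called C in the text above
T    = U.inv()    # T = U^{-1}, upper triangular
sp.pprint(K); sp.pprint(Dinv); sp.pprint(T)
```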
Now, to proceed, it is crucial to find analytical expressions for the entries of the matrices, so that we can write explicit series formulae for the dot-products of the rows of $T$ with the columns of $K$, weighted by the diagonal elements of $C$.
I have not yet really tried to find a meaningful pattern here. But evaluating the dot-products numerically with the Pari/GP software for increasing matrix sizes seems to give zeros for all entries of $T \cdot C \cdot K$! So we seem to get the result that $$\lim_{\text{size} \to \infty} A^{-1} = \mathbb 0 .$$
Of course, using such a matrix as a left multiplier for the column vector $\Lambda$ containing the logarithms of the zetas would again result in a zero vector and would thus be useless.
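One way to reproduce this observation without decoding the entry formulae: for every finite truncation $A_N = L_N D_N U_N$, so the finite product $T_N \cdot C_N \cdot K_N$ is exactly $A_N^{-1}$, and its entries are the partial sums of the infinite dot-products. One can therefore simply watch fixed entries of $A_N^{-1}$ as $N$ grows. A small exact sympy sketch (helper name mine); if the observation above is right, the printed values should shrink towards $0$:

```python
import sympy as sp

def truncated_A(N):
    return sp.Matrix(N, N, lambda i, j: sp.Rational(1, (j + 1) ** (2 * (i + 1))))

for N in (4, 6, 8, 10):
    Ainv = truncated_A(N).inv()        # exact rational inverse = T_N * C_N * K_N
    top_left = [Ainv[i, j] for i in range(2) for j in range(2)]
    print(N, [float(x) for x in top_left])   # the same fixed entries, for growing N
```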
What we could try is to separate the dot-products by changing the order of computation. Instead of the grouping suggested so far, $$ (T \cdot C \cdot K) \cdot \Lambda = \mathbb 0 \cdot \Lambda = \mathbb 0 \tag {dismissed} $$ we compute in a different order: $$ T \cdot (C \cdot K \cdot \Lambda) = T \cdot Y = X \tag {proposed} $$
For the determination of $Y$ this gives, with increasing size of the matrices involved, entries that converge; and $T \cdot Y = X$ seems to have convergent, but non-zero, dot-products as well, giving in the limit the expected "indicator" for prime powers in $X$. The values of the entries of $X$ are estimated from the products of the matrices at increasing sizes.
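For readers who want to repeat the experiment, here is a sketch of the proposed order of computation at a fixed finite size, in Python/sympy rather than the Pari/GP used originally; it repeats the `truncated_A`/`ldu` helpers from the earlier sketch so that it runs on its own, and all names are mine. At any finite size the two groupings agree by associativity, so the effect of the reordering only shows up when one increases `N` and watches how the entries of `X` behave.

```python
import sympy as sp

def truncated_A(N):
    return sp.Matrix(N, N, lambda i, j: sp.Rational(1, (j + 1) ** (2 * (i + 1))))

def ldu(A):
    N = A.rows
    M = A.as_mutable()
    L = sp.eye(N)
    for k in range(N):
        for i in range(k + 1, N):
            L[i, k] = M[i, k] / M[k, k]
            for j in range(k, N):
                M[i, j] = M[i, j] - L[i, k] * M[k, j]
    D = sp.diag(*[M[i, i] for i in range(N)])
    return L, D, sp.Matrix(D.inv() * M)

N = 8
L, D, U = ldu(truncated_A(N))
K, Dinv, T = L.inv(), D.inv(), U.inv()

# Lambda: column vector of ln zeta(2), ln zeta(4), ..., to 50 digits
Lam = sp.Matrix([sp.log(sp.zeta(2 * (i + 1))).evalf(50) for i in range(N)])

Y = Dinv * (K * Lam)      # inner dot-products first ...
X = T * Y                 # ... then multiply by T
for n in range(1, N + 1):
    # for comparison: q(n) for n = 1..8 is 0, 1, 1, 1/2, 1, 0, 1, 1/3
    print(n, X[n - 1])
```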
Some years ago I wrote a short treatise, with perhaps some better explanations, on a very similar problem, where I also succeeded in getting a proof for my version of the Vandermonde matrix: that the "inverse" taken by this method is indeed zero, because I could decode the systematic formulae for the entries of $K$, $C$, $T$ in that version and state explicit series definitions for the dot-products involved. The proof of the zero result in all entries of $A^{-1}$ was given in an MO answer a couple of years ago.
See my own text here, which does not yet include the MO answer. A bit of introduction is already on the index page for my math papers.
Couldn't resist: I have also found at least a partial decomposition of the entries of $T$, in which the numbers that appear beside the systematic fractions are integers.