Suppose we have a function $g(x)$ and an integral
$$F(s)=\int_1^\infty \frac{g(x)}{x^{s+1}}dx$$
and suppose $F(s)$ converges for $s>\beta$ and diverges for $s<\beta$. Assume also that $s=\beta$ is a singular point of the analytic function defined by $F(s)$ on the half-plane $\Re(s)>\beta$.
By a well-known theorem of Landau, if we assume in addition that $g(x)$ is of constant sign for all sufficiently large $x$, then the abscissa of convergence $\beta$ exists and is necessarily a singular point, as above.
We also know that if we assume $g(x)$ to be of constant sign ($\geq 0$) and then find that $F(s)$ continues analytically to $s \geq \beta$ while the integral diverges at $s=\beta$, we must conclude that the assumption of constant sign was incorrect.
My question is: if we do not know whether $g(x)$ is of constant sign, but we find an abscissa of convergence $\beta$ which is a singular point of $F(s)$, can we conclude that $g(x)$ is of constant sign? Or do we only know that this situation is not inconsistent with constancy of sign?
No, we cannot conclude that $g$ is of constant sign. Something like a simple (version of an) Estermann zeta function $\sum\frac{(-1)^n d(2n+1)}{(2n+1)^s}$, where $d(n)$ is as usual the number of divisors of $n$, should have an oscillating summatory function and a double pole at $s=1$, which is the obvious abscissa of convergence by the usual properties of $d$.
Edit: the above example is not quite correct, as the function is actually $L(s,\chi_4)^2$, where $\chi_4$ is the unique primitive character mod $4$ ($\chi_4(n)=1$ for $n \equiv 1 \pmod 4$, $-1$ for $n \equiv 3 \pmod 4$, and $0$ for even $n$). That series actually converges up to abscissa $1/2$ (a general result in the theory of Dirichlet series), and possibly down to $1/4$ (though that is a conjecture, in line with the generalized Lindelöf and exponent-pair conjectures). I am fairly sure the sum of coefficients is oscillatory.
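As a quick numeric sanity check that the summatory function of $(-1)^n d(2n+1)$ really oscillates in sign, here is a small sketch (the sieve helper and the cutoff $N=500$ are my own choices, not from any reference):

```python
def divisor_counts(limit):
    """Sieve the divisor function d(m) for 1 <= m <= limit."""
    d = [0] * (limit + 1)
    for i in range(1, limit + 1):
        for m in range(i, limit + 1, i):
            d[m] += 1
    return d

# Partial sums S_N = sum_{n=0}^{N} (-1)^n d(2n+1),
# i.e. sum of chi_4(m) d(m) over odd m <= 2N+1.
N = 500
d = divisor_counts(2 * N + 1)
S, s = [], 0
for n in range(N + 1):
    s += (-1) ** n * d[2 * n + 1]
    S.append(s)
print(min(S), max(S))  # both signs occur
```

The partial sums take both positive and negative values already for small $N$, consistent with the square-root cancellation one expects since $L(s,\chi_4)^2$ is entire.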
The real Estermann example with a double pole was $\sum\frac{(-1)^n d(2n)}{(2n)^s}$, and unfortunately here the sum of coefficients is (eventually) positive, since $\sum_{4n \le x} d(4n) \sim \frac{x}{2}\log x$, while $\sum_{4n+2 \le x} d(4n+2) \sim \frac{x}{4}\log x$, as is easily seen from the usual divisor sum and some manipulation (I can give details if needed, but it is not relevant to the problem at hand).
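Numerically the eventual positivity shows up quickly; a hedged sketch (again with a hypothetical sieve helper and an arbitrary cutoff):

```python
def divisor_counts(limit):
    """Sieve the divisor function d(m) for 1 <= m <= limit."""
    d = [0] * (limit + 1)
    for i in range(1, limit + 1):
        for m in range(i, limit + 1, i):
            d[m] += 1
    return d

# Partial sums T_N = sum_{n=1}^{N} (-1)^n d(2n)
N = 2000
d = divisor_counts(2 * N)
T, t = [], 0
for n in range(1, N + 1):
    t += (-1) ** n * d[2 * n]
    T.append(t)
# The first few partial sums dip negative, but the positive main term
# (of order x log x, from the asymptotics above) soon dominates.
print(T[:10], T[-1])
```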
To actually construct counterexamples we need to use a Tauberian theorem. The weak version, due to Landau and appearing in Hardy's 1915 book on Dirichlet series, requires $n a_n \log n \to 0$, while a stronger version due to Riesz (original paper from 1916, in German, but referenced in Korevaar's reference book on Tauberian theory) says that $n a_n \to 0$ is enough for the following:
Let $f(s)=\sum_{n\geq 1}\frac{a_n}{n^s}$ with coefficients $a_n$ satisfying one of the Tauberian conditions above (both imply that the Dirichlet series converges absolutely for $\Re(s)>0$; you can shift by setting $b_n = n a_n$ if you prefer the more familiar half-plane $\Re(s)>1$). Then, if $f(s) \to a$ as $s \to 0$ through positive values, the partial sums satisfy $\sum_{n \le N} a_n \to a$.
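A quick numerical illustration of the Riesz statement, under assumptions of my own: I take $a_n = (-1)^n/(n\log n)$ for $n \ge 2$, which satisfies $n a_n \to 0$, and for which $\sum a_n$ converges by the alternating series test, so $f(s)$ should approach that sum as $s \to 0^+$:

```python
import math

def f(s, N=200_000):
    """Truncated Dirichlet series sum_{n>=2} a_n / n^s with
    a_n = (-1)^n / (n log n).  The terms alternate with decreasing
    magnitude, so the truncation error is below the last term."""
    return sum((-1) ** n / (n * math.log(n) * n ** s) for n in range(2, N))

a = f(0.0)  # the limit of the partial sums, i.e. sum a_n
print(a, f(0.5), f(0.1), f(0.01))  # f(s) approaches a as s -> 0+
```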
In particular, if $f$ extends to a function merely continuous at $0$ (much weaker than analytic), then $\sum_{n \le N} a_n \to f(0)$. So if we construct a Dirichlet series with an oscillating summatory function, satisfying the Tauberian condition above, but for which $\sum a_n$ diverges, we are done, as the function must then have a singularity at $0$.

For coefficients we take $\pm\frac{1}{n\log n}$ (for Riesz) or $\pm\frac{1}{n\log n\log\log n}$ (for Landau), with $n\geq 3$ say. These clearly satisfy the respective Tauberian conditions, while $\sum |a_n| = \infty$, so we can choose the signs to make the summatory function oscillate: put enough consecutive plus signs to push the sum of coefficients above $1$, then all minus signs until the sum dips below $-2$, then plus signs until it goes above $3$, and so on. This is the usual argument, and it works because the series diverges absolutely, so at any step we can continue with enough consecutive terms of the same sign to push the partial sum as high or as low as we need.

By construction $\sum a_n$ diverges, so the corresponding Dirichlet series $f(s)=\sum_{n\geq 1}\frac{a_n}{n^s}$ must have a singularity at $0$, while the summatory function oscillates to $\pm \infty$.
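The sign-picking step can be sketched in code. One caveat: since $\sum 1/(n\log n)$ diverges only like $\log\log N$, no feasible computation reaches the growing targets $+1, -2, +3, \dots$; the sketch below uses fixed thresholds $\pm 0.2$ instead, purely to exhibit the mechanism (thresholds and cutoff are my choices, not from the argument above):

```python
import math

def oscillate(N=10**6, h=0.2):
    """Greedily choose signs for a_n = 1/(n log n), n >= 3: keep the
    current sign until the partial sum crosses +/- h, then flip.
    Divergence of sum 1/(n log n) guarantees each crossing is reached
    eventually, which is the heart of the construction."""
    s, direction, flips = 0.0, +1, 0
    lo = hi = 0.0
    for n in range(3, N + 3):
        s += direction / (n * math.log(n))
        hi, lo = max(hi, s), min(lo, s)
        if direction * s > h:  # crossed the current target: reverse sign
            direction = -direction
            flips += 1
    return flips, lo, hi

flips, lo, hi = oscillate()
print(flips, lo, hi)  # several reversals: the summatory function oscillates
```

With growing targets the same loop produces a summatory function oscillating to $\pm\infty$, just over an astronomically long range of $n$.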