As the title says, I'm looking for a closed form of $\sum_{n=1}^\infty \log(n)\cdot x^n$, where $0 \le x \lt 1$.
Wolfram Alpha says: $-\operatorname{PolyLog}^{(1, 0)}(0, x)$, but I don't understand what that means. (Of course, PolyLog stands for the polylogarithm.)
Background
It's about "How many bits do I need to encode a real number $0 < r < 1$ with a tolerance $\delta/2$?" The "naive" answer is $-\log_2(\delta)$.
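To make the naive count concrete: $k$ bits split $(0,1)$ into $2^k$ intervals of width $2^{-k}$, so $k = \lceil -\log_2\delta \rceil$ bits suffice. A quick sketch (the function name and example values are my own illustration, not from the question):

```python
import math

def naive_bits(delta):
    """Bits needed so the quantization interval width is at most delta."""
    return math.ceil(-math.log2(delta))

print(naive_bits(0.01))  # tolerance 0.01: ceil(6.64...) = 7 bits
print(naive_bits(0.25))  # tolerance 0.25: exactly 2 bits
```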
Nevertheless (long story short) I need a different approach:
I can encode every positive integer $n$ with approximately $C\cdot\log(n)$ bits
Let $0 < x_i < 1$ be a pseudo-random sequence (i.i.d. uniform on $(0,1)$), and let $N$ be the first index such that $r-\delta/2 < x_N < r+\delta/2$. Then let's say that we can transmit $r$ via $N$ (with the tolerance $\delta$). So we need $C\cdot\log(N)$ bits...
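The scheme above can be sketched minimally in code, assuming the $x_i$ come from a shared seeded PRNG (the seed and helper names here are my own illustration):

```python
import random

def encode(r, delta, seed=0):
    """Return the first index N with |x_N - r| < delta/2."""
    rng = random.Random(seed)
    n = 1
    while True:
        if abs(rng.random() - r) < delta / 2:
            return n
        n += 1

def decode(n, seed=0):
    """Regenerate the same pseudo-random sequence and return x_N."""
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.random()
    return x

r, delta = 0.3, 0.01
n = encode(r, delta)
assert abs(decode(n) - r) < delta / 2
```

The decoder only needs $N$ (plus the shared seed), which is the point of the scheme: the cost is whatever it takes to encode the integer $N$, i.e. about $C\cdot\log(N)$ bits.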
But then I need the expected value. Each $x_i$ lands in the target interval with probability $\delta$, so $N$ is geometrically distributed, and $E(C\cdot\log(N)) = \sum_{n=1}^\infty C\cdot\log(n)\cdot\delta\cdot(1-\delta)^{n-1} = C\cdot{\delta\over1-\delta} \sum_{n=1}^\infty \log(n) \cdot (1-\delta)^n$, which is the sum in the title with $x = 1-\delta$.
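As a sanity check (a numerical sketch; the truncation point, $\delta$ value, and trial count are arbitrary choices of mine), the two forms of the sum agree, and both match a Monte Carlo estimate of $E(\log N)$ under the geometric model:

```python
import math
import random

delta = 0.1

# Truncated series for E(log N), N geometric(delta):
# sum_{n>=1} log(n) * delta * (1-delta)^(n-1); the tail decays like (1-delta)^n
series = sum(math.log(n) * delta * (1 - delta) ** (n - 1) for n in range(1, 5000))

# Same sum after pulling delta/(1-delta) out front, as in the rearrangement
rearranged = (delta / (1 - delta)) * sum(
    math.log(n) * (1 - delta) ** n for n in range(1, 5000)
)
assert abs(series - rearranged) < 1e-12

# Monte Carlo estimate of E(log N): N = first success with probability delta
rng = random.Random(0)
def sample_N():
    n = 1
    while rng.random() >= delta:
        n += 1
    return n

trials = 50_000
mc = sum(math.log(sample_N()) for _ in range(trials)) / trials
assert abs(mc - series) < 0.05
```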
Definition of Polylogarithm: http://mathworld.wolfram.com/Polylogarithm.html
No closed form exists in terms of elementary functions (addition, multiplication, powers, etc.), at least not in terms of real functions. You might be able to write it as a complex-valued function or improper integral.
Given that the polylogarithm is already a special function, I suspect that any closed form will be in terms of special functions rather than something nice.