For a positive integer $n$, let $p_n$ be the value of $x \in (0, 1)$ that minimizes $f(x) = \sum_{j=1}^n \binom{n}{j} \frac{1}{1-(1-2x)^j}$ on the stated interval. What is $p_n$?
Each summand $1/(1-(1-2x)^j)$ is decreasing on $(0, 1/2)$, so $p_n \in [1/2, 1)$ for $n \ge 2$. (For $n = 1$, $f(x) = 1/(2x)$ is decreasing on all of $(0,1)$, so the infimum is only attained in the limit; formally $p_1 = 1$.) More precisely, if $j$ is odd the summand is decreasing on all of $(0, 1)$, while if $j$ is even it is increasing on $(1/2, 1)$.
I would be very interested in answers to at least the three questions below.
Background to the question
Let $G$ be the abelian group $\mathbb{Z}_2^n$, and let $g \in G$. A simple consequence of Theorem 3.3 of this paper on evolutionary algorithms is that $f(x)$ equals the expected hitting time of $g$ for the random walk below, which starts from the uniform distribution on $G$.
The random walk $X_0, X_1,\ldots$ is defined as follows: Let $X_0$ be uniformly distributed on $G$. Given $X_i \in G$, flip each bit independently with probability $x$. Equivalently, go from $X_i$ to $X_i+w$ with probability $\mu(w)$, where $\mu$ is a probability distribution on $G$ defined by $\mu(w) = x^{|w|}(1 - x)^{n -|w|}$, and $|w|$ is the number of 1's in $w$ when viewed as an element of $\{0, 1\}^n$.
The hitting time of $g$ is defined as $\min \{ t \ge 0 \mid X_t = g \}$.
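The closed form can be compared with the walk by simulation. A minimal Python sketch (the names `f` and `hitting_time` are mine, not from the paper), estimating the expected hitting time by Monte Carlo:

```python
import random
from math import comb

def f(n, x):
    """Closed form: sum_{j=1}^n C(n,j) / (1 - (1-2x)^j)."""
    return sum(comb(n, j) / (1 - (1 - 2 * x)**j) for j in range(1, n + 1))

def hitting_time(n, x, g=0, rng=random):
    """One sample of min{t >= 0 : X_t = g}; X_0 is uniform on Z_2^n,
    then each step flips every bit independently with probability x."""
    state = rng.getrandbits(n)  # uniform starting point
    t = 0
    while state != g:
        flips = sum((rng.random() < x) << i for i in range(n))
        state ^= flips
        t += 1
    return t

random.seed(7)
n, x, trials = 3, 0.5, 20000
avg = sum(hitting_time(n, x) for _ in range(trials)) / trials
print(f(n, x), avg)  # f(3, 0.5) = 7; the Monte Carlo average should be close
```

At $x = 1/2$ every step resamples uniformly, so the check reduces to a shifted geometric distribution with mean $2^n - 1$, which is what $f(1/2)$ gives.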
Three questions
Both my intuition about this problem and the results of some programming experiments suggest that the answers to the following questions are all "yes."
1. Is $p_n > 1/2$ for all $n$?
2. Is $\lim_{n \to \infty} p_n = 1/2$?
3. Does $p_n$ decrease monotonically?
But I haven't proven that the answers to these questions are what I think they are.
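For what it's worth, the numerical evidence behind the "yes" answers can be reproduced by direct minimization. A sketch in Python, assuming (as the numerics suggest, but as is not proven) that $f$ is unimodal on $(1/2, 1)$ so that ternary search applies:

```python
from math import comb

def f(n, x):
    return sum(comb(n, j) / (1 - (1 - 2 * x)**j) for j in range(1, n + 1))

def p(n, lo=0.5, hi=0.999, iters=200):
    """Ternary search for the minimizer of f on (1/2, 1).
    Assumes unimodality there (observed numerically, not proven)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(n, m1) < f(n, m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

print([round(p(n), 6) for n in (10, 20, 30)])
# the values come out above 1/2 and decreasing, consistent with all three questions
```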
In my former research group, we worked on this kind of problem: locating the zero of (essentially) $f'(x)$. So, using our results, substitute $x=\frac {1-\epsilon}2$ (so that $1-2x=\epsilon$) and look for the zero of $$g_n(\epsilon)=\sum_{j=1}^n j \binom{n}{j}\frac{ \epsilon ^{j-1}}{\left(1-\epsilon^j\right)^2}$$ which equals $f'(x)$ up to a constant factor of $-2$. This form is, at least for me, better conditioned and clearly shows what happens as $\epsilon \to 0^{\pm}$.
Some results $$\left( \begin{array}{cc} n & \epsilon_n \\ 10 & -0.134716881 \\ 20 & -0.086462117 \\ 30 & -0.066032636 \\ 40 & -0.054260339 \\ 50 & -0.046456858 \\ 60 & -0.040843618 \\ 70 & -0.036581901 \\ 80 & -0.033219461 \\ 90 & -0.030488889 \\ 100 & -0.028221051 \\ 200 & -0.016758959 \\ 300 & -0.012240459 \\ 400 & -0.009757941 \\ 500 & -0.008168496 \\ 600 & -0.007055481 \\ 700 & -0.006228588 \\ 800 & -0.005587850 \\ 900 & -0.005075452 \\ 1000 & -0.004655514 \\ 2000 & -0.002618885 \\ 3000 & -0.001860867 \\ 4000 & -0.001457237 \\ 5000 & -0.001204184 \\ 6000 & -0.001029721 \\ 7000 & -0.000901681 \\ 8000 & -0.000803449 \\ 9000 & -0.000725546 \\ 10000 & -0.000662189 \\ \end{array} \right)$$
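These values are easy to reproduce: $g_n(\epsilon) \to n > 0$ as $\epsilon \to 0^-$ and becomes negative further left, so plain bisection works. A Python sketch (the bracket $[-0.2, 0^-)$ is an assumption that holds for the moderate $n$ tried here; for other $n$ one should check that $g_n$ is negative at the left endpoint):

```python
from math import comb

def g(n, eps):
    """g_n(eps) = sum_{j=1}^n j * C(n,j) * eps^(j-1) / (1 - eps^j)^2."""
    return sum(j * comb(n, j) * eps**(j - 1) / (1 - eps**j)**2
               for j in range(1, n + 1))

def eps_n(n, lo=-0.2, hi=-1e-15):
    """Bisection for the zero of g_n; assumes g_n(lo) < 0 < g_n(hi)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if g(n, mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(eps_n(10), 9), round(eps_n(20), 9))
# close to -0.134716881 and -0.086462117 from the table
```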
These points align along a hyperbola with asymptote $0^-$; since $p_n=\frac{1-\epsilon_n}{2}$, it follows that $p_n \to \left(\frac{1}{2}\right)^+$.
If $\epsilon$ is small, the summand can be expanded as a series $$\frac{ \epsilon ^{j-1}}{\left(1-\epsilon^j\right)^2}=\frac 1 \epsilon \frac{ \epsilon ^{j}}{\left(1-\epsilon^j\right)^2}=\frac 1 \epsilon \sum_{k=1}^\infty k \, \epsilon^{kj}$$
Summing over $j$, then expanding again as a series in $\epsilon$, gives
$$\frac {g_n(\epsilon)}n=1+\sum_{k=1}^\infty \frac {P_k(n)}{k!}\,\epsilon^k $$ where the first polynomials in $n$ are $$\left( \begin{array}{cc} 1 & n+1 \\ 2 & n^2-3 n+8 \\ 3 & n^3-6 n^2+23 n+6 \\ 4 & n^4-10 n^3+35 n^2-50 n+144 \\ 5 & n^5-15 n^4+85 n^3-105 n^2+274 n+480 \\ 6 & n^6-21 n^5+175 n^4-735 n^3+1624 n^2-1764 n+5760 \\ 7 & n^7-28 n^6+322 n^5-1960 n^4+8449 n^3-23212 n^2+51708 n+5040 \\ \end{array} \right)$$
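These polynomials can be checked mechanically. For a fixed $n$, the Taylor coefficients of $g_n(\epsilon)/n$ should equal $P_k(n)/k!$; for $n=10$ that means $P_1/1! = 11$, $P_2/2! = 39$, $P_3/3! = 106$. A SymPy sketch:

```python
import sympy as sp

eps = sp.symbols('epsilon')
n = 10  # any fixed n; a symbolic n would need the binomial sums in closed form
g = sum(j * sp.binomial(n, j) * eps**(j - 1) / (1 - eps**j)**2
        for j in range(1, n + 1))
series = sp.series(g / n, eps, 0, 4).removeO()
coeffs = [series.coeff(eps, k) for k in range(4)]
print(coeffs)  # [1, 11, 39, 106] = [1, P_1(10), P_2(10)/2!, P_3(10)/3!]
```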
Using series reversion does not give a good approximation, but it does reproduce the trend. Truncating at $k=10$, with $m=n+1$, leads to $$\epsilon=-\frac{7381}{2520 m}+\frac{2121533}{50400 m^2}-\frac{598739003}{907200 m^3}+\frac{113076059}{13440 m^4}+O\left(\frac{1}{m^5}\right)$$ but the coefficients depend on the chosen value of $k_{\text{max}}$.
Much better is to perform a series expansion around $\epsilon=-\frac 1{n+1}$ (the zero of the leading truncation $1+(n+1)\epsilon$) to $O\left(\left(\epsilon+\frac{1}{n+1}\right)^6\right)$ and then use series reversion.
For the first values of $n$, some results $$\left( \begin{array}{cc} n & \epsilon_n \\ 10 & -0.134717\\ 20 & -0.086459\\ 30 & -0.066025\\ 40 & -0.054246\\ 50 & -0.046437\\ 60 & -0.040818\\ 70 & -0.036552\\ 80 & -0.033186\\ 90 & -0.030454\\ 100 & -0.028186\\ \end{array} \right)$$
Edit
The idea of series reversion was $\color{red}{\text{bad}}$.
Using $$\frac{ \epsilon ^{j-1}}{\left(1-\epsilon^j\right)^2}=\epsilon ^{j-1}+2 \epsilon ^{2 j-1}+O\left(\epsilon ^{3 j-1}\right)$$ the retained terms sum in closed form to $n(1+\epsilon)^{n-1}+2n\epsilon\left(1+\epsilon^2\right)^{n-1}$; setting this to zero and solving for $n$, we look for the inverse of the equation $$n=\frac{\log \left(-\frac{2 \epsilon (\epsilon +1)}{\epsilon ^2+1}\right)}{\log \left(\frac{\epsilon +1}{\epsilon ^2+1}\right)}$$ Expanding the right-hand side again gives $$n-1=\frac{\log (-2 \epsilon )}{\epsilon }+\frac{3}{2} \log (-2 \epsilon )+\frac{23}{12} \epsilon \log (-2 \epsilon )+O\left(\epsilon ^2 \log (-2 \epsilon )\right)$$
$$n-1\sim\frac{\log (-2 \epsilon )}{\epsilon }\quad \implies \quad \color{red}{ \epsilon\sim-\frac{W\left(\frac{n-1}{2}\right)}{n-1}}$$ which, for $n=10,20,\ldots,50$, gives $$\{-0.140804,-0.0901594,-0.0685292,-0.0560925,-0.047877\}$$
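This asymptotic is cheap to evaluate. A Python sketch, with a hand-rolled Newton iteration for the principal branch of $W$ to avoid a SciPy dependency:

```python
from math import exp, log

def lambert_w(a, tol=1e-12):
    """Principal branch W(a) for a > 0: solves w * exp(w) = a by Newton."""
    w = log(a) if a > 1 else a  # rough starting point
    for _ in range(100):
        step = (w * exp(w) - a) / (exp(w) * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def eps_asym(n):
    """Asymptotic root: eps ~ -W((n-1)/2) / (n-1)."""
    return -lambert_w((n - 1) / 2) / (n - 1)

print([round(eps_asym(n), 6) for n in (10, 20, 30, 40, 50)])
# matches the five values quoted above, e.g. eps_asym(10) ≈ -0.140804
```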