I consider the following update rule for a random walk $\{X_{t}\}_{t \geq 0}$ on the hypercube $\{0,1\}^{n}$:
At each time step, I sample $I \in \{1,2,\ldots,n\}$ uniformly at random and, independently, $U \sim \mathrm{Bernoulli}(p_{n})$, then I update $X_{t+1}[I] = U$ (and keep $X_{t+1}[i] = X_{t}[i]$ for $i \neq I$).
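For concreteness, here is a minimal sketch of these dynamics in Python (the function names are my own, chosen for illustration):

```python
import random

def step(x, p):
    """One update: resample a uniformly chosen coordinate as Bernoulli(p)."""
    i = random.randrange(len(x))
    y = list(x)
    y[i] = 1 if random.random() < p else 0
    return tuple(y)

def walk(x0, p, t):
    """Run t steps of the walk started from x0 and return the final state."""
    x = tuple(x0)
    for _ in range(t):
        x = step(x, p)
    return x
```

For example, with $p = 0$ every resampled coordinate is set to $0$, so a long run from $(1,\ldots,1)$ ends at $(0,\ldots,0)$.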
In the special case that $p_{n} \equiv 0.5$, this is the lazy version of the "usual" nearest-neighbour simple random walk on the hypercube $\{0,1\}^{n}$: the resampled coordinate is unchanged with probability $1/2$, so the walk holds in place half the time.
I recall two well-known facts about this walk, both of which are proved in e.g. Chapter 5 of the book *Reversible Markov Chains and Random Walks on Graphs* by David Aldous and James Fill:
The expected hitting time of $(0,0,\ldots,0)$ started from $(1,1,\ldots,1)$ is of order $2^{n}$, the reciprocal of the stationary measure of a point; in particular, it is much smaller than the naive $O(2^{n} n \log n)$ bound one gets by just thinking about mixing times.
If you start at $(0,0,\ldots,0)$, the probability that the return time to $(0,0,\ldots,0)$ exceeds, say, $n^{2}$ is bounded away from $0$ uniformly in $n$.
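The first fact is easy to sanity-check numerically for small $n$. Below is a rough Monte Carlo sketch (the sample size and the choice $n = 6$ are arbitrary on my part, and this is an illustration, not a proof):

```python
import random

def hitting_time(n, p, start, target, cap=10**6):
    """Steps until the walk from `start` first hits `target` (capped)."""
    x = list(start)
    t = 0
    while tuple(x) != target and t < cap:
        i = random.randrange(n)  # pick a coordinate uniformly
        x[i] = 1 if random.random() < p else 0  # resample it as Bernoulli(p)
        t += 1
    return t

def mean_hitting_time(n, p, trials=200):
    """Monte Carlo estimate of E[hitting time of all-zeros from all-ones]."""
    ones, zeros = (1,) * n, (0,) * n
    return sum(hitting_time(n, p, ones, zeros) for _ in range(trials)) / trials
```

For $n = 6$ and $p = 1/2$ the estimate comes out within a small constant factor of $2^{6} = 64$ (the laziness roughly doubles it relative to the non-lazy walk).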
I would like to know: roughly how far from $0$ and $1$ must $p_{n}$ stay for these facts to remain true? My guess is that the answer is well-known, but I don't know a nice reference or a clean way to grind through the calculations.
Note that if e.g. $p_{n} = n^{-3}$, then both of these fail: the expected hitting time of $(0,\ldots,0)$ from $(1,\ldots,1)$ is of order $n \log n$ by a coupon-collector argument (much bigger than the reciprocal of the stationary measure of $(0,\ldots,0)$, which is $1 + o(1)$), and the return time equals $1$ with probability $1 - n^{-3}$ (much smaller than $n^{2}$).
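Both degeneracies are also easy to see numerically. Here is a rough sketch (again, the choice $n = 10$ and the sample sizes are mine): the hitting time concentrates near the coupon-collector value $n H_{n} \approx 29$, while the return time is almost always a single step.

```python
import random

def resample(x, p):
    """In place: pick a uniform coordinate and resample it as Bernoulli(p)."""
    i = random.randrange(len(x))
    x[i] = 1 if random.random() < p else 0

def hit_all_zeros(n, p, cap=10**6):
    """Steps from (1,...,1) until the walk first hits (0,...,0) (capped)."""
    x = [1] * n
    t = 0
    while any(x) and t < cap:
        resample(x, p)
        t += 1
    return t

def return_time(n, p, cap=10**4):
    """Steps from (0,...,0) until the walk first returns there (capped)."""
    x = [0] * n
    resample(x, p)
    t = 1
    while any(x) and t < cap:
        resample(x, p)
        t += 1
    return t

n, p = 10, 10**-3  # p = n^{-3} for n = 10
```

With these parameters the average of `hit_all_zeros(n, p)` lands near $n H_{n} \approx 29$, and `return_time(n, p)` returns $1$ in all but roughly a $10^{-3}$ fraction of trials.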