Consider the following random process starting at time $t = 1$ with a coin such that $\Pr(\text{Heads}) = p_1$. The bias of the coin, $p_t$, evolves with time as follows for $t \geq 1$:
At time $t$, the coin is tossed (it has $\Pr(\text{Heads}) = p_t$). Let $X_t$ denote the outcome of the toss. \begin{align} p_{t+1} = \begin{cases} \max\{p_t - \Delta_1, 0\} & \text{ if } X_t = \text{Heads}, \\ \min\{p_t + \Delta_2, 1\} & \text{ if } X_t = \text{Tails},\end{cases} \end{align} where $0 < \Delta_1 < \Delta_2$ are some small known constants. Thus, the bias of the coin evolves with time such that it always attempts to make the coin a fair one based on the observed outcome of a toss.
I am interested in bounding the following expression: $\displaystyle \Pr\left( \bigcup_{t = 1}^T \{p_t < \epsilon\} \right)$ for some $\epsilon > 0$. The value of $T$ is known, and $\Delta_1$ and $\Delta_2$ are small constants, roughly $O(1/T)$, so that $p_t$ does not tend to saturate at the extreme values often.
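For experimenting with candidate bounds, here is a quick Monte Carlo sketch of the quantity in question. The parameter values are arbitrary placeholders (they are not from the question), and the function names are just for illustration:

```python
import random

def hits_low(p1, d1, d2, T, eps, rng):
    """Run the coin process once; return True if p_t < eps for some 1 <= t <= T."""
    p = p1
    for _ in range(T):
        if p < eps:
            return True
        if rng.random() < p:        # Heads: bias is corrected downward
            p = max(p - d1, 0.0)
        else:                       # Tails: bias is corrected upward
            p = min(p + d2, 1.0)
    return False

def estimate(p1, d1, d2, T, eps, trials=20000, seed=0):
    """Monte Carlo estimate of Pr(union over t of {p_t < eps})."""
    rng = random.Random(seed)
    return sum(hits_low(p1, d1, d2, T, eps, rng) for _ in range(trials)) / trials

est = estimate(0.5, 0.008, 0.01, 100, 0.05)   # placeholder parameters
```

With these (arbitrary) parameters the estimate comes out essentially zero, which matches the intuition that the corrective drift makes excursions below $\epsilon$ rare.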
Intuitively, this probability should be small because of the "corrective" nature of the underlying random process. However, I am unable to obtain an elegant upper bound. The union bound results in a very weak bound, and I have also tried some martingale-based approaches but haven't been able to find a concrete answer.
Any hints or references will be highly appreciated. Thank you.
This is too long to be a comment, and it will necessarily be incomplete, but I'll start writing up some of what I know whenever I get a chance. (I apologize in advance that I'll edit this post a few times, but I won't have a chance to write it all in one sitting.)
In terms of references, I had a hard time finding anyone else who had studied this problem, because the literature uses different terminology than I would have. I searched for random walks, biased coin tosses, avalanche processes, and branching processes. But the main results in this field are from Karlin and McGregor studying "birth-death processes" in the 1950s.
They study a case where the probability of births/deaths is linearly biased, which is similar to our problem, but I don't think it's exactly the same, because the population is not bounded from above. It's been a long time since I read these papers, so I could have misremembered something. The best short summary that I remember reading is in the following book, under the topic "random walk polynomials": Recurrence Relations, Continued Fractions and Orthogonal Polynomials by Richard Askey and Mourad Ismail.
As you've already seen and I will make explicit below, the problem is equivalent to a random walk where the probability of moving left/right is linearly biased with respect to the current position. (I have jokingly called this "a drunkard in a valley," since the drunkard is more likely to move downhill toward the center of the valley.)
There are a number of famous results that apply to random walks with arbitrary move coefficients. For example, there is a series representation due to Touchard. There is also an extremely easy way to represent the time generating function of the first-return time as a continued fraction, and there is a method for handling such continued fractions due to Flajolet from around 1980. But I wasn't able to use any of these to get something useful out of the problem. I had more success with the poor physicist's standard trick of applying transforms. I suspect that I don't have enough to solve your problem, but definitely enough to get started.
Most of my existing results are in my office, where I can't access them right now because of the pandemic. But I have enough at home that I should be able to give the really basic results.
My formulation of the problem is that there is a random walk with position $X(t) \in \{0, 1, \dots, N\}$ where the probabilities of moving left/right when at position $x$ are $$ \begin{align} \ell_x &= \frac{x}{N} \\ r_x &= 1 - \frac{x}{N} \end{align} $$ where $N$ is a positive integer. (This is the corrective bias of the original problem: the walk is pushed toward the middle, and the boundary values $\ell_0 = 0$ and $r_N = 0$ keep it in $\{0, \dots, N\}$.)
Let $p_{x,t}^s$ be the probability that $X(t)=x$ given that $X(0)=s$ (the "start" position). Then $p_{x,t}^s$ satisfies $$ p_{x,t}^s = (1 - \delta_{x,0}) r_{x-1} p_{x-1,t-1}^s + (1 - \delta_{x,N}) \ell_{x+1} p_{x+1,t-1}^s + \delta_{x,s} \delta_{t,0} $$ where $\delta$ is the Kronecker delta. The first term represents a walk moving right from $x-1$ to $x$. The second is a walk moving left from $x+1$ to $x$. The third is the initial condition.
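As a sanity check, the recurrence can be iterated exactly with rational arithmetic to confirm that the $p_{x,t}^s$ form a probability distribution at each $t$. This is a sketch with small arbitrary values of $N$, $s$, using the corrective coefficients $\ell_x = x/N$, $r_x = 1 - x/N$:

```python
from fractions import Fraction

def step_distribution(N, s, T):
    """Iterate p_{x,t}^s exactly for t = 0..T; yield the distribution at each t."""
    l = [Fraction(x, N) for x in range(N + 1)]   # move-left probabilities l_x
    r = [1 - lx for lx in l]                     # move-right probabilities r_x
    p = [Fraction(0)] * (N + 1)
    p[s] = Fraction(1)                           # initial condition delta_{x,s}
    yield p
    for _ in range(T):
        q = [Fraction(0)] * (N + 1)
        for x in range(N + 1):
            if x > 0:
                q[x] += r[x - 1] * p[x - 1]      # walk moved right from x-1
            if x < N:
                q[x] += l[x + 1] * p[x + 1]      # walk moved left from x+1
        p = q
        yield p

dists = list(step_distribution(6, 3, 10))
assert all(sum(d) == 1 for d in dists)           # total probability is conserved
```

The Kronecker-delta factors in the recurrence are automatic here because $\ell_0 = 0$ and $r_N = 0$, so no probability ever leaves $\{0, \dots, N\}$.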
Let $P_x^s(z)$ be the generating function of $p_{x,t}^s$ with respect to $t$, i.e., $$ P_x^s(z) = \sum_{t=0}^\infty p_{x,t}^s z^t $$ Then one can show from the time-evolution equation of $p$ that $$ P_x^s(z) = (1 - \delta_{x,0}) z r_{x-1} P_{x-1}^s(z) + (1 - \delta_{x,N}) z \ell_{x+1} P_{x+1}^s(z) + \delta_{x,s} $$ (the last term comes from the initial condition at $t = 0$). Now let $P^s(u, z)$ be the generating function of $P_x^s(z)$ with respect to $x$, i.e. $$ P^s(u, z) = \sum_{x=0}^N P_x^s(z) u^x $$ The time-evolution equation becomes an ordinary differential equation in $u$ for $P^s(u,z)$: $$ \frac{z}{N} (u^2 - 1) \frac{dP^s}{du} + (1 - uz) P^s = u^s $$
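One can verify this differential equation order by order: build $P^s(u,z)$ as a power series in $z$ truncated at order $T$ from the time-evolution recurrence, and check that every coefficient of $z^t$, $t \le T$, in the left-hand side vanishes. A sketch using sympy, with a small arbitrary instance ($N$, $s$, $T$ chosen for speed):

```python
import sympy as sp

N, s, T = 4, 2, 8                  # small instance; T = truncation order in z
u, z = sp.symbols('u z')

# p[t][x] computed exactly from the time-evolution recurrence
# with l_x = x/N, r_x = 1 - x/N
p = [[sp.Integer(0)] * (N + 1) for _ in range(T + 1)]
p[0][s] = sp.Integer(1)
for t in range(1, T + 1):
    for x in range(N + 1):
        if x > 0:
            p[t][x] += sp.Rational(N - (x - 1), N) * p[t - 1][x - 1]  # r_{x-1}
        if x < N:
            p[t][x] += sp.Rational(x + 1, N) * p[t - 1][x + 1]        # l_{x+1}

P = sum(p[t][x] * z**t * u**x for t in range(T + 1) for x in range(N + 1))
lhs = sp.expand(z / N * (u**2 - 1) * sp.diff(P, u) + (1 - u * z) * P - u**s)

# every coefficient of z^t for t <= T must vanish (higher orders are truncation)
ok = all(sp.expand(lhs.coeff(z, t)) == 0 for t in range(T + 1))
print(ok)
```

The subtraction of $u^s$ checks the inhomogeneous right-hand side as well: the $z^0$ coefficient of $P$ is exactly $u^s$ from the initial condition.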
This is a linear first-order inhomogeneous differential equation, so the solution is straightforward. The homogeneous part has solution $$ \begin{align} H(u, z) &= \exp\left( \frac{N}{z} \int \frac{1 - uz}{1 - u^2} du \right) \\ &= C_s(z) (1 - u)^{-\alpha(z)} (1 + u)^{\beta(z)} \end{align} $$ where $\alpha(z) = \frac{N}{2z}(1-z)$ and $\beta(z) = \frac{N}{2z}(1+z)$. $C_s(z)$ is an integration constant that can depend on $z$ since the integration is with respect to $u$. The integral is easy (just decompose into partial fractions).
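A quick symbolic spot-check that this $H$ solves the homogeneous equation; the numeric values of $N$, $u$, $z$ are arbitrary sample points (the identity holds exactly, so the residual should vanish up to floating-point rounding):

```python
import sympy as sp

u, z = sp.symbols('u z')
N = 5                                   # arbitrary small value for the check
alpha = N * (1 - z) / (2 * z)
beta = N * (1 + z) / (2 * z)
H = (1 - u)**(-alpha) * (1 + u)**beta

# homogeneous equation: (z/N)(u^2 - 1) dH/du + (1 - u z) H = 0
lhs = z / N * (u**2 - 1) * sp.diff(H, u) + (1 - u * z) * H
for uv, zv in [(sp.Rational(1, 3), sp.Rational(1, 2)),
               (sp.Rational(1, 7), sp.Rational(3, 4))]:
    assert abs(sp.N(lhs.subs({u: uv, z: zv}))) < 1e-9
```

The cancellation works because $(u^2-1)\left(\frac{\alpha}{1-u} + \frac{\beta}{1+u}\right) = -\alpha(1+u) - \beta(1-u)$, which with the given $\alpha, \beta$ equals $-\frac{N}{z}(1 - uz)$.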
The inhomogeneous part can be written as an integral in the usual way (variation of parameters). The closed-form solution is an Appell $F_1$ function. (The full expression is in my office notes.) But in many applications, including mine but possibly not yours, there is one really important thing to observe: the inhomogeneous solution must be $O(u^{s+1})$. You can see this by writing the inhomogeneous part as $u^{s+1} f(u, z)$ and showing that $f(0, z)$ is finite.
This has a hugely important corollary: For $x \le s$, the homogeneous part is the entire solution. I'll use this below. (Everything below is based on the assumption that $x \le s$.) Moreover, if you need to consider $x > s$, then you can note that the original problem has a symmetry where the substitutions $x \to N - x$ and $s \to N - s$ do not affect the probabilities. So you can essentially figure out quite a lot about the probabilities without ever calculating the inhomogeneous solution.
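The claimed symmetry is easy to confirm numerically: evolving the exact distribution started from $s$ and from $N-s$, the probabilities satisfy $p_{x,t}^s = p_{N-x,t}^{N-s}$. A self-contained check (again with the corrective coefficients $\ell_x = x/N$, $r_x = 1 - x/N$, and arbitrary small parameters):

```python
from fractions import Fraction

def distribution(N, s, T):
    """Exact distribution of the walk at time T, started at position s."""
    p = [Fraction(0)] * (N + 1)
    p[s] = Fraction(1)
    for _ in range(T):
        q = [Fraction(0)] * (N + 1)
        for x in range(N + 1):
            if x > 0:
                q[x] += Fraction(N - (x - 1), N) * p[x - 1]   # r_{x-1}
            if x < N:
                q[x] += Fraction(x + 1, N) * p[x + 1]         # l_{x+1}
        p = q
    return p

N, s, T = 7, 2, 9
a = distribution(N, s, T)
b = distribution(N, N - s, T)
assert all(a[x] == b[N - x] for x in range(N + 1))   # x -> N-x, s -> N-s symmetry
```

The symmetry holds because reflecting the lattice swaps the roles of $\ell_x$ and $r_x$: $\ell_{N-x} = r_x$.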
On the other hand, there is still $C_s(z)$. There isn't an obvious boundary condition to determine $C_s(z)$ without the full inhomogeneous solution. (For example, we know that $P^s(1,z)=\frac{1}{1-z}$ because the probabilities sum to one at each $t$, but this involves terms where $x>s$.) It turns out that $C_s(z)$ is closely related to the inhomogeneous solution. I believe that the relationship is something like $C_s(z)=-I_s(1,z)$, where $I_s(u,z)$ is the inhomogeneous solution. This is based on my memory, but I think that there is a simple argument to show this based on requiring $I_s(u,z)$ to be well-behaved near $u=1$. This means that you can derive a lot of information about the ratios of probabilities at different values of $x$, but you can't get the absolute probabilities unless you solve the inhomogeneous problem.
With all this in mind, we can start inverting the generating functions. The series for $H(u,z)$ in $u$ is $$ \begin{align} H(u,z) &= C_s(z) (1 - u)^{-\alpha(z)} (1 + u)^{\beta(z)} \\ &= C_s(z) \sum_{j,k} (-1)^j \binom{-\alpha}{j} \binom{\beta}{k} u^{j+k} \\ &= C_s(z) \sum_x u^x \sum_j (-1)^j \binom{-\alpha}{j} \binom{\beta}{x-j} \\ &= C_s(z) \sum_x u^x \binom{\beta}{x} {_2}F_1\left(\begin{matrix} \alpha, -x \\ \beta - x + 1 \end{matrix}; -1 \right) \end{align} $$ where ${_2}F_1$ is the Gauss hypergeometric function. So for $x \le s$ we have $$ P^s_x(z) = C_s(z) \binom{\beta}{x} {_2}F_1\left(\begin{matrix} \alpha, -x \\ \beta - x + 1 \end{matrix}; -1 \right) $$
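The hypergeometric identity in the last step can be checked numerically with mpmath for generic non-integer $\alpha$, $\beta$ (the values below are arbitrary); since the second parameter $-x$ is a negative integer, the ${_2}F_1$ series terminates:

```python
import mpmath as mp

def coeff_direct(a, b, x):
    # coefficient of u^x in (1-u)^(-a) (1+u)^b, summed term by term
    return mp.fsum((-1)**j * mp.binomial(-a, j) * mp.binomial(b, x - j)
                   for j in range(x + 1))

def coeff_2f1(a, b, x):
    # closed form: binom(b, x) * 2F1(a, -x; b - x + 1; -1)
    return mp.binomial(b, x) * mp.hyp2f1(a, -x, b - x + 1, -1)

alpha, beta = mp.mpf('1.25'), mp.mpf('3.5')       # arbitrary test values
for x in range(6):
    assert mp.almosteq(coeff_direct(alpha, beta, x), coeff_2f1(alpha, beta, x))
```

For instance, with these values and $x=1$ both sides give $\binom{\beta}{1} + \alpha = 4.75$.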
This may be the end of the results I have that are useful to you; I'm not really sure. I'm happy to keep discussing here once you have thought about it. I am mainly interested in deriving simple numerical approximations for the first-passage time to $x=s-1$, which is related to the ratio $P^s_s(z)/P^s_{s-1}(z)$. If that is useful I can probably provide more information. But you may need absolute probabilities near $x=0$ or $x=N$, which is something I haven't devoted much time to thinking about.
One more piece of advice. The hypergeometric representation above is hard to invert, but it is very versatile and you can extract a lot of information from it. (There are a bunch of equivalent hypergeometric representations using Euler and Pfaff transforms, and of course there are also integral representations.) And there are other approaches to the same problem, like the orthogonal polynomial representation of Karlin and McGregor mentioned above, that are closely related.
Disclaimer: All of the above is based on very messy notes where I had re-derived some of the main expressions at home, and there may be typos, so some of it may need fixing. Please let me know if you find errors.