I am given two randomized algorithms $f$ and $g$ that take as input a positive integer $m$ and produce a random non-negative integer. The algorithms are identical except that $g$ receives a small "advantage" with very low probability.
I need to prove or disprove that as $m$ grows, the two algorithms tend to behave the same. More concretely, if $h(m):=g(m)-f(m)$, to show or refute that
$$\lim_{m\to\infty} E[h(m)] = 0$$
But I'm stuck because the algorithm is a complex stochastic process.
What methodologies can be used to formally prove that the hypothesis holds?
Attachments:
This is the algorithm. Arrays are 0-indexed, randint(a, b) is inclusive on both ends, and [0]*m denotes an array of m zeroes.
from random import randint

def algorithm(m, advantage):
    h = [0] * m
    i = 1
    while i < m:
        # With probability 1/m, the "advantage" skips the -1 penalty.
        if advantage and randint(0, m - 1) == 0:
            prev = h[i - 1]
        else:
            prev = h[i - 1] - 1
        j = randint(0, i - 1)
        h[i] = max(prev, h[j] + 1)
        i += 1
    return h[m - 1]

def f(m): return algorithm(m, advantage=False)
def g(m): return algorithm(m, advantage=True)
def h(m): return g(m) - f(m)
And the plots of some Monte Carlo simulations, which suggest that the hypothesis might hold:


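For reference, a minimal self-contained sketch of such a simulation (the trial count and the use of `random.randint` are my assumptions, not part of the original post; the algorithm is repeated so the snippet runs on its own):

```python
from random import randint

def algorithm(m, advantage):
    """The algorithm from the question: a random recurrence on an array of m zeroes."""
    h = [0] * m
    for i in range(1, m):
        if advantage and randint(0, m - 1) == 0:
            prev = h[i - 1]          # rare advantage: skip the -1 penalty
        else:
            prev = h[i - 1] - 1
        j = randint(0, i - 1)
        h[i] = max(prev, h[j] + 1)
    return h[m - 1]

def estimate_E_h(m, trials=2000):
    """Monte Carlo estimate of E[h(m)] = E[g(m)] - E[f(m)]."""
    total = sum(algorithm(m, True) - algorithm(m, False) for _ in range(trials))
    return total / trials
```

Plotting `estimate_E_h(m)` for growing `m` should show the estimate decaying toward 0, consistent with the plots.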
Each time the advantage makes a difference, it increases the value of some element by $1$. Let us denote by $e_i$ the expected total number of such increases accumulated when calculating $h[i]$. Clearly $e_0 = 0$, and we can upper bound $$ e_i \leq \frac{1}{m} + \frac{e_0 + \cdots + e_{i-1}}{i}. $$ The solution to this recurrence is $e_i = O\bigl(\frac{\log i}{m}\bigr)$ (see below). In particular, $e_{m-1} = O\bigl(\frac{\log m}{m}\bigr)$ is an upper bound on the expected advantage in value. Under the natural coupling that runs both algorithms on the same random choices, the advantage can only increase values, so $0 \leq E[h(m)] \leq e_{m-1} \to 0$, which supports the hypothesis.
Let us now solve the recurrence $$ x_i = 1 + \frac{x_0 + \cdots + x_{i-1}}{i}, $$ with initial value $x_0 = 0$.
Roughly speaking, $x_i$ counts the expected number of steps it takes the index to get back to $0$; since each jump to a uniformly random smaller index roughly halves it, the number of steps behaves like $\log i$.
We can compute $x_i$ exactly in various ways. Let $y_i = ix_i$. Then $$ y_{i+1} - y_i = (i + 1) + (x_0 + \cdots + x_i) - i - (x_0 + \cdots + x_{i-1}) = 1 + x_i = 1 + \frac{y_i}{i}, $$ hence $$ y_{i+1} = 1 + \frac{i+1}{i} y_i. $$ Unrolling this recurrence gives $$ y_i = 1 + \frac{i}{i-1} y_{i-1} = 1 + \frac{i}{i-1} + \frac{i}{i-2} y_{i-2} = \cdots = 1 + \frac{i}{i-1} + \frac{i}{i-2} + \cdots + \frac{i}{2} + \frac{i}{1}\, y_1. $$ Since $y_1 = x_1 = 1$, we deduce that $$ x_i = \frac{y_i}{i} = \frac{1}{i} + \cdots + \frac{1}{1} = H_i \approx \log i. $$
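The closed form $x_i = H_i$ is easy to verify with exact rational arithmetic (a small sketch I am adding, using Python's `fractions` module):

```python
from fractions import Fraction

def x_sequence(n):
    """x_0 = 0 and x_i = 1 + (x_0 + ... + x_{i-1}) / i, computed exactly."""
    x = [Fraction(0)]
    s = Fraction(0)  # running sum x_0 + ... + x_{i-1}
    for i in range(1, n + 1):
        x.append(1 + s / i)
        s += x[-1]
    return x

# x_i equals the harmonic number H_i = 1/1 + 1/2 + ... + 1/i, exactly
harmonic = Fraction(0)
for i, xi in enumerate(x_sequence(20)):
    if i > 0:
        harmonic += Fraction(1, i)
    assert xi == harmonic
```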