Let $p,\mu_c \in [0,1]$. Let $(X_i)_{i \in \mathbb{N}}$ be a sequence of i.i.d. random variables, each following a Bernoulli distribution $\mathcal{B}(p)$. For $n \in \mathbb{N}$, let $ \mu_n = \frac{1}{n+1} \sum_{i=0}^n X_i $ and let $A_n$ be the event "$\mu_n \leq \mu_c$". I am looking for a way to compute $$ \mathbb{P} \left(\bigcap_{n \in \mathbb{N}} A_n \right), $$ that is, the probability that the mean of the sequence never exceeds the critical value $\mu_c$.
You can assume $\mu_c > p$, since this probability is $0$ when $\mu_c \leq p$.
I first wanted to compute it using a martingale, but the optional stopping theorem does not seem to help. I don't know whether computing it "by hand" (combinatorially) is possible, but it would be quite unsatisfying anyway. It may also be somehow linked to this post, but I don't understand Brownian motion well enough to use it correctly.
Thank you for any advice!
Edit: When $p = \frac{1}{2}$ and $\mu_c = \frac{3}{4}$, simulations give $$\mathbb{P} \left(\bigcap_{n \leq 1000} A_n \right) \approx 0.456.$$ I think that this is a very good approximation of $\mathbb{P} \left(\bigcap_{n \in \mathbb{N}} A_n \right)$ (going up to 2000 does not affect the result). This fact may be used to check any suggested answer. I find it surprisingly high, since $\mathbb{P}(A_0)$ alone is equal to $\frac{1}{2}$.
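For reference, such a simulation can be set up as follows (a sketch; the trial count, the seed, and the early-exit rule are arbitrary choices):

```python
import random

def simulate(p, mu_c, n_max=1000, trials=5000, seed=1):
    """Estimate P(mu_n <= mu_c for all n < n_max) by Monte Carlo."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        s = 0  # running sum X_0 + ... + X_{k-1}
        for k in range(1, n_max + 1):
            if rng.random() < p:
                s += 1
            if s > mu_c * k:  # the mean of the first k draws exceeds mu_c
                break
        else:
            survived += 1
    return survived / trials

print(simulate(0.5, 0.75))
```

Abandoning a trial as soon as the running mean exceeds $\mu_c$ keeps this fast, since most failing trials fail early.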
Let $f_{p, \mu_c}(s, n)$ be the probability of never exceeding $\mu_c$ given a sum of $s$ after $n$ iterations. The answer would then be $f_{p, \mu_c}(0, 0)$. The average after $n$ iterations is $\frac{s}{n}$, so clearly $$f_{p, \mu_c}(s, n)=0$$ if $s/n > \mu_c$, i.e. if $s-\mu_c n > 0$.
If this condition is not satisfied, then $$f_{p, \mu_c}(s, n) = pf_{p, \mu_c}(s+1, n+1) + (1-p)f_{p, \mu_c}(s, n+1)$$
since there is probability $p$ that $s$ will increase by $1$ and probability $1-p$ that $s$ will stay the same.
This can be rewritten with a single argument instead of two. Let $c = s-\mu_c n$. Then $$P_{p, \mu_c}(c) = pP_{p, \mu_c}(c+1-\mu_c) + (1-p)P_{p, \mu_c}(c-\mu_c)$$
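A sketch of one way to evaluate this numerically: propagate the probabilities forward over $s$, discarding paths whose mean exceeds $\mu_c$, and stop once the surviving mass is stable. The window-based stopping rule is an ad-hoc choice (it is needed because, e.g. for $\mu_c = 3/4$, mass is only lost every fourth step):

```python
def never_exceed_prob(p, mu_c, eps=1e-8, window=64):
    """Approximate P(the running mean never exceeds mu_c); assumes mu_c > p."""
    # dist[s] = P(sum of the first n draws is s and no prefix mean exceeded mu_c)
    dist = {0: 1.0}
    totals = [1.0]  # surviving probability mass after each step
    n = 0
    while True:
        n += 1
        new = {}
        for s, q in dist.items():
            for step, w in ((1, p), (0, 1.0 - p)):
                if s + step <= mu_c * n:  # keep only paths whose mean stays <= mu_c
                    new[s + step] = new.get(s + step, 0.0) + q * w
        dist = new
        totals.append(sum(dist.values()))
        # stop once the mass lost over the last `window` steps is below eps
        if n >= window and totals[-1 - window] - totals[-1] < eps:
            return totals[-1]
```

Since $\mu_c > p$, the per-step losses decay geometrically, so the loop terminates quickly; with $p = 1/2$ and $\mu_c = 3/4$ this returns approximately $0.456$.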
This Python code implements the recursion in order to calculate the probability. You can decrease `eps` to make it more accurate. With `eps = 1e-8`, $P_{1/2, 3/4} = 0.4565$ is obtained.

Edit: I just thought of another approach. Let $L$ be a sequence of $0$s and $1$s of length $n$. Let $$m(L) = \max_{k=1...n} \frac{1}{k} \sum_{i=1}^k L_i$$ be the maximum average value of the first $k$ elements. For example, $m(\{0, 1, 0, 0, 1\}) = \frac{1}{2}$, attained at $k = 2$.
The answer is then $$\lim_{n \to \infty} \sum_{L:\, m(L) \le \mu_c} p^{\sum L} (1-p)^{n-\sum L}$$ where the sum is over all $L$ of length $n$ such that $m(L) \le \mu_c$, and $\sum L$ is the number of $1$s in $L$.
When $p = 1/2$, this is easier to compute, as it can be simplified to $$\lim_{n \to \infty} (1/2)^{n} \underbrace{\sum_{L:\, m(L) \le \mu_c}1}_{S_n}$$ with the sum again over all $L$ of length $n$ such that $m(L) \le \mu_c$.
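For small $n$, the counts $S_n$ can be brute-forced over all $2^n$ sequences, which is handy for spotting patterns (a throwaway enumeration, not an efficient method):

```python
from itertools import product

def S(n, mu_c=0.75):
    """Count 0/1 sequences of length n whose prefix means all stay <= mu_c."""
    count = 0
    for L in product((0, 1), repeat=n):
        s, ok = 0, True
        for k, x in enumerate(L, start=1):
            s += x
            if s > mu_c * k:  # a prefix mean exceeds mu_c
                ok = False
                break
        if ok:
            count += 1
    return count

print([S(n) for n in range(1, 10)])  # [1, 2, 4, 8, 15, 30, 60, 120, 236]
```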
With $\mu_c = 3/4$, as in the example in the question, it seems that $S_n = 2S_{n-1}$ unless $n \equiv 1 \pmod 4$; in that case, writing $n = 4m+1$, $$S_{4m+1} = 2S_{4m} - \frac{\binom{4m}{m}}{3m+1}.$$ Using this, I get that $$P_{1/2, 3/4} = 1 - \frac{1}{2} \, {}_3F_2\left(\frac{1}{4}, \frac{1}{2}, \frac{3}{4}; \frac{2}{3}, \frac{4}{3}; \frac{16}{27}\right) \approx 0.456$$ The hypergeometric function is equal to A086253. However, I am not $100\%$ sure about the recurrence for $S_n$, so this is just an educated guess for now. In a similar way, I guess that $$P_{1/2, 2/3} = \frac{-1 + \sqrt{5}}{4}$$
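The hypergeometric value can at least be checked numerically: since the argument $16/27$ is less than $1$, the ${}_3F_2$ series converges and can be summed term by term (a quick helper to avoid depending on `mpmath` or `scipy`):

```python
def hyp3f2(a1, a2, a3, b1, b2, z, terms=200):
    """Evaluate 3F2(a1,a2,a3; b1,b2; z) by direct series summation (|z| < 1)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        # ratio of consecutive series terms: Pochhammer factors times z/(n+1)
        term *= z * (a1 + n) * (a2 + n) * (a3 + n) / ((b1 + n) * (b2 + n) * (n + 1))
    return total

value = 1 - 0.5 * hyp3f2(0.25, 0.5, 0.75, 2/3, 4/3, 16/27)
print(value)  # ≈ 0.456
```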
Edit 2: Let $M$ be the sorted list of the locations of the $1$s in $L$, and let $k$ be its length. Let $$h(M) = \max_{i=1...k} \frac{i}{M_i}$$ Then this is equivalent to $$\lim_{n \to \infty} \sum_{k=0}^{n} p^{k} (1-p)^{n-k} \sum_{M \in M_{k, n}:\, h(M) \le \mu_c} 1$$ where $M_{k, n}$ is the set of all $\binom{n}{k}$ combinations from $1...n$ of length $k$. For example, $M_{2, 3} = \{\{1, 2 \}, \{1, 3\}, \{2, 3\}\}$. Finding the distribution of $h(M)$ over all $M$ in $M_{k, n}$ would therefore suffice for the answer. The Python code for this is included.