Assume we're doing multiple hypothesis testing. That is, we have $m$ possible ground-truth distributions $P_1, \dots, P_m$ (assume discrete, for convenience), and we draw a dataset $X$ of size $n$ from one of them (say $P_V$, where $V$ is a latent variable with a uniform prior), and we have a tester $\Psi$ such that $\mathbb{P}[\Psi(X) = v \mid V = v] \geq 1-\beta$ for every $v$. By the law of total probability, this implies that $\mathbb{P}[\Psi(X) = v] \geq \frac{1 - \beta}{m}$ for every $v$.
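Spelling out the law-of-total-probability step, since the prior on $V$ is uniform:

$$
\mathbb{P}[\Psi(X) = v] \;=\; \sum_{u=1}^{m} \mathbb{P}[V = u]\,\mathbb{P}[\Psi(X) = v \mid V = u] \;\geq\; \frac{1}{m}\,\mathbb{P}[\Psi(X) = v \mid V = v] \;\geq\; \frac{1-\beta}{m},
$$

where the first inequality drops all terms $u \neq v$ (they are nonnegative) and the second applies the tester's guarantee.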
Now, let $x_{< i}$ be any realizable sequence for the first $i - 1$ samples (that is, there exists at least one $P_v$ that generates $x_{< i}$ with positive probability). What can we say about $\mathbb{P}[\Psi(X) = v \mid X_{< i} = x_{< i}]$, assuming that $v$ lies in the support of $\Psi(X) \mid X_{< i} = x_{< i}$ (i.e., this output occurs with positive probability given the prefix)? Is there a non-trivial lower bound for this quantity, similar to the one we have without the conditioning?