I am trying to understand the following implication:
Suppose Eve is able to correctly guess, with some small probability, the whole output string $A=(A_1,A_2,\ldots , A_n)$. Using Bayes' rule one can then derive the following: $$ \begin{aligned} &\operatorname{Pr}\left(\text { Eve guesses } A_{1} \ldots A_{n} \text { correctly }\right) \geq \varepsilon \\ &\Rightarrow ~\exists i_{0} ~ s.t. ~ \operatorname{Pr}\left(\text { Eve guesses } A_{i_{0}} \mid\right. \text { guesses for } \\ &\left.A_{1}, \ldots, A_{i_{0}-1} \text { were correct }\right) \geq 1-\delta \end{aligned} $$ for some small $\delta>0$.
I do not see how Bayes' rule is used to reach this conclusion, i.e., if Eve can guess the whole string with small probability, then she can guess a particular element with high probability, conditioned on the guesses for the previous elements being correct.
Can you please help me understand the above argument?
It appeared in this paper, or see [1] (p. 6).
I don't have access to the full paper; however, I have an idea of what might be meant. I will assume that the statement is meant for large $n$.
By the definition of conditional probability, $$ P(\text{Eve guesses } A_{i} \mid \text{ guesses for } A_1,\dots,A_{i-1} \text{ were correct} ) = \frac{P(\text{Eve guesses } A_1,\dots,A_i \text{ correctly})}{P(\text{Eve guesses } A_1,\dots,A_{i-1} \text{ correctly})}. $$ Now let $B_i \triangleq P(\text{Eve guesses } A_1,\dots,A_i \text{ correctly})$, and assume, for contradiction, that $P(\text{Eve guesses } A_{i} \mid \text{ guesses for } A_1,\dots,A_{i-1} \text{ were correct} ) <1-\delta$ for all $i$. Then, $$\frac{B_i}{B_{i-1}} <1-\delta \tag{1}\label{1}$$ for all $i$.

By definition, $P( \text{Eve guesses } A_1,\dots,A_n \text{ correctly} ) = B_n$. Applying \eqref{1} repeatedly, we deduce that $B_n<(1-\delta)B_{n-1}<(1-\delta)^2B_{n-2}<\dots<(1-\delta)^{n-1}B_1$, and since $B_1 \leq 1$, $$ B_n<(1-\delta)^{n-1}. $$

On the other hand, we know that $B_n \geq \epsilon$. Therefore, the contrary assumption leads to a contradiction whenever $(1-\delta)^{n-1}<\epsilon$, or equivalently, $\delta>1-\epsilon^{1/(n-1)}$. Thus the statement is true for all $\delta > 1-\epsilon^{1/(n-1)}$, where the right-hand side is close to $0$ whenever $n$ is large.
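If it helps, here is a small numerical sanity check of the argument (a sketch in Python; the conditional probabilities `cond` are made-up values, not taken from the paper). It verifies both directions: if every conditional guessing probability is below $1-\delta$, the overall success probability $B_n$ falls below $(1-\delta)^{n-1}$; and conversely, whenever $B_n \geq \epsilon$, some conditional probability must be at least $\epsilon^{1/(n-1)}$.

```python
import math
import random

random.seed(0)

n = 20
delta = 0.05

# Hypothetical conditional probabilities: Eve's chance of guessing A_i
# given that her guesses for A_1, ..., A_{i-1} were all correct.
# Each is strictly below 1 - delta (the "contrary assumption").
cond = [random.uniform(0.5, 1 - delta - 1e-9) for _ in range(n)]

# By the chain rule, B_i is the product of the first i conditionals.
B = [math.prod(cond[:i + 1]) for i in range(n)]

# Telescoping bound from (1): B_n < (1 - delta)^(n-1).
assert B[-1] < (1 - delta) ** (n - 1)

# Contrapositive direction: take eps to be the achieved success
# probability B_n; then some conditional probability must reach
# eps^(1/(n-1)), i.e. there exists an i_0 with a "large" conditional.
eps = B[-1]
threshold = eps ** (1 / (n - 1))
assert max(cond) >= threshold
```

The second assertion is exactly the claimed implication read contrapositively: if all $n$ conditionals were below $\epsilon^{1/(n-1)}$, their product would be below $\epsilon^{n/(n-1)} < \epsilon$, contradicting $B_n \geq \epsilon$.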