The problem statement is:
2 players roll a 20-sided die. What is the probability that player A rolls a larger number if player B is allowed to re-roll a single time?
The question is a bit ambiguous, but I am going to operate on the following three assumptions:
(a) Player B doesn't know what player A rolls when deciding whether or not to reroll.
(b) If player B re-rolls, his first roll is discarded. In other words, when comparing player A's roll to player B's roll, only the last roll of player B is taken into account.
(c) Player B doesn't want player A to win, so he will play optimally.
I solved this problem, but my solution does not match the given answer, which is $\frac{1}{4}$. Below is my solution process.
I know the following:
(1) The probability that A rolls a larger number if player B isn't allowed to re-roll: there are $400$ equally likely outcomes, and the probability that they roll the same number is $\frac{20}{400}$. By symmetry, the probability that player A rolls a larger number is $\frac{190}{400} = \frac{19}{40}$.
(2) How does player B decide whether he should toss again? It's obvious to me that he should toss again if the first toss is $\leq 10$, and keep it if it is $> 10$. So with probability $0.5$ he keeps his first toss, with expected value $15.5$, and with probability $0.5$ he tosses again, with expected value $10.5$.
His expected outcome toss when considering that he can re-roll is thus $$ E[B] = 0.5 \cdot 15.5 + 0.5 \cdot 10.5 = 13 $$
This is $2.5$ higher than in the case where he isn't allowed to re-roll, which seems reasonable.
I found the threshold of $b = 10$ (where $b$ is the largest value on the first toss at which player B decides to do a second toss) by intuition, but we could have formulated an optimization problem $$ \arg \max_b \frac{20-b}{20} \frac{20 + b + 1}{2} + \frac{b}{20} 10.5 $$
and solved for $b$ that maximizes $E[B]$.
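As a sanity check, this optimization can be brute-forced exactly with rational arithmetic. A small Python sketch (the helper name `expected_final` is mine, not from the problem):

```python
from fractions import Fraction

def expected_final(b):
    # E[B's final roll] when B re-rolls iff the first toss is <= b:
    # keep a first toss in {b+1, ..., 20} (mean (b + 1 + 20) / 2),
    # otherwise take a fresh roll with mean 10.5 = 21/2.
    keep = Fraction(20 - b, 20) * Fraction(20 + b + 1, 2)
    reroll = Fraction(b, 20) * Fraction(21, 2)
    return keep + reroll

best_b = max(range(21), key=expected_final)
print(best_b, expected_final(best_b))  # 10 13
```

This confirms the intuition: the expected value is maximized at $b = 10$, where it equals $13$.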
Then I define the disjoint events of $B$ deciding to retoss (denote as $RR$) and $B$ deciding not to retoss (denote as $NR$). Then we can write $$ P(A > B) = P(A > B | RR) P(RR) + P(A > B | NR) P(NR) $$
Previously we saw that $P(RR) = P(NR) = 0.5$.
For $P(A > B | RR)$, where player $B$ retosses, I believe the probability that I computed in (1) is the same as the conditional probability $P(A > B | RR)$, i.e., $P(A > B | RR) = \frac{19}{40}$. I think this is true because the die tosses are IID and memoryless. So that when $B$ retosses, we can treat this case as simply both $A$ and $B$ tossing a single time.
For $P(A > B | NR)$: conditioning on $NR$, i.e., $B$ stopping after the first toss, means that $B$ rolled an $11, 12, \ldots, 20$. There are $20 \cdot 10$ possible outcomes for $(A,B)$ conditioned on $NR$, and $9 + 8 + \ldots + 1 = 45$ of these outcomes are such that $A > B$. So $P(A > B | NR) = \frac{45}{200} = \frac{9}{40}$.
So $P(A > B) = \frac{19}{80} + \frac{9}{80} = \frac{28}{80} = \frac{7}{20}$ for the case where $B$ is allowed to re-toss. This is only $\frac{1}{8}$ less than the case where $B$ isn't allowed to re-toss. This seems reasonable.
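One way to double-check this final number is to enumerate every outcome under the reroll-on-$\leq 10$ strategy (a quick Python sketch, exact rather than Monte Carlo):

```python
from fractions import Fraction
from itertools import product

faces = range(1, 21)
p_a_wins = Fraction(0)
for a, b1 in product(faces, faces):
    if b1 > 10:
        # B keeps the first toss
        p_a_wins += Fraction(int(a > b1), 20 * 20)
    else:
        # B re-rolls; only the second toss counts
        for b2 in faces:
            p_a_wins += Fraction(int(a > b2), 20 * 20 * 20)
print(p_a_wins)  # 7/20
```

The enumeration agrees with the decomposition above, so the $\frac{7}{20}$ answer isn't an arithmetic slip.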
I don't think I made a mistake in my solution, but it doesn't match $\frac{1}{4}$.
What you have done looks right (I've checked the calculations and get the same answer). In particular, if B doesn't know the result of A's roll, it is correct to reroll on 10 or below, and keep the original roll on 11 or above: if B keeps a roll of $r$, the chance of B winning (i.e., of A failing to roll strictly higher) is $r/20$, whereas if B rerolls, the chance of winning is $21/40$.
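The threshold claim is easy to verify numerically. A small sketch (ties count as A failing to roll larger, as in the win probabilities above):

```python
from fractions import Fraction

# Chance that A does not roll strictly higher than a fresh re-roll:
# average of r/20 over the 20 equally likely re-roll values r
reroll_win = sum(Fraction(r, 20) for r in range(1, 21)) / 20
print(reroll_win)  # 21/40

for r in range(1, 21):
    keep_win = Fraction(r, 20)  # chance A does not beat a kept roll of r
    # Keeping is strictly better than re-rolling exactly when r >= 11
    assert (keep_win > reroll_win) == (r >= 11)
```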
The value of $1/4$ cannot possibly be correct. Even if we give B every possible advantage, by letting them pick the higher of two rolls rather than having to choose before seeing the second (and assuming that A has to get strictly higher to win), A wins more than $1/4$ of the time. This is because if all three rolls are different, A wins with probability $1/3$, and all three rolls are different with probability $\frac{19}{20}\times\frac{18}{20}$, so A's chance of winning must be greater than $\frac{19}{20}\times\frac{18}{20}\times\frac13=0.285$. (In fact the exact value with these assumptions would be $\frac{247}{800}$.)
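The exact value of $\frac{247}{800}$ for the best-case-for-B variant can be confirmed the same way (sketch; B takes the maximum of two rolls, and A must be strictly higher):

```python
from fractions import Fraction

# P(A > max(B1, B2)) = sum over a of (1/20) * P(B1 < a) * P(B2 < a)
#                    = sum over a of (1/20) * ((a - 1) / 20)^2
p = sum(Fraction(a - 1, 20) ** 2 for a in range(1, 21)) / 20
print(p)  # 247/800
```

Since $\frac{247}{800} \approx 0.309 > \frac{1}{4}$ even with B maximally advantaged, the claimed answer of $\frac{1}{4}$ is indeed impossible.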