Note: this question has been rephrased so that the problem isn't affected by boundary issues, as suggested by mathworker21's comment. See below for the original question.
Let $k \in [0, 1]$ be a real number.
The game starts with a random choice of a sequence $X_1, X_2, \dotsc$ of independent, uniformly distributed real numbers in $[0, 1]$, which are not revealed to you. At any point in the game, you can either stop or show the next number of the sequence. The goal is to stop exactly before the sum of the shown numbers exceeds $k$.
For instance, at the beginning you can do either of the following:
- Stop without showing any number. If $X_1 \le k$, you lose. If $X_1 > k$, you win.
- Show $X_1$. If $X_1 > k$, you lose. If $X_1 \le k$, the game goes on, and you can do either of the following:
  - Stop. If $X_1 + X_2 \le k$, you lose. If $X_1 + X_2 > k$, you win.
  - Show $X_2$. If $X_1 + X_2 > k$, you lose. If $X_1 + X_2 \le k$, the game goes on.
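For concreteness, the mechanics above can be sketched in code (my own sketch, assuming the $X_i$ are independent and uniform on $[0, 1]$; `should_stop` is a hypothetical stopping rule supplied by the player):

```python
import random

def play(k, should_stop, rng=random.Random(0)):
    """Play one game with threshold k.

    `should_stop(remaining)` decides whether to stop, given the remaining
    budget k - (sum of numbers shown so far). Returns True on a win.
    """
    remaining = k
    while not should_stop(remaining):
        x = rng.random()       # show the next number
        if x > remaining:      # the sum of shown numbers now exceeds k
            return False       # lose: we showed one number too many
        remaining -= x
    # we stopped: we win exactly if the next number would have exceeded k
    return rng.random() > remaining
```

For instance, `play(k, lambda r: True)` is the "stop right away" strategy, which wins with probability $1 - k$.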
For a given $k \in [0, 1]$, any strategy wins with a certain probability $p$ (over the random choice of the sequence). For example, if $k = 0$, the best strategy wins with probability $p = 1$: you win by stopping right away, since $X_1 > 0$ almost surely.
The question is:
Which $k$ minimizes the probability $p$ of winning?
In the original statement of the problem, there were only a finite number $n$ of random numbers.
For example, for $n = 2$, showing $X_2$ ends the game: you win immediately if $X_1 + X_2 \le k$, and lose otherwise. In that case, the probability of winning appears to be minimized at $k = \frac {\sqrt {10}} 2 - 1$, which I found by case analysis.
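As a sanity check (my own, not part of the original problem): a case analysis suggests that for $n = 2$ the best play after showing $X_1$ is to show $X_2$ exactly when the remaining budget $k - X_1$ exceeds $\frac 1 2$, which for $\frac 1 2 \le k \le 1$ gives winning probability $\max\bigl(1 - k, \frac {k^2} 2 + \frac 1 4\bigr)$. Minimizing that expression on a grid recovers the value above:

```python
import math

# Candidate closed form for n = 2 under best play (my own case analysis,
# assuming: after showing X_1, show X_2 iff k - X_1 > 1/2); valid for k >= 1/2:
#   stop immediately -> 1 - k;  show X_1 -> k^2/2 + 1/4
def p2(k):
    return max(1 - k, k * k / 2 + 0.25)

# Grid-search the minimizer over [1/2, 1] and compare with sqrt(10)/2 - 1.
ks = [0.5 + i * 1e-5 for i in range(50_001)]
k_min = min(ks, key=p2)
print(k_min, math.sqrt(10) / 2 - 1)
```

The minimum sits where the two branches cross, i.e. at the root of $\frac {k^2} 2 + k - \frac 3 4 = 0$ in $[0, 1]$, which is $\frac {\sqrt {10}} 2 - 1 \approx 0.5811$.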
For $n \ge 3$, it seems unlikely that there is a simple expression of $k$ in terms of $n$.
The following is my own attempt at solving the problem after mathworker21's suggestions.
Let $P(k)$ be the probability of winning the game with threshold $k$ using the best strategy.
Notice that if $0 \le k \le \frac 1 2$, stopping right away wins with probability $1 - k \ge \frac 1 2$, whereas showing $X_1$ can win with probability at most $\Pr(X_1 \le k) = k \le \frac 1 2$ (we lose outright whenever $X_1 > k$), so we should stop right away.
In fact, there must be some $\alpha \ge \frac 1 2$ such that the best strategy stops right away if $0 \le k \le \alpha$, and shows $X_1$ if $\alpha < k \le 1$.
If $0 \le k \le \alpha$, the probability of winning is $p = 1 - k$.
If $\alpha < k \le 1$, we show $X_1 = x$. Then: if $x > k$, we lose immediately; if $x \le k$, what remains is the same game with threshold $k - x$, which the best strategy wins with probability $P(k - x)$.
Thus the probability of winning is: $$p = \int_0^k P(k - x) \, dx + \int_k^1 0 \, dx = \int_0^k P(x) \, dx$$
Therefore we can write: $$P(k) = \begin{cases} 1 - k & \text{if } 0 \le k \le \alpha \\ \int_0^k P(x) \, dx & \text{if } \alpha < k \le 1 \end{cases}$$
Since $P$ must be continuous at $\alpha$, we must have $1 - \alpha = \int_0^\alpha P(x) \, dx$. The integral is: $$\int_0^\alpha P(x) \, dx = \int_0^\alpha (1 - x) \, dx = -\frac {\alpha^2} 2 + \alpha$$ Therefore $1 - \alpha = -\frac {\alpha^2} 2 + \alpha$, i.e. $\alpha^2 - 4\alpha + 2 = 0$, whose only root in $\bigl[\frac 1 2, 1\bigr]$ is $\alpha = 2 - \sqrt 2$.
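As a numerical cross-check (my own): writing the two cases of the recursion as a single maximum, $P(k) = \max\bigl(1 - k, \int_0^k P(x) \, dx\bigr)$, one can solve it on a grid without assuming where the switch happens, and see where showing $X_1$ starts to beat stopping:

```python
import math

# Solve P(k) = max(1 - k, \int_0^k P(x) dx) on a uniform grid over [0, 1]
# and record the first k where the continuation value beats stopping.
n = 200_000
h = 1.0 / n
prev = 1.0          # P(0) = 1
integral = 0.0      # running value of \int_0^k P(x) dx
alpha_num = None
for i in range(1, n + 1):
    k = i * h
    cont = integral + h * prev          # continuation value (slightly lagged)
    stop = 1.0 - k                      # value of stopping right away
    cur = max(stop, cont)
    if alpha_num is None and cont > stop:
        alpha_num = k                   # first k where showing X_1 wins
    integral += h * (prev + cur) / 2    # trapezoid update of the integral
    prev = cur

print(alpha_num, 2 - math.sqrt(2))
```

The numeric switch point agrees with $\alpha = 2 - \sqrt 2$ to within the grid resolution.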
Now, if $\alpha < k < 1$, differentiating $P(k) = \int_0^k P(x) \, dx$ gives $P'(k) = P(k)$, so $P(k) = c e^k$ for some $c \in \mathbb R$. Again, since $P$ is continuous at $\alpha$, we have $1 - \alpha = c e^\alpha$, which implies $c = (1 - \alpha) e^{-\alpha}$.
Finally, we can write: $$P(k) = \begin{cases} 1 - k & \text{if } 0 \le k \le \alpha \\ (1 - \alpha) e^{k - \alpha} & \text{if } \alpha < k \le 1 \end{cases}$$ As expected, $P$ is decreasing in $[0, \alpha]$ and increasing in $[\alpha, 1]$.
Thus the probability of winning is minimized for $k = \alpha = 2 - \sqrt 2 \approx 0.586$.
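Finally, a Monte Carlo sanity check of the closed form (my own, simulating the threshold strategy implicit in the derivation: keep showing numbers while the remaining budget exceeds $\alpha$), here at $k = 1$, where the formula predicts $P(1) = (1 - \alpha) e^{1 - \alpha}$:

```python
import math
import random

alpha = 2 - math.sqrt(2)
rng = random.Random(42)

def play(k):
    """One game at threshold k with the candidate optimal strategy:
    keep showing numbers while the remaining budget exceeds alpha."""
    remaining = k
    while remaining > alpha:
        x = rng.random()
        if x > remaining:
            return False             # the shown sum exceeded k: loss
        remaining -= x
    return rng.random() > remaining  # stopped: win iff next draw would exceed k

N = 300_000
est = sum(play(1.0) for _ in range(N)) / N
print(est, (1 - alpha) * math.exp(1 - alpha))
```

The empirical winning frequency matches $(\sqrt 2 - 1) e^{\sqrt 2 - 1} \approx 0.627$ to within Monte Carlo error.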