Suppose there is a two-player game involving probability. There are finitely many game states; associated with each state is a set of moves available to the player whose turn it is, and each move sends the game to one of finitely many successor states according to a specified transition probability distribution. These distributions depend only on the state and the move, not on which player makes the move, so the game behaves much like a Markov chain. Define a strategy for a player to be a vector specifying, for each game state, which move the player will make from that state. There are two absorbing states $s_1, s_2$ such that, no matter how the moves are made, one of them is reached with probability $1$; reaching $s_1$ is a win for player $1$ and reaching $s_2$ is a win for player $2.$ Moreover, the game graph is connected in the sense that from every non-absorbing state there is a sequence of moves reaching any other state with positive probability.
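For concreteness, the setup above can be sketched in Python. The particular states, moves, and probabilities below are a made-up example, not part of the question:

```python
# Hypothetical example game: states "x" and "y" are non-absorbing,
# "s1" and "s2" are absorbing.  moves[state][move] is the transition
# distribution over successor states; it depends only on the state
# and the move, not on which player is moving.
moves = {
    "x": {
        "a": {"s1": 0.5, "y": 0.5},
        "b": {"y": 0.9, "s2": 0.1},
    },
    "y": {
        "c": {"s2": 0.5, "x": 0.5},
        "d": {"x": 0.8, "s1": 0.1, "s2": 0.1},
    },
}
absorbing = {"s1", "s2"}

# Sanity check: each transition distribution sums to 1.
for opts in moves.values():
    for dist in opts.values():
        assert abs(sum(dist.values()) - 1.0) < 1e-12

# A strategy is a vector (here, a dict) picking one move per
# non-absorbing state, e.g.:
strategy = {"x": "a", "y": "d"}
```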
The game proceeds as follows. First, player $1$ makes $K$ moves starting from some position $x.$ After that, player $2$ plays out the rest of the game (player $1$ makes no further moves). Player $1$ therefore plays a minimax strategy: he chooses his $K$ moves to minimize the maximum probability that player $2$ wins afterwards. Call this game $G_K.$
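As a sanity check on the definitions, here is one way the value of $G_K$ could be computed numerically. This is only a sketch on a small made-up example game; the helper names (`player2_values`, `gk_value`, etc.) and the probabilities are mine, not from the question. The idea: first solve player $2$'s continuation game by value iteration, then roll player $1$'s $K$ moves back by backward induction.

```python
# Hypothetical example game: "x", "y" non-absorbing; reaching "s1"
# wins for player 1, reaching "s2" wins for player 2.
moves = {
    "x": {"a": {"s1": 0.5, "y": 0.5}, "b": {"y": 0.9, "s2": 0.1}},
    "y": {"c": {"s2": 0.5, "x": 0.5}, "d": {"x": 0.8, "s1": 0.1, "s2": 0.1}},
}
absorbing = {"s1": 1.0, "s2": 0.0}  # value = P(player 1 wins)

def expected(dist, v):
    """Expected value of v under the transition distribution."""
    return sum(p * v[t] for t, p in dist.items())

def player2_values(moves, absorbing, iters=1000):
    """P(player 1 wins) from each state when player 2 plays the rest
    of the game, picking at each state the move that minimizes the
    chance of reaching s1 (value iteration)."""
    v = {s: 0.0 for s in moves} | dict(absorbing)
    for _ in range(iters):
        v = {s: min(expected(d, v) for d in opts.values())
             for s, opts in moves.items()} | dict(absorbing)
    return v

def gk_value(x, K, moves, absorbing):
    """Player 1's minimax win probability in G_K from position x."""
    u = player2_values(moves, absorbing)  # value after the K moves
    for _ in range(K):  # backward induction over player 1's K moves
        u = {s: max(expected(d, u) for d in opts.values())
             for s, opts in moves.items()} | dict(absorbing)
    return u[x]
```

In this toy game player $2$ can avoid $s_1$ entirely (play `b` from `x` and `c` from `y`), so `gk_value("x", 0, ...)` is $0$, while a single move by player $1$ (play `a` and hope for the $s_1$ branch) raises it to $0.5$.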
Now consider the game with player $2$ removed, i.e. player $1$ starts at $x$ and plays the entire game himself, choosing a strategy that maximizes his probability of winning. Call this game $G_{\infty}.$
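In $G_{\infty}$, player $1$ faces a plain Markov decision process, and an optimal strategy can be read off from value iteration on the probability of reaching $s_1$. Again a sketch on the same made-up example game as above; the helper names are mine:

```python
# Hypothetical example game, as before.
moves = {
    "x": {"a": {"s1": 0.5, "y": 0.5}, "b": {"y": 0.9, "s2": 0.1}},
    "y": {"c": {"s2": 0.5, "x": 0.5}, "d": {"x": 0.8, "s1": 0.1, "s2": 0.1}},
}
absorbing = {"s1": 1.0, "s2": 0.0}  # value = P(player 1 wins)

def expected(dist, v):
    return sum(p * v[t] for t, p in dist.items())

def ginf_values(moves, absorbing, iters=1000):
    """P(player 1 wins) from each state when he makes every move,
    by value iteration on the Bellman optimality equations for
    reachability of s1."""
    v = {s: 0.0 for s in moves} | dict(absorbing)
    for _ in range(iters):
        v = {s: max(expected(d, v) for d in opts.values())
             for s, opts in moves.items()} | dict(absorbing)
    return v

def greedy_strategy(v, moves):
    """One optimal strategy: at each state, a move achieving the max."""
    return {s: max(opts, key=lambda m: expected(opts[m], v))
            for s, opts in moves.items()}

v = ginf_values(moves, absorbing)   # converges to v["x"]=11/12, v["y"]=5/6
S = greedy_strategy(v, moves)       # {"x": "a", "y": "d"}
```

On this example the values solve $v(x) = 0.5 + 0.5\,v(y)$ and $v(y) = 0.8\,v(x) + 0.1$, giving $v(x) = 11/12$ and $v(y) = 5/6$.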
Suppose we have a strategy $S$ that is optimal for player $1$ in $G_K$ for every integer $K,$ i.e. for each $K$ it minimizes the maximum probability of player $2$ winning after player $1$'s $K$ moves. Is $S$ necessarily also an optimal strategy for player $1$ in $G_{\infty}$?