A common interview question (which appears at least once on this board) is the following:
You are playing a game in which a single fair 6-sided die is rolled repeatedly, up to a maximum of N rolls. You can stop the process any time you wish, at which point your score is the last value rolled. What is your expected score as a function of N, assuming optimal play?
The answer is found recursively: for N=1 the answer is 3.5. For larger N, you should stop after the first roll exactly when its value exceeds the expected score with N-1 rolls remaining, so the answer for N-1 determines the stopping rule, and hence the answer, for N.
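As a quick sanity check, the recursion can be run exactly with rationals (a minimal sketch; the function name is mine):

```python
from fractions import Fraction

def expected_score(n_rolls, sides=6):
    """Expected score under optimal stopping with n_rolls rolls allowed."""
    e = Fraction(sides + 1, 2)  # one roll left: keep whatever you get
    for _ in range(n_rolls - 1):
        # With more rolls available, keep a roll v only if it beats continuing.
        e = sum(max(Fraction(v), e) for v in range(1, sides + 1)) / sides
    return e

print(expected_score(1), expected_score(2), expected_score(3))  # 7/2 17/4 14/3
```

For N=2 this gives (3.5+3.5+3.5+4+5+6)/6 = 4.25, matching the rule "stop on the first roll only if it beats 3.5".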
Now consider just that first roll. Obviously if the first roll is a 6, you stop. Suppose the first roll is 5. Do you stop? The answer depends on N. Clearly for N=2 you stop on a first roll of 5 (it beats the 3.5 expected from the one remaining roll), and just as clearly you do not stop on an initial 5 if N is very large, because you are so likely to roll a 6 eventually. How large must N be before you stop on the first roll only if that roll is 6? The answer is 6. That is, if the first roll is 5 you should stop when N<=5 and continue when N>=6.
Now generalize to a k-sided die: Let g(k) be the smallest N such that the optimal strategy for a k-sided die is to stop after the first roll only when that roll is k. We've seen that g(6)=6. Is this a coincidence? In fact it is: g(k)=k up to k=9, but g(10) = 11.
So consider the ratio g(k)/k. As k increases, this ratio tends to increase (though not monotonically). Numeric calculation suggests that it approaches a limit, and that the limit might be e/2. Can we prove this limit, or some other limit, or that no limit exists?
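Floating point suffices for a numeric look at the ratio (a self-contained restatement of the recursion; the trend it prints is suggestive only):

```python
def ratio(k):
    """g(k)/k computed from the optimal-stopping recursion, in floats."""
    e, n = (k + 1) / 2, 1  # expected score with one roll remaining
    while e < k - 1:       # while a first roll of k-1 would still be kept
        e = sum(max(v, e) for v in range(1, k + 1)) / k
        n += 1
    return (n + 1) / k

for k in (6, 10, 20, 40, 80):
    print(k, ratio(k))
```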
Yes, there is a limit. I don't have a final, double-checked answer, but my calculations suggest that it is nothing nearly so simple.
So I'll just outline the argument and let you fill it in.
Let $g(k, m)$ be the number of remaining rolls at which you'll start taking only the top $m$ values on a $k$-sided die when you see them. This generalizes your $g$ to two parameters; they are connected by $g(k) = g(k, 1)$.
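In computational terms, $g(k, m)$ is one more than the first number of remaining rolls at which the expected value of continuing reaches $k - m$ (my indexing convention; restated so the snippet runs standalone):

```python
from fractions import Fraction

def g2(k, m):
    """Smallest number of remaining rolls at which only the top m values
    of a k-sided die are worth keeping."""
    e, n = Fraction(k + 1, 2), 1
    while e < k - m:  # while the (m+1)-th best value, k-m, would still be kept
        e = sum(max(Fraction(v), e) for v in range(1, k + 1)) / k
        n += 1
    return n + 1

print(g2(6, 1), g2(6, 2))  # g2(k, 1) recovers g(k)
```

Note that $g(k, 1) \ge g(k, 2) \ge \ldots$ automatically, since the acceptance threshold $k - m$ falls as $m$ grows.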
The first easy result is that $g(k, m)$ is bounded above by $O(k \log(k) / (m - 1))$. To see this, note that the optimal strategy does at least as well as the suboptimal strategy of accepting the first roll that lands in the top $m-1$ values, scoring 0 if none does. In any block of $k/(m - 1)$ rolls, that suboptimal strategy has probability $> 0.5$ of landing in the top $m-1$. After $\log_2(k)$ such blocks, the probability of never landing in the top $m-1$ is $< \frac{1}{k}$, and the expected result is then good enough that we would be better off refusing the $(m+1)$-th best value and playing this suboptimal strategy instead.
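A numeric spot check of this bound against the exact $g(k, m)$, using the explicit constant $k\ln(k)/(m-1)$ that the argument gives (a sketch, repeating the routine so it stands alone; the +2 slack is my allowance for the $O(1)$ rounding):

```python
import math

def g2(k, m):
    """First number of remaining rolls at which only the top m values are kept."""
    e, n = (k + 1) / 2, 1
    while e < k - m:
        e = sum(max(v, e) for v in range(1, k + 1)) / k
        n += 1
    return n + 1

# After ceil(k*ln(k)/(m-1)) rolls, the top-(m-1)-or-bust strategy has missed
# with probability at most 1/k, which already beats k - m; so g(k, m) should
# stay below k*ln(k)/(m-1) plus a small additive slack.
for k in (20, 40, 60):
    for m in (2, 3, 5):
        assert g2(k, m) <= k * math.log(k) / (m - 1) + 2
print("bound holds on the sampled (k, m)")
```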
What does this do for us? It lets us break $g(k, 1)$ into a telescoping series like so, and then bound the last term: $$ g(k, 1) = (g(k, 1) - g(k, 2)) + (g(k, 2) - g(k, 3)) + \ldots + (g(k, m-1) - g(k,m)) + g(k, m)$$
My hasty calculation, which I don't wish to type up, gives $g(k, i) - g(k, i+1) = k \log(1 + 2/i) / (i+1) + O(1)$. I probably made a calculation error, but the principle of the calculation is this: at $g(k, i+1)$ remaining rolls, the expected value of continuing has just reached or passed $k - i - 1$, which is why we now stop only on the $i+1$ values $k, k-1, \ldots, k-i$. How many further rolls under this strategy does it take for that expected value to pass $k-i$?
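Whatever the exact constant, the shape of this claim can be probed numerically: the scaled differences $(g(k, i) - g(k, i+1))/k$ should settle near $\log(1 + 2/i)/(i+1)$ as $k$ grows. A hedged sketch (suggestive, not a proof):

```python
import math

def g2(k, m):
    """First number of remaining rolls at which only the top m values are kept."""
    e, n = (k + 1) / 2, 1
    while e < k - m:
        e = sum(max(v, e) for v in range(1, k + 1)) / k
        n += 1
    return n + 1

k = 200
for i in (1, 2, 3, 4):
    scaled = (g2(k, i) - g2(k, i + 1)) / k
    predicted = math.log(1 + 2 / i) / (i + 1)
    print(i, scaled, predicted)
```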
Note that the coefficients $\log(1 + 2/i)/(i+1)$ of these terms are $O(1/i^2)$, so their sum converges to some constant $C$. (If my calculation is right, approximately $2.118347\ldots$) So take the above telescoping series, expand out $\sqrt{k \log(k)}$ terms, and bound both the tail of the series and the final term: each is $O(\sqrt{k \log(k)})$, giving $g(k) = Ck + O(\sqrt{k \log(k)})$, hence $g(k)/k = C + O(\sqrt{\log(k)/k})$, which is definitely a sequence that converges to the infinite sum $C$.