In many instances I've come across (in Game Theory, etc.), choosing an optimal strategy comes down to a criterion of maximizing expected value. To simplify this question, imagine we are playing a game $X$ (BlackJack, Slots, Rock-Paper-Scissors) that can be repeated as many times as desired. I can see that maximizing expected value is a good idea in situations where we can apply the Law of Large Numbers (e.g. repeating the game many times), since we then know that the total value of the wins over $n$ iterations of the game will be somewhere close to $nE(X)$.
But I have a problem accepting this as a reasonable criterion when the Law of Large Numbers can't be applied. Let's say we play game $X$ once, or some number of times $n$ small enough that the Law of Large Numbers doesn't reasonably apply. In this situation our actual winnings could vary quite a bit from $nE(X)$, so I don't see how maximizing expected value would still be the best criterion here. Wouldn't other criteria, perhaps one that mixes variance and expected value, or one that looks at which strategy produces the "best" 95% confidence interval for the value of the winnings, be better?
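To make the worry concrete, here is a quick Monte Carlo sketch. The game, payout, and probability are my own made-up numbers, not anything from the question: a game that pays 100 coins with probability 1/50 has $E(X) = 2$, yet for small $n$ the total winnings are usually nowhere near $nE(X)$.

```python
import random

random.seed(0)  # reproducibility

# Hypothetical game (my own numbers, not from the question):
# win 100 coins with probability 1/50, else nothing, so E(X) = 2.
def play():
    return 100 if random.random() < 1 / 50 else 0

fracs = []
for n in (5, 100, 10000):
    totals = [sum(play() for _ in range(n)) for _ in range(500)]
    # How often do total winnings miss nE(X) by more than 50%?
    frac = sum(abs(t - 2 * n) > n for t in totals) / len(totals)
    fracs.append(frac)
    print(f"n={n:5d}: off by >50% of nE(X) in {frac:.0%} of runs")
```

For $n = 5$ every possible outcome (0 coins, 100 coins, ...) misses $nE(X) = 10$ by more than 50%, while for $n = 10000$ essentially no run does; the expected-value criterion only "kicks in" for large $n$.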
NOTE: assume I am someone who is risk neutral, i.e. my utility of winnings is equal to the monetary value of the winnings.
Here's an example of a two-player game which I think bears out your point. Suppose I have a hundred boxes and a hundred coins, and that I am free to distribute the coins among the boxes as I wish. My partner and I then pick two distinct boxes at random, and I win the difference between the numbers of coins in the two boxes. What should my strategy be?
If we want to maximize expected value, we may reason as follows. Start with some distribution of coins and imagine moving one coin from a box with fewer coins to one with more. How does this change my expected return?
There are then three outcomes for the randomly chosen pair of boxes (say the coin moves from box $A$, with $a$ coins, to box $B$, with $b \ge a$ coins):

1. The pair contains neither $A$ nor $B$: the payoff is unchanged.
2. The pair is $\{A, B\}$: the payoff rises from $b - a$ to $(b+1) - (a-1) = b - a + 2$.
3. The pair contains exactly one of $A$, $B$ together with a third box $C$ holding $c$ coins. The pairs $\{A, C\}$ and $\{B, C\}$ are equally likely, and moving the coin changes their combined payoff $|a - c| + |b - c|$ by $0$ if $c < a$ or $c > b$, and by $+2$ if $a \le c \le b$.
So moving a coin never lowers, and sometimes raises, my expected return: it's always to my advantage to put more coins in fewer boxes. Proceeding this way, we conclude that the best expected return occurs when I put all 100 coins in one box. In that case, I win all 100 coins whenever the jackpot box is one of the two chosen, and win nothing otherwise.
So, from the perspective of my expected value, if I'm playing a long time I expect to make the most profit on this game by relying on a jackpot. But this is hardly a stable rate of return: the probability of winning this way is a mere $1/50$ (99 ways to pair the jackpot box with one of the empties, out of $\binom{100}{2} = 99 \cdot 50 = 4950$ outcomes total), for an expected return of $100 \cdot \frac{1}{50} = 2$ coins per round. On most runs I win nothing.
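As a sanity check (my own code, not part of the original answer), we can enumerate all $\binom{100}{2}$ equally likely pairs and compute the exact mean and standard deviation of the payoff, both for the jackpot distribution and for an illustrative spread-out alternative of my own choosing (2 coins in each of 50 boxes):

```python
from itertools import combinations
from statistics import mean, pstdev

def payoff_stats(boxes):
    """Exact mean and standard deviation of the payoff |c_i - c_j|
    over all equally likely pairs of distinct boxes."""
    payoffs = [abs(a - b) for a, b in combinations(boxes, 2)]
    return mean(payoffs), pstdev(payoffs)

jackpot = [100] + [0] * 99     # all coins in one box
spread  = [2] * 50 + [0] * 50  # hypothetical alternative: 2 coins in 50 boxes

print(payoff_stats(jackpot))  # mean 2, sd 14
print(payoff_stats(spread))   # mean ≈ 1.01, sd ≈ 1.0
```

The jackpot strategy has the larger mean (2 coins per round) but a standard deviation of 14, seven times the mean; the spread distribution gives up about half the expected return for a standard deviation roughly equal to its mean.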
So if I were only playing for a few games (or if there were, say, a 1-coin buy-in per round) then it might be very reasonable for me to pick a distribution whose payoff has less variance. (If anyone knows which distribution minimizes the variance, I'd be interested to hear about it...) Thus the 'maximize expected value' principle isn't necessarily the most 'rational' course in every scenario.
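On the variance question, here is a quick exploration of my own (the choice of even splits is my assumption, not from the post): split the 100 coins evenly over $k$ boxes for various divisors $k$ of 100 and compute the exact mean and standard deviation of the payoff. Both shrink as the coins are spread out, which is the mean/variance trade-off in miniature: the perfectly uniform distribution ($k = 100$) has zero variance but also zero expected return.

```python
from itertools import combinations
from statistics import mean, pstdev

def payoff_stats(boxes):
    # Exact stats of |c_i - c_j| over all equally likely pairs of boxes.
    payoffs = [abs(a - b) for a, b in combinations(boxes, 2)]
    return mean(payoffs), pstdev(payoffs)

# 100 coins split evenly over k boxes, remaining boxes empty.
for k in (1, 2, 4, 10, 25, 50, 100):
    m, s = payoff_stats([100 // k] * k + [0] * (100 - k))
    print(f"k={k:3d} boxes of {100 // k:3d} coins: mean={float(m):.3f}, sd={s:.3f}")
```

Among these even splits there is no free lunch: every reduction in standard deviation also costs expected return, which is exactly why a criterion trading the two off against each other is needed.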