Suppose that someone is going to bet in a game. A die is rolled, and there are only two options for betting:
Option 1. Give 1 dollar and bet on 6.
Option 2. Give 1 dollar and bet on 1, 2, 3, 4, 5.
It's obvious that if he/she is going to play this game hundreds of times, then it's rational to choose the second option, because in the long run the expected value tells approximately how much he/she will win. Put in a more mathematical way, the rationality of this choice is shown by the "law of large numbers".
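The long-run claim can be checked with a quick simulation (a sketch of my own; I assume a payout of 1 on a winning roll and 0 otherwise, and the function name is hypothetical):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def average_payout(bet_numbers, trials=100_000):
    # Roll a fair six-sided die `trials` times; payout is 1 when the
    # roll lands in the set we bet on, 0 otherwise.
    wins = sum(1 for _ in range(trials) if random.randint(1, 6) in bet_numbers)
    return wins / trials

print(average_payout({6}))              # close to 1/6
print(average_payout({1, 2, 3, 4, 5}))  # close to 5/6
```

With 100,000 trials the averages settle near the theoretical probabilities, which is the law of large numbers at work.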
However, I don't see any reason to act as above if the game is played just once! Can you justify the rationality of choosing the first option?
I know that this question may seem more philosophical than mathematical.
Thanks.
In general optimal betting is usually studied using a utility function. If $S$ is your set of possible payouts (in this case, $S = \{ 0, 1 \}$ for an unlucky bet and for a lucky bet), then a utility function is a mapping $U:S \rightarrow \mathbb R$ that measures your satisfaction with each result.
Rational behaviour is, I guess, often considered to indicate having a linear utility function, so $U(0) = 0$ and $U(1)= 1$. Now if you bet according to Option 1, then $\mathbb E U = \frac 1 6 \times 1 + \frac 5 6 \times 0 = \frac 1 6$, and if you bet according to Option 2, then $\mathbb E U = \frac 1 6 \times 0 + \frac 5 6 \times 1 = \frac 5 6$. So the expected utility of Option 2 is higher.
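With that linear utility, the two expected utilities can be computed directly (a minimal Python sketch; the variable names are my own):

```python
# Linear utility on the payout set S = {0, 1}: U(0) = 0, U(1) = 1.
U = {0: 0.0, 1: 1.0}

# Option 1 pays 1 with probability 1/6; Option 2 pays 1 with probability 5/6.
eu_option1 = (1/6) * U[1] + (5/6) * U[0]  # = 1/6
eu_option2 = (5/6) * U[1] + (1/6) * U[0]  # = 5/6

print(eu_option1, eu_option2)
```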
Other utility functions are certainly possible; in economic theory a monotonically increasing, concave utility function is often chosen, representing risk aversion. However, in this simple case, any monotonically increasing utility function has its expected utility maximized by Option 2: if $p$ is your chance of a payout equal to 1, then $\mathbb E^p U = p U(1) + (1-p) U(0) = U(0) + p (U(1) - U(0))$, which is increasing in $p$.
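The same ranking can be illustrated with a concave utility; here is a sketch where I pick $U(x) = \sqrt x$ as an example (the choice of $U$ is mine, and any increasing $U$ gives the same ordering):

```python
import math

def expected_utility(p, U):
    """E^p U = (1 - p) * U(0) + p * U(1), for a bet paying 1 with probability p."""
    return (1 - p) * U(0) + p * U(1)

# An illustrative concave, increasing utility (models risk aversion).
U = math.sqrt

eu_option1 = expected_utility(1/6, U)
eu_option2 = expected_utility(5/6, U)
assert eu_option2 > eu_option1  # Option 2 wins under any increasing U
```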
This may give you a bit of context for this kind of question.