This is a simple two-player game. One of the two players is picked to 'bat'.
Both players simultaneously choose a number from 1 to 6. (When playing against a person, you use your hands to show the number.) If both players show the same number, the person 'batting' is out. Otherwise, the person batting gets points (runs) equal to the number he picked.
This process repeats until the person batting is out. His final score is the sum of all the points he scored.
The person batting and the other player switch roles. The same process occurs. Whoever scored more points wins.
Is there a perfect strategy for this game?
When playing against a computer, you stand exactly a $1/2$ chance of winning. You may sometimes be able to predict humans and play better. Assume you don't know whom you are playing. Is the perfect strategy just to always pick random numbers, or to predict the opponent's future moves based on his current ones?
First a word on the concept of a "perfect strategy". This is a math site, so if you ask about games here, people (including me, below) will give you mathy answers based on mathematical game theory. Don't listen to them. If you know your friend's lucky number, you may well be better off trying to make use of that than assuming that your friend is the hypothetical hyperrational creature that game theory presupposes.
That having been said, the game-theoretical concept that approximates that of a "perfect strategy", namely a Nash-equilibrium pair of strategies, does have practical value. It doesn't try to exploit your opponent's psychological weaknesses, but it also doesn't allow your opponent to exploit yours. It's defined as a pair of strategies such that neither player can gain by unilaterally switching strategies.
It turns out that your game is a very nice example to apply the theory of Nash equilibria to, and we can go a longer way towards "solving" it than one might think at first sight. Some of the results are surprising – for instance, it turns out that the players play quite asymmetric roles and the strategy of one of them is much closer to a uniform distribution over the numbers than that of the other.
For definiteness, I'll assume that the player who bats first wins in case of a tie in the scores.
We'd like to treat each of the "throws" of two numbers being chosen as a separate game, but we don't initially know the payoffs for these, since they depend on later throws. In such a situation, backward induction is useful, that is, reasoning backwards from the end of the game where the payoffs are known.
So let's consider the second "innings", in which the second player bats. When she misses (i.e. the numbers are equal), she loses, with payoff $0$. If she reaches more runs than her opponent, she wins, with payoff $1$. If she scores runs but doesn't win yet, the payoff is the expected value for the new score difference, some number between $0$ and $1$. Let's call the expected payoff when she scores $i$ runs $v_i$. Then the payoff matrix is
$$ \pmatrix{ 0&v_1&v_1&v_1&v_1&v_1\\ v_2&0&v_2&v_2&v_2&v_2\\ v_3&v_3&0&v_3&v_3&v_3\\ v_4&v_4&v_4&0&v_4&v_4\\ v_5&v_5&v_5&v_5&0&v_5\\ v_6&v_6&v_6&v_6&v_6&0\\ }\;. $$
Let $p_i$ be the probability with which the batter chooses $i$, and $q_j$ the probability with which the bowler chooses $j$. Then the expected payoff is
$$ V=\sum_i\sum_{j\neq i}p_iq_jv_i=\sum_ip_iv_i\sum_{j\neq i}q_j=\sum_ip_iv_i(1-q_i)\;. $$
At equilibrium, all numbers have non-zero probabilities: If $q_k$ were zero, the batter could steal runs for free by choosing $k$, and if $p_k$ were zero, the bowler would benefit from switching to a strategy with $q_k$ zero. Thus, the derivative of the expected payoff with respect to all $p_i$ and $q_j$ must vanish, where we need to include Lagrange multiplier terms $\lambda\sum_i p_i$ and $\mu\sum_j q_j$ in the payoff to account for the normalization conditions $\sum_ip_i=1$ and $\sum_jq_j=1$, respectively. Then
$$v_i(1-q_i)=\lambda$$
and
$$v_ip_i=\mu\;.$$
Solving for $p_i$ and $q_i$ and using the normalization conditions yields
$$\frac\lambda5=\mu=\left(\sum_jv_j^{-1}\right)^{-1}\;,$$
$$\frac{1-q_i}5=p_i=v_i^{-1}\left(\sum_jv_j^{-1}\right)^{-1}\;.$$
Substituting into the expected payoff gives
$$ V=\sum_ip_iv_i(1-q_i)=5\left(\sum_jv_j^{-1}\right)^{-1}\;. $$
If you're like me, all those reciprocals are beginning to get on your nerves, so let's get rid of them by considering the reciprocals $R=1/V$ and $r_i=1/v_i$ of the payoffs. That makes things look a whole lot nicer:
$$ \frac{1-q_i}5=p_i=\frac{r_i}{\sum_jr_j}\;, $$
$$ R=\frac{\sum_jr_j}5\;. $$
So the solution is actually surprisingly simple: the batter should choose probabilities proportional to the reciprocal payoffs, the bowler should choose "avoidances" proportional to the reciprocal payoffs, and the reciprocal expected payoff is just a slightly scaled average of the reciprocal payoffs (scaled because the average would have a $6$ where the result has a $5$).
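This rule is easy to make concrete. Here is a minimal Python sketch (an illustration, not the program I actually used; the function name and the use of exact rationals are my choices) that computes the equilibrium of a single throw from given continuation payoffs $v_1,\ldots,v_6$:

```python
from fractions import Fraction

def throw_equilibrium(v):
    """Equilibrium of a single throw given the six payoffs v_1..v_6
    (all assumed non-zero): the batter plays p_i proportional to the
    reciprocal payoffs r_i = 1/v_i, the bowler plays avoidances
    1 - q_i = 5 p_i, and the value is V = 5 / sum(r_i)."""
    r = [1 / Fraction(vi) for vi in v]   # reciprocal payoffs r_i
    s = sum(r)
    p = [ri / s for ri in r]             # batter's probabilities
    q = [1 - 5 * ri / s for ri in r]     # bowler's probabilities
    return p, q, 5 / s

# Sanity check at score difference 1, where every hit wins (v_i = 1):
p, q, V = throw_equilibrium([1] * 6)
# here p_i = q_i = 1/6 and V = 5/6, the uniform case.
```

When all the $v_i$ coincide, both players mix uniformly, as one would expect by symmetry.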
Since this is all so nicely linear now, we can actually solve the second innings analytically. Assume we know the reciprocal expected payoffs for score differences of $k-1,\ldots,k-6$; denote them by $R_{k-1},\ldots,R_{k-6}$. Then $R_{k-i}$ is the reciprocal payoff $r_i$ in the game at score difference $k$. We can calculate $R_k$ and form a new vector of six values $R_k,\ldots,R_{k-5}$, and since everything turned out to be linear (in the reciprocals), we can write this as a matrix multiplication:
$$ \pmatrix{R_k\\R_{k-1}\\R_{k-2}\\R_{k-3}\\R_{k-4}\\R_{k-5}}= \pmatrix{ 1/5&1/5&1/5&1/5&1/5&1/5\\ 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ } \pmatrix{R_{k-1}\\R_{k-2}\\R_{k-3}\\R_{k-4}\\R_{k-5}\\R_{k-6}}\;, $$
which is a recurrence relation of the form $\vec x_k=A\vec x_{k-1}$, with solution $\vec x_k=A^k\vec x_0$ (where the initial value $\vec x_0$ is a vector of all $1$s representing the wins from hitting any number at score difference $0$). We could use the spectral decomposition of $A$ to write $\vec x_k$, and hence $R_k$, and hence $V$ and the $p_i$ and $q_i$ at every step, in closed form.
Unfortunately the eigenvalues of $A$ turn out not to have analytical forms. Sage finds that the largest (and the only one with magnitude $\ge1$) is $\lambda_{\mathrm{max}}\approx1.054717925023537$, and its minimal polynomial is the characteristic polynomial of $A$, $x^6-\frac15\sum_{j=0}^5x^j$. For large $k$ this eigenvalue dominates, and the $R_k$ approximately form a geometric sequence, $R_k\approx\lambda_{\mathrm{max}}^k$. Thus, in the long run, the expected payoff drops by a factor of $\lambda_{\mathrm{max}}$ for every point of score difference.
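The recurrence and the dominant eigenvalue are easy to check numerically. Here is a short Python sketch (again just illustrative) that iterates $R_k=(R_{k-1}+\dots+R_{k-6})/5$ with $R_k=1$ for $k\le0$ in exact rationals and reads off the growth rate from a ratio of successive terms:

```python
from fractions import Fraction

def reciprocal_payoffs(n):
    """R_0..R_n from R_k = (R_{k-1} + ... + R_{k-6}) / 5, with
    R_k = 1 for k <= 0 (any hit wins at non-positive score difference)."""
    R = [Fraction(1)]
    for k in range(1, n + 1):
        R.append(sum(R[k - i] if k >= i else Fraction(1)
                     for i in range(1, 7)) / 5)
    return R

R = reciprocal_payoffs(40)
# R_1 = 6/5, and R_k / R_{k-1} approaches lambda_max as k grows,
# since the subdominant eigenvalues all have magnitude well below 1.
ratio = R[40] / R[39]
```

At $k=40$ the ratio already agrees with $\lambda_{\mathrm{max}}$ to several decimal places.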
Since the eigensystem of the matrix isn't nice, I didn't bother to decompose the initial vector of $1$s into its eigencomponents to explicate the analytical solution; instead I wrote a Java program to calculate the (rational) probabilities and payoffs.
These are the $p_i$ at score differences up to $25$ ($i=1$ brown at the top; $i=6$ red at the bottom):
These are the $q_i$ at score differences up to $25$ ($i=1$ again brown, now at the bottom; $i=6$ again red, now at the top):
And this is a logarithmic plot of the expected payoffs at score differences up to $25$, with a line of slope $-\log\lambda_{\mathrm{max}}$ fitted to the last $10$ of them:
Note that the limiting probabilities (with $1-q_i^\infty=5p_i^\infty\propto \lambda_{\mathrm{max}}^{-i}$) are quite similar to the ones that Micah got by optimizing the expected number of runs; they're approximately given by
$$ \begin{array}{c|c|c} i&p_i^\infty&q_i^\infty\\ \hline 1&0.18962&0.05188\\ 2&0.17979&0.10107\\ 3&0.17046&0.14770\\ 4&0.16162&0.19192\\ 5&0.15323&0.23384\\ 6&0.14528&0.27359 \end{array} $$
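These limiting values are easy to reproduce: find $\lambda_{\mathrm{max}}$ as the root of $5x^6=\sum_{j=0}^5x^j$ in $(1,1.1)$ and use $p_i^\infty=\lambda_{\mathrm{max}}^{-i}/5$, $q_i^\infty=1-\lambda_{\mathrm{max}}^{-i}$. A Python sketch (bisection is just one convenient root-finder here):

```python
def char_poly(x):
    # 5 * (characteristic polynomial of A): 5 x^6 - (x^5 + ... + x + 1)
    return 5 * x**6 - sum(x**j for j in range(6))

# char_poly(1) = -1 < 0 and char_poly(1.1) > 0, so bisect on [1, 1.1]
lo, hi = 1.0, 1.1
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if char_poly(mid) > 0 else (mid, hi)
lam = (lo + hi) / 2                     # lambda_max

# Dividing the characteristic equation by x^6 shows sum_i lam**-i == 5,
# so p_i = lam**-i / 5 is automatically normalized.
rho = [lam ** -i for i in range(1, 7)]
p_inf = [r / 5 for r in rho]
q_inf = [1 - r for r in rho]
```

The identity $\sum_i\lambda_{\mathrm{max}}^{-i}=5$, obtained by dividing the characteristic equation by $x^6$, is what makes the normalization come out exactly.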
Unfortunately, the first innings is considerably harder to treat, for two reasons: There's no starting point to begin at and work backwards from, and we no longer have $0$s on the diagonal of the payoff matrix, since the expected payoff in case of a miss is minus the expected payoff of the second innings, with score difference the score achieved so far. This second problem we can solve by subtracting this "losing payoff" from all payoffs to get back to $0$s on the diagonal. This constant additive shift doesn't change the strategies, but it does destroy the linearity of the reciprocal payoffs.
To at least get an accurate computational solution, let's determine the asymptotic behaviour for large games. Then we can start with a good approximation at some unrealistically high score and work our way backwards. A moment's reflection shows that the expected payoffs must have the same long-term decay rate as the ones in the second innings and hence must asymptotically be given by $W_k=-cV_k$, where $k$ is the score and $c\gt0$ is yet to be determined. The payoff for missing at $k$ is $-V_k$ (since this starts the second innings with a score difference of $k$), and the payoff for scoring $i$ runs is $W_{k+i}$. Thus, subtracting $-V_k$ from all payoffs and adding it back in the expected payoff, we obtain
$$ \frac1{W_k+V_k}=\frac15\sum_i\frac1{W_{k+i}+V_k}\;, $$
and thus
$$ \frac1{1-c}=\frac15\sum_i\frac1{1-c\lambda_{\mathrm{max}}^{-i}}\;. $$
Sage tells me that this sixth-order equation for $c$ has $6$ real solutions, $5$ of which are $\gt1$, whereas we want the other one, $c_{\mathrm{min}}\approx0.554509471152537$. Its minimal polynomial is of degree $36$ ($6$ from this equation times $6$ from the one for $\lambda_{\mathrm{max}}$):
$$ \left(4449462890625000000x^{36} - 24172330078125000000x^{35} + 49046693990625000000x^{34} - 41769983095650000000x^{33} + 6262303815585000000x^{32} + 10466684507347200000x^{31} - 4138402685549220000x^{30} - 582487240632840000x^{29} + 1047434350651164000x^{28} - 82564347696768000x^{27} - 916144313873313600x^{26} + 170063128160164800x^{25} + 266506356802489200x^{24} - 6276611448694800x^{23} - 40016162958292800x^{22} - 5235939057854880x^{21} + 2435166236985120x^{20} + 1284349230559440x^{19} - 304732164386520x^{18} - 6301109644560x^{17} - 554379054984x^{16} + 2195605890144x^{15} - 480507324552x^{14} - 143513421840x^{13} + 84252155508x^{12} - 15843100428x^{11} + 478048608x^{10} + 592215264x^9 - 152515620x^8 + 13933764x^7 + 1008678x^6 - 544548x^5 + 93798x^4 - 10020x^3 + 738x^2 - 36x + 1\right)/\,4449462890625000000\;. $$
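Numerically, $c$ can be obtained without the big polynomial by solving the equation above directly, again by bisection, after computing $\lambda_{\mathrm{max}}$ the same way; this also yields the limiting first-innings strategies directly. A Python sketch (helper and variable names are mine):

```python
def char_poly(x):
    # 5 x^6 - (x^5 + ... + x + 1); its root in (1, 1.1) is lambda_max
    return 5 * x**6 - sum(x**j for j in range(6))

def bisect(f, lo, hi, rising):
    # plain bisection; `rising` says whether f crosses from - to +
    for _ in range(100):
        mid = (lo + hi) / 2
        if (f(mid) > 0) == rising:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

lam = bisect(char_poly, 1.0, 1.1, rising=True)

def tail_eq(c):
    # (1/5) sum 1/(1 - c lam**-i) - 1/(1 - c):
    # positive at c = 0, tends to -infinity as c -> 1
    return sum(1 / (1 - c * lam ** -i) for i in range(1, 7)) / 5 - 1 / (1 - c)

c = bisect(tail_eq, 0.0, 0.999, rising=False)   # the root below 1

# Limiting first-innings strategies: 1 - q_i = 5 p_i proportional to
# sigma_i = 1 / (1 - c * lam**-i).
sigma = [1 / (1 - c * lam ** -i) for i in range(1, 7)]
s = sum(sigma)
p_inf = [x / s for x in sigma]
q_inf = [1 - 5 * x / s for x in sigma]
```

Since all the other real solutions are $\gt1$, bisecting on $(0,1)$ picks out $c_{\mathrm{min}}$ unambiguously.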
Using this to seed the backward induction for $W_k$ at $k=94,\ldots,99$, I calculated the expected payoffs and equilibrium strategies for the first innings (also with the code linked to above, now using floating-point arithmetic and not exact rational numbers). Here are the results in the same three diagrams as for the second innings ($p_i$, $q_i$, $-W_k$ plotted logarithmically):
The deviations from the limiting values are much less pronounced; the deviations in the second innings are basically smoothed out by being "convolved" with the distribution of runs scored in the first innings.
The limiting values are quite similar but not identical to the ones in the second innings; they're now given by $1-q_i^\infty=5p_i^\infty\propto\left(1-c\lambda_{\mathrm{max}}^{-i}\right)^{-1}$:
$$ \begin{array}{c|c|c} i&p_i^\infty&q_i^\infty\\ \hline 1&0.18787&0.06066\\ 2&0.17765&0.11174\\ 3&0.16894&0.15530\\ 4&0.16144&0.19282\\ 5&0.15491&0.22545\\ 6&0.14919&0.25403 \end{array} $$
The overall value of the game for the second batter comes out as $-W_0\approx0.47528$, so there's a slight advantage for the first batter – which isn't terribly surprising, since I resolved ties in her favour. If the ties are resolved in the second batter's favour instead, the value of the game for the second batter is approximately $0.52004$, a similar but slightly smaller advantage for the second batter. If a tie counts as half a win for each player (payoff $0.5$), the value of the game for the second batter is approximately $0.49819$, which is much closer to $0.5$ than I would have expected, given the asymmetrical setup of the game. It turns out that the asymmetry very slightly favours the first batter.
P.S.: I computed some statistics: The expected number of throws ($n_{\mathrm t}$) and of runs ($n_{\mathrm r}$) in the entire game, in each innings and after a point in the asymptotic tail of each innings.
With $\rho_i=\lambda_{\mathrm{max}}^{-i}$ and $\sigma_i=\left(1-c\lambda_{\mathrm{max}}^{-i}\right)^{-1}$, the asymptotic values are calculated as
$$ \begin{eqnarray} \left<n_t\right>=1+\left<n_t\right>\sum_ip_i\left(1-q_i\right)\quad&\longrightarrow&\quad\left<n_t\right>=\left(1-5\frac{\sum_i\rho_i^2}{\left(\sum_i\rho_i\right)^2}\right)^{-1}\simeq6.25852\;,\\ \left<n_r\right>=\sum_ip_i\left(1-q_i\right)\left(i+\left<n_r\right>\right)\quad&\longrightarrow&\quad\left<n_r\right>=\frac{5\sum_i\rho_i^2i}{\left(\sum_i\rho_i\right)^2-5\sum_i\rho_i^2}\simeq16.78199 \end{eqnarray} $$
for the second innings and
$$ \begin{eqnarray} \left<n_t\right>=1+\left<n_t\right>\sum_ip_i\left(1-q_i\right)\quad&\longrightarrow&\quad\left<n_t\right>=\left(1-5\frac{\sum_i\sigma_i^2}{\left(\sum_i\sigma_i\right)^2}\right)^{-1}\simeq6.19442\;,\\ \left<n_r\right>=\sum_ip_i\left(1-q_i\right)\left(i+\left<n_r\right>\right)\quad&\longrightarrow&\quad\left<n_r\right>=\frac{5\sum_i\sigma_i^2i}{\left(\sum_i\sigma_i\right)^2-5\sum_i\sigma_i^2}\simeq16.78137 \end{eqnarray} $$
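Both asymptotic formulas are straightforward to evaluate. Here is a Python sketch for the second-innings tail; the helper takes any weights $w_i$ with $1-q_i=5p_i\propto w_i$, so feeding it the $\sigma_i$ instead gives the first-innings values:

```python
def char_poly(x):
    # 5 x^6 - (x^5 + ... + x + 1); its root in (1, 1.1) is lambda_max
    return 5 * x**6 - sum(x**j for j in range(6))

lo, hi = 1.0, 1.1
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if char_poly(mid) > 0 else (mid, hi)
lam = (lo + hi) / 2

def tail_stats(w):
    """<n_t> and <n_r> in the asymptotic tail for weights w_1..w_6
    with p_i = w_i / sum(w) and 1 - q_i = 5 p_i."""
    s = sum(w)
    s2 = sum(wi * wi for wi in w)
    si = sum((i + 1) * wi * wi for i, wi in enumerate(w))  # sum_i i w_i^2
    n_t = 1 / (1 - 5 * s2 / s**2)
    n_r = 5 * si / (s**2 - 5 * s2)
    return n_t, n_r

# second innings: weights rho_i = lam**-i
n_t, n_r = tail_stats([lam ** -i for i in range(1, 7)])
```

The survival probability per throw is $\sum_ip_i(1-q_i)=5\sum_iw_i^2/(\sum_iw_i)^2$, and both statistics follow from the self-consistency equations displayed above.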
for the first innings. The remaining statistics I collected from the code (using the asymptotic values as seeds in the first innings):
$$ \begin{array}{c|ccc} &\text{game}&\text{1$^{\text{st}}$ innings}&\text{2$^{\text{nd}}$ innings}\\ \hline n_t&9.47542&6.19873&3.27669\\ n_r&25.64678&16.78067&8.86611 \end{array} $$
(The smaller numbers for the second innings aren't because of a worse performance of the second batter but because the innings ends when the (possibly low) score of the first batter is reached.)
It's striking how relatively independent the numbers are of the specific circumstances. For instance, the expected number of runs in the first innings, in the asymptotic tail of the first innings and in the asymptotic tail of the second innings are all $16.78$ up to two digits, and this is also, up to two digits, the value Micah calculated by optimizing the expected number of runs.