I am taking the Stanford Coursera course on Game Theory. The teacher explained the concept of the maxmin strategy and tried to give an example with the following game.
Consider a goalkeeper and kicker game where actions for both players are Left and Right:
| kicker \ goalie | L        | R        |
|-----------------|----------|----------|
| **L**           | 0.6, 0.4 | 0.8, 0.2 |
| **R**           | 0.9, 0.1 | 0.7, 0.3 |
The game is explained in this lecture at 9:08.
By substituting $s_i\left(R\right)=1-s_i\left(L\right)$ for both players, he reduces the maxmin computation to
$$ \max_{s_1}\min_{s_2}\left( 0.2-s_1\left(L\right)\cdot 0.4 \right) \cdot s_2\left(L\right)+0.7+s_1\left(L\right)\cdot 0.1$$
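To convince myself the substitution is right, I checked it numerically (a quick sketch of my own, not from the lecture; `p` and `q` stand for $s_1(L)$ and $s_2(L)$):

```python
import random

# Kicker's payoffs u1(kicker_action, goalie_action), read off the table above
U1 = {('L', 'L'): 0.6, ('L', 'R'): 0.8,
      ('R', 'L'): 0.9, ('R', 'R'): 0.7}

def full_payoff(p, q):
    """Kicker's expected payoff; p = s1(L), q = s2(L)."""
    return (U1[('L', 'L')] * p * q
            + U1[('L', 'R')] * p * (1 - q)
            + U1[('R', 'L')] * (1 - p) * q
            + U1[('R', 'R')] * (1 - p) * (1 - q))

def reduced_payoff(p, q):
    """The reduced expression from the lecture."""
    return (0.2 - 0.4 * p) * q + 0.7 + 0.1 * p

# The two expressions agree at random mixed strategies
for _ in range(1000):
    p, q = random.random(), random.random()
    assert abs(full_payoff(p, q) - reduced_payoff(p, q)) < 1e-12
print("reduced form matches the full expected payoff")
```

So at least the algebra of the reduction is fine; my confusion is about the step that follows.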
Basically, he takes the first derivative with respect to $s_2(L)$ and sets it to $0$ to find the minimum:
$$\left( 0.2-s_1\left(L\right)\cdot 0.4 \right) =0$$
and then he solves for $s_1\left(L\right)$:
$$s_1\left(L\right) = 0.5, s_1\left(R\right)=0.5$$
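Numerically the result does check out: here is a brute-force sketch (my own, assuming the reduced payoff above) of $\max_{s_1}\min_{s_2}$ over a grid, using that the payoff is linear in $s_2(L)$, so the inner minimum sits at an endpoint:

```python
def kicker_payoff(p, q):
    """Reduced expected payoff; p = s1(L), q = s2(L)."""
    return (0.2 - 0.4 * p) * q + 0.7 + 0.1 * p

# For each p, the goalie minimizes over q; linearity in q means the
# minimum is attained at q = 0 or q = 1. The kicker then maximizes over p.
best_p, best_value = max(
    ((p, min(kicker_payoff(p, 0.0), kicker_payoff(p, 1.0)))
     for p in (i / 1000 for i in range(1001))),
    key=lambda t: t[1])

print(best_p, round(best_value, 10))  # → 0.5 0.75
```

So the maxmin value is $0.75$ at $s_1(L)=0.5$, matching his answer, even though I don't follow the reasoning.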
First of all, since the expression is linear (first-degree) in $s_2(L)$, isn't it wrong to use the derivative to find the minimum?
Secondly, he wants to minimize over $s_2\left(L\right)$: why does he end up with a value for $s_1(L)$ instead?
Furthermore, shouldn't the notation for the maxmin strategy be $$\arg\max_{s_1}\min_{s_2}$$?
Maybe I am completely wrong, but can someone explain his reasoning? The video is totally unclear to me.
These slides explain it better. When taking the derivative, there are three possible cases: