In the book *Prediction, Learning, and Games*, Chapter 3, Section 7 ("A general lower bound"), page 59, Theorem 3.6 gives a lower bound on the minimax regret. The minimax regret is defined as $$V_{n}^{(N)}=\inf _{P} \sup _{\{\mathcal{F}:|\mathcal{F}|=N\}} \sup _{y^{n} \in \mathcal{Y}^{n}} \max _{i=1, \ldots, N} \sum_{t=1}^{n}\left(\ell\left(\widehat{p}_{t}, y_{t}\right)-\ell\left(f_{i, t}, y_{t}\right)\right),$$ where $\widehat{p}_t$ is the forecaster's prediction at round $t$, $f_{i,t} \in \mathcal{F}$ is the advice of expert $i$ at round $t$, $N$ is the number of experts, and $n$ is the number of rounds the algorithm is run. The theorem establishes the inequality $$V_{n\left\lfloor\log _{2} N\right\rfloor}^{(N)} \geq\left\lfloor\log _{2} N\right\rfloor V_{n}^{(2)}.$$
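To make the quantity inside the $\inf\sup$ concrete, here is a minimal sketch of the regret term for fixed predictions, expert advice, and outcomes. It assumes the absolute loss $\ell(p, y) = |p - y|$ used in Chapter 3 of the book; the function and variable names are illustrative, not from the book's text.

```python
def regret(predictions, expert_advice, outcomes):
    """Regret of the forecaster against the best of N experts.

    predictions   : forecaster predictions p_t in [0, 1]
    expert_advice : list of N lists; expert_advice[i][t] = f_{i,t}
    outcomes      : outcomes y_t in {0, 1}
    """
    loss = lambda p, y: abs(p - y)  # absolute loss (assumption)
    forecaster_loss = sum(loss(p, y) for p, y in zip(predictions, outcomes))
    # max over experts of the cumulative loss difference
    return max(
        forecaster_loss - sum(loss(f, y) for f, y in zip(advice, outcomes))
        for advice in expert_advice
    )

# Two binary experts (constant 0 and constant 1); the forecaster
# hedges at 1/2 on every round, as in the N = 2 lower-bound setup.
experts = [[0, 0, 0, 0], [1, 1, 1, 1]]
preds = [0.5] * 4
ys = [1, 0, 1, 1]
print(regret(preds, experts, ys))  # forecaster loss 2.0, best expert loss 1 -> 1.0
```

The minimax value $V_n^{(N)}$ is then the worst case of this quantity over expert classes and outcome sequences, minimized over forecasting strategies.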
What is the idea behind the proof of this theorem? And why is the expert advice taken to be a sequence of zeros and ones?