I know that a finite hypothesis class $\mathcal{H}$ is PAC-learnable. Suppose I take a binary classification problem with a finite input set: $\mathcal{X}$ finite and $\mathcal{Y}=\{0,1\}$.
Then the hypothesis class $\mathcal{H}$ of all functions from $\mathcal{X}$ to $\mathcal{Y}$ is also finite, and hence PAC-learnable.
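To make the apparent tension concrete, here is the sample-complexity bound I have in mind for a finite class in the realizable case (I'm quoting it from memory, so treat the exact constants as an assumption):

$$ m_{\mathcal{H}}(\epsilon,\delta) \;\leq\; \left\lceil \frac{\log\left(|\mathcal{H}|/\delta\right)}{\epsilon} \right\rceil, \qquad \text{where here } |\mathcal{H}| = |\mathcal{Y}|^{|\mathcal{X}|} = 2^{|\mathcal{X}|}, $$

so for the class of all functions the bound grows linearly in $|\mathcal{X}|$.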
However, the no-free-lunch theorem states that, for any learner trained on $m$ samples, if $|\mathcal{X}| \geq 2m$ then there exists a distribution $\mathcal{D}$ such that $P\left(L_{\mathcal{D}}(\hat{h}) \geq L_{\mathcal{D}}(h^*) + \frac{1}{8}\right) \geq \frac{1}{7}$.
Isn't this a contradiction?