In the book "Understanding machine learning", there is Theorem 6.11 with the following statement
Let ${\cal H}$ be a class and let $\tau_{\cal H}$ be its growth function. Then, for every $\cal D$ and every $\delta \in (0, 1)$, with probability at least $1-\delta$ over the choice of $S \sim {\cal D}^m$, we have, for every $h \in {\cal H}$, $$ |L_{\cal D}(h) - L_S(h)| \leq \frac{4 + \sqrt{\log{\tau_{\cal H}(2m)}}}{\delta \sqrt{2m}}. $$
The theorem establishes the uniform convergence property, but the proof relies on the fact that the considered loss is the 0-1 loss. Where can I read similar proofs for other loss functions (say, the hinge loss)?
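To make the question concrete: my (possibly incomplete) understanding is that for a bounded loss one can instead control the gap via Rademacher complexity, and for a Lipschitz loss such as the hinge loss a contraction argument reduces everything to the Rademacher complexity of ${\cal H}$ itself. Schematically (constants in the confidence term vary between texts):

$$ L_{\cal D}(h) \leq L_S(h) + 2\,\mathfrak{R}_m(\ell \circ {\cal H}) + c\sqrt{\frac{\log(1/\delta)}{2m}}, \qquad \mathfrak{R}_m(\ell \circ {\cal H}) \leq \rho\, \mathfrak{R}_m({\cal H}), $$

where the loss takes values in $[0, c]$, $\rho$ is its Lipschitz constant ($\rho = 1$ for the hinge loss), and $\mathfrak{R}_m$ denotes the Rademacher complexity over samples of size $m$. I am looking for a reference that develops this kind of argument rigorously.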