I am reading Foundations of Machine Learning (1st edition). It seems that most generalization bounds in the literature are stated in terms of Rademacher complexity rather than Gaussian complexity, so I was wondering whether the same bounds still hold for Gaussian complexity. For example, does Theorem 3.1 in the book still hold with Gaussian complexity in place of Rademacher complexity? Judging from the proof, the same argument appears to go through for Gaussian complexity without any changes.
I have also read the paper Rademacher and Gaussian Complexities: Risk Bounds and Structural Results, and Lemma 4 there indicates that the Rademacher complexity can be upper bounded by the Gaussian complexity times a constant. What is this constant? Is it data dependent?
It is an absolute constant: it does not depend on the data, on n, or on the hypothesis class. I did not see an actual value given in any paper or book. But this is not very important, since almost all bounds in learning theory are stated up to absolute constants anyway.
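For what it's worth, here is a sketch of how one explicit value can be extracted from the standard comparison argument (this is my own reconstruction, using the usual definitions of the empirical Rademacher complexity \(\widehat{\mathfrak{R}}_S(F)\) and empirical Gaussian complexity \(\widehat{\mathfrak{G}}_S(F)\), not a derivation taken from the paper). A standard Gaussian \(g_i\) can be written as \(\sigma_i |g_i|\), where \(\sigma_i = \operatorname{sign}(g_i)\) is a Rademacher variable independent of \(|g_i|\), and \(\mathbb{E}|g_i| = \sqrt{2/\pi}\). Since the supremum of linear functions of the \(|g_i|\) is convex, Jensen's inequality gives:

```latex
\[
\widehat{\mathfrak{G}}_S(F)
 = \mathbb{E}_{g}\Big[\sup_{f\in F}\frac{1}{n}\sum_{i=1}^n g_i\, f(x_i)\Big]
 = \mathbb{E}_{\sigma}\,\mathbb{E}_{|g|}\Big[\sup_{f\in F}\frac{1}{n}\sum_{i=1}^n \sigma_i\, |g_i|\, f(x_i)\Big]
 \ge \mathbb{E}_{\sigma}\Big[\sup_{f\in F}\frac{1}{n}\sum_{i=1}^n \sigma_i\, \mathbb{E}|g_i|\, f(x_i)\Big]
 = \sqrt{\tfrac{2}{\pi}}\,\widehat{\mathfrak{R}}_S(F).
\]
```

Rearranging, \(\widehat{\mathfrak{R}}_S(F) \le \sqrt{\pi/2}\,\widehat{\mathfrak{G}}_S(F) \approx 1.2533\,\widehat{\mathfrak{G}}_S(F)\), so in this direction the constant can be taken as \(\sqrt{\pi/2}\), which is indeed independent of the data, of n, and of the hypothesis class.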