I'm reading about correcting the significance level when doing multiple tests. However, I don't understand the reasoning behind it.
Let's take an example. I have a cohort of $10{,}000$ people; about $4000$ of them have a specific disease (group $A$), and the rest do not have this specific disease (group $B$).
Now I have about $100$ other comorbidities, and I would like to test, for each one, whether its prevalence is significantly higher in one of my groups. So I have something like $$ \text{test 1: disease 1, group A vs. group B} \\ \text{test 2: disease 2, group A vs. group B} \\ ... \\ \text{test 100: disease 100, group A vs. group B} $$
Now I've read that, to be correct, we would need to divide the significance level of, say, $\alpha = 0.05$ by $100$, since we are performing $100$ tests. Can someone explain why this is required? After all, I could test $50$ of the diseases this week and the other $50$ next week, and then I'd get a different result.
You're describing the Bonferroni correction, which is designed to ensure that the chance of even a single Type I error across all your tests (the family-wise error rate) is at most $\alpha$. This xkcd comic helps to explain why a correction is needed: if you don't correct, some of your null hypotheses will get rejected purely by chance, and you don't want to misinterpret that as a meaningful result.
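To put numbers on that: if all $100$ null hypotheses are true and the tests are independent, the chance of at least one false positive at $\alpha = 0.05$ is $1 - (1 - 0.05)^{100} \approx 0.994$, i.e. near-certain. A quick sketch (assuming independence, which your comorbidities may not satisfy exactly):

```python
# Family-wise error rate (FWER) for m independent tests, all nulls true.
alpha, m = 0.05, 100

fwer_uncorrected = 1 - (1 - alpha) ** m       # at least one false positive
fwer_bonferroni = 1 - (1 - alpha / m) ** m    # after dividing alpha by m

print(f"uncorrected FWER: {fwer_uncorrected:.3f}")  # ~0.994
print(f"Bonferroni FWER:  {fwer_bonferroni:.3f}")   # ~0.049, i.e. <= alpha
```

That is exactly why dividing $\alpha$ by the number of tests restores the guarantee you thought you had for a single test.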
As you say, it's not perfect: if you treat this as $100$ small experiments rather than one big one, you could excuse yourself from making the correction and get lots of false positives. In principle, scientists are not supposed to do that; in practice it can be a big problem.
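You can see the false positives appear in a simulation. Below is a hypothetical setup (not your actual data): $100$ comorbidities whose prevalence is identical in both groups by construction, so every rejection is a false positive. The group sizes and the $0.3$ prevalence are made-up illustration values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, alpha = 100, 0.05

# For each comorbidity, draw presence/absence for both groups from the SAME
# rate, so every null hypothesis is true and any rejection is a false positive.
p_values = []
for _ in range(n_tests):
    a = rng.binomial(1, 0.3, size=4000)  # group A: 4000 people
    b = rng.binomial(1, 0.3, size=6000)  # group B: 6000 people
    # 2x2 contingency table: has / doesn't have the comorbidity, by group
    table = [[a.sum(), len(a) - a.sum()],
             [b.sum(), len(b) - b.sum()]]
    p_values.append(stats.chi2_contingency(table)[1])

p_values = np.array(p_values)
print("uncorrected rejections:", (p_values < alpha).sum())           # around 5 expected
print("Bonferroni rejections: ", (p_values < alpha / n_tests).sum())  # usually 0
```

With no correction you expect roughly $\alpha \cdot 100 = 5$ spurious "significant" comorbidities; with the Bonferroni threshold of $0.05/100$ they essentially all disappear.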