Let's say I have criteria $1, 2, 3,$ and $4$. I would like the dummy variable to equal $1$ only if a certain minimum number of criteria are met. For example, if $3$ of the $4$ are true, then Dummy $= 1$; if $4$ of the $4$ are true, then Dummy $= 1$. But if only $1$ or $2$ are true, Dummy $= 0$.
What theory, practice, or research should I refer to when deciding how many of these criteria must be met for the dummy to equal $1$? Pointers to readings where you have seen this applied would also be welcome.
The dummy application is in a regression model.
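To make the question concrete, here is a minimal sketch of the construction being asked about: the dummy equals $1$ exactly when at least a chosen threshold of the four criteria holds. The function name, threshold, and data below are all illustrative assumptions, not part of the question.

```python
# Hypothetical sketch: Dummy = 1 iff at least `threshold` of the
# four criteria are met for a given observation.

def make_dummy(criteria, threshold=3):
    """criteria: iterable of booleans, one per criterion."""
    return 1 if sum(bool(c) for c in criteria) >= threshold else 0

# Example observations, each a tuple of four criterion results
observations = [
    (True, True, True, False),   # 3 of 4 met -> Dummy = 1
    (True, True, True, True),    # 4 of 4 met -> Dummy = 1
    (True, False, True, False),  # 2 of 4 met -> Dummy = 0
]
dummies = [make_dummy(obs) for obs in observations]
print(dummies)  # [1, 1, 0]
```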
As answered in the comments, there is no general criterion for how "large" the set (event) must be to construct a valid dummy variable; this is context-dependent and not a mathematical question (mathematically, you can define an indicator function even on the empty set $\emptyset$). From a practical perspective, however, this is an important question, since you want a valid and stable coefficient estimate for your dummy variable. Therefore, when defining a dummy variable, you need to check that enough observations meet your criteria (whatever those criteria are). How much is enough? That depends on the statistical power and (in)stability you would like to achieve (or tolerate). This is a statistical question where, under certain assumptions, you can compute the minimal sample size needed to meet your statistical criteria (e.g., a minimal test power).
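As a rough sketch of the kind of sample-size computation alluded to above: assuming a two-sided $z$-test comparing the Dummy $= 1$ and Dummy $= 0$ groups at a standardized effect size $d$, the textbook approximation $n \ge \left((z_{1-\alpha/2} + z_{1-\beta})/d\right)^2$ gives the minimum observations needed per group. All parameter values below are illustrative assumptions.

```python
import math
from statistics import NormalDist

def min_group_size(effect_size, alpha=0.05, power=0.80):
    """Minimum observations per group (normal approximation,
    two-sided test) to detect a standardized effect of size
    `effect_size` at the given significance level and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Illustrative: detecting a medium effect (d = 0.5) at alpha = 0.05
# with 80% power requires roughly 32 observations with Dummy = 1
# (and as many with Dummy = 0).
print(min_group_size(0.5))  # 32
```

In other words, if fewer observations than this meet your criteria, the coefficient on the dummy will be estimated with little power, which is one concrete way to decide whether "enough" observations satisfy the threshold you chose.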