I am developing a game project where I have to make a lot of decisions of the form "Place object X?". I want to say yes in Y% of the events, where I specify Y as a constant. I use a random generator that guarantees a uniform distribution on its output to make the decisions. I generally request a random integer in [1,100] to get a human-readable "percentage" value. Then I apply a condition of the form "if percentage <= limit: yes, else: no" to decide whether to place the current object. Here "limit" is the constant Y, e.g. 40 for 40%. Note that I don't want a guarantee on the ratio of yes/no decisions over all events; rather, I am making the decision for each event independently. Is this approach flawed from a statistical / mathematical point of view?
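The approach described above can be sketched as follows, assuming Python and its standard `random` module (the names `limit` and `place_object` are illustrative, not from the original):

```python
import random

def place_object(limit):
    """Decide "Place object X?" with probability limit/100.

    Draws a uniform integer in [1, 100] and says yes when it is
    <= limit, e.g. limit=40 gives a 40% yes chance per event.
    """
    percentage = random.randint(1, 100)  # uniform on 1..100 inclusive
    return percentage <= limit
```

With `limit=40`, exactly 40 of the 100 equally likely outcomes (1 through 40) map to yes, so each independent decision is yes with probability exactly 40%.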
As an alternative, I could imagine asking the random generator for a value from {0,1} (as a representation of no/yes) and specifying a non-uniform distribution [0.6, 0.4], following the previous example. However, I don't know whether this approach is better, worse, or "it depends" compared to my current one.
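The alternative above amounts to a weighted draw from {0, 1}. A minimal sketch in Python, assuming the standard library's `random.choices` and the 0.6/0.4 weights from the example:

```python
import random

def place_object_weighted():
    # Draw 0 (no) or 1 (yes) with weights 0.6 / 0.4,
    # i.e. yes with probability 0.4.
    return random.choices([0, 1], weights=[0.6, 0.4])[0] == 1
```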
I would suggest using a Bernoulli distribution $\mathrm{Ber}(p)$ instead, where $p\in[0,1]$ is the probability of saying yes (for example). It is usually associated with flipping a (possibly biased) coin. If you want, you could also choose $p$ at random, for example using a uniform distribution on $[0,1]$. Hope I understood you correctly.
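A Bernoulli$(p)$ sample can be sketched in one line from a uniform $[0,1)$ draw; note this is equivalent to the integer-percentage test in the question with $p = \text{limit}/100$ (Python assumed here):

```python
import random

def bernoulli(p):
    """Return True with probability p, i.e. a Ber(p) sample."""
    return random.random() < p  # random.random() is uniform on [0, 1)
```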