Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space.
This (somewhat philosophical) question concerns the sample space, $\Omega$, or rather the outcomes $\omega \in \Omega$. Commonly, the $\omega$ are introduced as outcomes of "experiments," but this is of course rather vague. Other authors talk about "executions of the probability model," but this description has the same flaw.
If we were to say, for example, that the $\omega$ result from uniformly random draws (with replacement) from $\Omega$, then we seemingly have a circular argument in that we are somehow trying to define probability theory in terms of probability theory.
As far as I can tell, philosophical arguments center more on the interpretation of $\mathbb{P}$ than on the so-called "bearers of probability," $\omega \in \Omega$ (see e.g., https://plato.stanford.edu/entries/probability-interpret/).
So in conclusion, is there a standard accepted theory underlying the generation of outcomes $\omega \in \Omega$? If not, does this not pose problems around the existence of probability spaces in the first place?
Since it is a mathematical structure, all mathematicians care about is whether the logic is sound and self-consistent. We posit a set $\Omega$, whose elements $\omega$ are the outcomes and whose (measurable) subsets are the events. Those are definitions, so they cannot be wrong so long as they are not contradictory.
It sounds as if you are interested in the modeling stage, where these abstract outcomes are associated with observations in the real world. If the connection seems vague, describe a specific situation, such as coin flips. Essentially we are making a map between elements of $\Omega$ and observable results. We choose to physically constrain a system so that we can enumerate all possible observations in advance, and then we observe to find out which one actually occurs in each trial of the experiment.
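To make the modeling stage concrete, here is a minimal sketch of the coin-flip example in Python: a finite sample space for three fair flips, a uniform measure $\mathbb{P}$ defined on its subsets, and one event. The names (`omega_space`, `P`, `at_least_two_heads`) are my own illustrative choices, not standard notation.

```python
from fractions import Fraction
from itertools import product

# Sample space for three fair coin flips: each outcome ω is a
# tuple such as ('H', 'T', 'H'). There are 2^3 = 8 outcomes.
omega_space = list(product("HT", repeat=3))

def P(event):
    """Uniform probability measure: P(A) = |A| / |Ω| for A ⊆ Ω."""
    return Fraction(len(event), len(omega_space))

# An event is just a subset of Ω, here "at least two heads."
at_least_two_heads = {w for w in omega_space if w.count("H") >= 2}

print(P(at_least_two_heads))  # -> 1/2
```

Note that nothing here "generates" the $\omega$; the model only lists them and weights their subsets. The randomness enters when we map one physical trial to one element of `omega_space`.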