Having not done any stats for a few years, I seek clarification regarding the definition of time series given in my textbook. I apologize for the length, but I would be glad to just resolve my main concern (so feel free to ignore my secondary concern).
Background: To begin with, a stochastic process is a family of time-indexed random variables $X(\omega, t)$, where $\omega$ belongs to a sample space $\Omega$ and $t$ belongs to an index set. For a fixed outcome $\omega$, $X_{\omega}(t)$ is a function of $t$ alone, which is called a realization of the stochastic process.
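To make my reading of this concrete, here is a small simulation sketch (my own illustration, not from the textbook): each sampled outcome $\omega$ corresponds to one row, and fixing $\omega$ picks out a single function of $t$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stochastic process X(omega, t): each row is one realization X_omega(t).
# I use a Gaussian random walk purely as an illustrative choice of process.
T = 100          # number of time points t
n_outcomes = 5   # number of sampled outcomes omega

realizations = rng.normal(size=(n_outcomes, T)).cumsum(axis=1)

# Fixing omega (a row index) yields a single function of t -- one realization:
x_omega = realizations[0]
print(x_omega.shape)  # a single series of length T
```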
Definition: A time series is a realization from a certain stochastic process.
Main Concern: Does the definition imply that a time series is a single function (the realization of a stochastic process, corresponding to some fixed outcome $\omega$), or is a time series an assignment of time-dependent functions to each element $\omega$ in the sample space $\Omega$?
In other words, we define a realization of a stochastic process relative to some fixed element $\omega \in \Omega$. But then is a time series the collection of realizations for each $\omega \in \Omega$, or just a realization for a single outcome?
Secondary Concern: Regarding stationary time series, my textbook writes that the exact values of the mean, variance, autocorrelation, and partial autocorrelation parameters "can be calculated if the ensemble of all possible realizations is known. Otherwise, they can be estimated if multiple independent realizations are available."
The ensemble was defined as the collection of all possible realizations of a stochastic process. In light of the author's comments above, I am unsure whether the ensemble refers to the collection of realizations, one for each $\omega \in \Omega$, or whether it instead consists of all possible realizations for each individual $\omega \in \Omega$. The way it is defined makes me suspect the latter, but then I don't see how this information would be relevant to determining the above parameters.
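For what it's worth, here is how I currently understand the textbook's estimation remark in practice (a sketch under my own assumptions, with an AR(1) process chosen only for illustration): given multiple independent realizations, the ensemble mean and variance at each time $t$ are estimated by averaging across realizations rather than across time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate independent realizations of a stationary AR(1) process
# X_t = phi * X_{t-1} + eps_t (my illustrative choice of process).
phi, T, n_realizations = 0.5, 200, 1000
eps = rng.normal(size=(n_realizations, T))
x = np.zeros((n_realizations, T))
for t in range(1, T):
    x[:, t] = phi * x[:, t - 1] + eps[:, t]

# Ensemble estimates: average across realizations (axis 0), one value per t.
ensemble_mean = x.mean(axis=0)  # estimates E[X_t], here approximately 0
ensemble_var = x.var(axis=0)    # estimates Var(X_t), here about 1/(1 - phi**2)
print(ensemble_mean[-1], ensemble_var[-1])
```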