I'm trying to understand the definition of a "statistical experiment". Formally, a statistical experiment (also called a statistical model, or simply an experiment) is defined as a triple $$\mathscr{P}=(\Omega,\mathscr{F}, \{ P_{\theta}:\theta \in \Theta \}),$$
where $(\Omega,\mathscr{F})$ is a sample space, $\Theta$ is a parameter space, and $\{ P_{\theta}:\theta \in \Theta \}$ is a family of probability measures defined on that sample space.
My question is: why is it a family of probability measures? What is an example (or what are examples) of this abstraction?
In particular, consider parameter estimation. As an example, take the problem of estimating the (real-valued) mean of normally distributed random variables with known variance equal to one. More precisely: we have i.i.d. random variables $X_1,X_2,\dots,X_n$ (the given observations) with $X_1 \sim \mathscr{N}(\theta, 1)$. Then we can formulate the corresponding statistical experiment as $$\mathscr{P}=(\Bbb{R}^n,\mathscr{B}(\Bbb{R}^n), \{ P_{\theta}^{\times{n}}:\theta \in \Theta=\Bbb{R} \}).$$
So we define our statistical experiment via an infinite set of possible distributions (namely the set $\{\mathscr{N}(\theta, 1)^{\times{n}}:\theta \in \Bbb{R}\}$). Why do we need a whole set? Why can't we simply say that we are looking for an estimate of the "true" distribution, which is unique?
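To make my question concrete, here is a minimal simulation sketch of this experiment (the names `theta_true` and `theta_hat` are my own hypothetical choices, not standard notation): nature fixes one unknown member of the family, while the estimator is a function on $\Bbb{R}^n$ that must make sense under every $P_\theta$ simultaneously.

```python
import random
import statistics

# The statistical experiment is the FAMILY {N(theta, 1)^n : theta in R}.
# Here we play the role of nature and secretly fix one member of it.
theta_true = 2.0  # hypothetical "true" parameter, unknown to the statistician
n = 10_000

# One realization omega in R^n, drawn from the product measure
# P_{theta_true}^{x n}, i.e. n i.i.d. draws from N(theta_true, 1).
sample = [random.gauss(theta_true, 1.0) for _ in range(n)]

# The estimator (here: the sample mean) is defined on R^n without any
# reference to theta_true; its properties are judged under every P_theta
# in the family, which is why the model is a set of measures.
theta_hat = statistics.mean(sample)
print(theta_hat)  # should be close to 2.0 (std. error is 1/sqrt(n) = 0.01)
```

So the simulation only ever uses one distribution, which is exactly what puzzles me about defining the experiment as a whole family.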