Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space and $X: \Omega \rightarrow \mathbb{R}$ a real random variable (a measurable function, where $\mathbb{R}$ carries the $\sigma$-algebra of Borel sets $\mathcal{B}$). Then $X$ induces a probability measure $\mu_X$ via: $\forall B \in \mathcal{B}: \ \mu_X(B)=\mathbb{P}(\{\omega \in \Omega : X(\omega) \in B\})$.
Now, a statistical model is apparently defined as a set of measures:
$$ \mathcal{P}=\{p_{\theta} : \theta \in \Theta \} $$
Where each $p_{\theta}$ is a probability measure on $(\mathbb{R}, \mathcal{B})$. Am I correct thus far?
My issue now comes with the definition of estimators, which my script defines as a function $Q: \mathcal{P} \rightarrow \Gamma \subseteq \mathbb{R}$, which can equivalently be understood as a function $g: \Theta \rightarrow \Gamma \subseteq \mathbb{R}$ via $Q(p_{\theta})=g(\theta)$.
But when I look at examples there is stuff like:
Let $X_1, \dots, X_n$ be i.i.d. $\sim N(\mu, \sigma^2)$. (For me this means that the $X_i$ are measurable functions and that the measure on the codomain is given by integrating the normal density over the respective set.) Now we define $Q(X_1, \dots, X_n):=\frac{1}{n}\sum_{i=1}^n X_i$.
I have of course seen this example in many books, and I understand what we are doing intuitively. But in light of the definition above, this $Q$ is not well defined, is it? After all, $X_1, \dots, X_n \notin \mathcal{P}$, and the output $\frac{1}{n}\sum_{i=1}^n X_i \notin \Gamma$: it is a linear combination of random variables, i.e. again a measurable function, not a real number. That is why I never understand how these examples fit the definitions; what am I getting wrong?
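To make the distinction concrete for myself, I tried a small simulation (the parameter values $\mu$, $\sigma$, $n$ and the random seed are arbitrary, chosen only for illustration). My understanding is that fixing one outcome $\omega$ corresponds to drawing one concrete sample $(X_1(\omega), \dots, X_n(\omega))$, and only then does $Q$ produce a single real number:

```python
import random
import statistics

# Arbitrary illustration parameters (not from any particular model).
mu, sigma, n = 2.0, 1.5, 10_000

random.seed(0)  # fixes one "omega", so the run is reproducible

# One draw of the whole sample: the realization (X_1(omega), ..., X_n(omega)).
sample = [random.gauss(mu, sigma) for _ in range(n)]

# The estimator Q(X_1, ..., X_n) is itself a random variable;
# evaluated at this omega it yields one real number, the estimate.
estimate = statistics.fmean(sample)

print(estimate)  # close to mu for large n (law of large numbers)
```

So before fixing $\omega$, $Q(X_1, \dots, X_n)$ is a measurable function $\Omega \to \mathbb{R}$; after fixing $\omega$ (here, the seed), it is a number, which is presumably what is meant by "the estimate".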
I'd be very glad if someone could give me an overview of how to understand these estimators and how they are used formally in examples.