The MLE $\tilde{\theta}_n$ based on a sample $X_1, \dots, X_n$ of i.i.d. random variables from the parametric model $\{f(x,\theta): x\in\mathbb{R}, \theta \in \Theta\}$ is called consistent if $\tilde{\theta}_n$ converges in probability to $\theta_0$, i.e. $\tilde{\theta}_n \xrightarrow{P} \theta_0$, whenever the $X_i$ are generated from $f(x,\theta_0)$.
My question is regarding the definition of convergence in probability of estimators.
I know that the MLE $\tilde{\theta}_n$ is itself a random variable (i.e. a measurable function). So, does $\tilde{\theta}_n$ $\xrightarrow{P}$ $\theta_0$ mean that $\tilde{\theta}_n$ converges in probability to $\theta_0$ when viewed as a measurable function? That is, does convergence in probability of an estimator mean the following:
$$ \tilde{\theta}_n \xrightarrow{P} \theta_0 \;\Leftrightarrow\; \mathbb{P}(\{x \in \mathbb{R}: |\tilde{\theta}_n(x)-\theta_0|\ge \epsilon \})\rightarrow 0 \quad \text{for every } \epsilon > 0? $$
My confusion comes from the fact that everywhere I read about consistency of the MLE, the sequence $(\tilde{\theta}_n)$ seems to be treated like a sequence of real numbers, yet convergence in probability $\tilde{\theta}_n \xrightarrow{P} \theta_0$ only makes sense for measurable functions.
Can anyone please help me clarify this? Thank you very much.
I think statisticians think of $(\tilde{\theta}_n)$ as a sequence of random variables, for which convergence in probability is well-defined. Random variables themselves can be viewed as measurable functions from a probability space to the real line.
Formally, one should write $P(\{\omega \in \Omega : |\tilde{\theta}_n(\omega) - \theta_0| > \epsilon\}) \to 0$ for every $\epsilon > 0$, where $\Omega$ is the underlying probability space and $\tilde{\theta}_n(\omega)$ is shorthand for $\tilde{\theta}_n(X_1(\omega), \dots, X_n(\omega))$; people often just write $P(|\tilde{\theta}_n - \theta_0| > \epsilon) \to 0$ to mean the same thing. In particular, the domain of $\tilde{\theta}_n$ is $\Omega$ (or, equivalently, the sample space $\mathbb{R}^n$ of the $n$ observations), not $\mathbb{R}$ as in the displayed definition in the question.
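To make the definition concrete, here is a small Monte Carlo sketch (my own illustration, not part of the original question). In the model $N(\theta_0, 1)$ the MLE of $\theta_0$ is the sample mean, and we can approximate $P(|\tilde{\theta}_n - \theta_0| > \epsilon)$ by simulating many independent realizations $\omega$ (i.e. many samples of size $n$) and checking that the probability shrinks as $n$ grows:

```python
# Monte Carlo check of consistency: the MLE of the mean of a N(theta_0, 1)
# sample is the sample mean. Estimate P(|theta_hat_n - theta_0| > eps)
# by simulating `reps` independent samples of size n for increasing n.
import numpy as np

rng = np.random.default_rng(0)
theta_0, eps, reps = 2.0, 0.1, 20_000

probs = []
for n in [10, 100, 1_000, 10_000]:
    # each row is one realization omega: a sample (X_1(omega), ..., X_n(omega))
    samples = rng.normal(loc=theta_0, scale=1.0, size=(reps, n))
    theta_hat = samples.mean(axis=1)  # the MLE theta_hat_n(omega), one per row
    prob = np.mean(np.abs(theta_hat - theta_0) > eps)
    probs.append(prob)
    print(f"n={n:6d}  P(|theta_hat_n - theta_0| > {eps}) ~ {prob:.4f}")
```

The printed probabilities decrease toward $0$ as $n$ increases, which is exactly the statement $\tilde{\theta}_n \xrightarrow{P} \theta_0$ (for this one choice of $\epsilon$; the definition requires it for every $\epsilon > 0$).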