Let $X_1, \cdots, X_n$ be a random sample from a normal distribution with unknown mean $\mu$ and unknown variance $\sigma^2$. Let $L, U$ be real numbers such that $L < U$. Let $Y_1, \cdots, Y_n$ be "quantized" versions of $X_1, \cdots, X_n$, defined as $$ Y_i = \left\{ \begin{array}{cc} -1 & X_i < L \\ 0 & L \leq X_i \leq U \\ 1 & X_i > U \end{array} \right. $$
How can we determine the maximum likelihood estimators for $\mu$ and $\sigma^2$ based on $Y_1, \cdots, Y_n$ (instead of $X_1, \cdots, X_n$)?
I don't really know how to start this question. I was thinking that the invariance principle for the maximum likelihood estimator could somehow be used here, but I am not sure which function to apply it to.
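One standard way to set this up (a sketch, not a full solution; here $\Phi$ denotes the standard normal CDF, which the question does not introduce): each $Y_i$ takes the values $-1, 0, 1$ with probabilities $\Phi\!\left(\frac{L-\mu}{\sigma}\right)$, $\Phi\!\left(\frac{U-\mu}{\sigma}\right)-\Phi\!\left(\frac{L-\mu}{\sigma}\right)$, and $1-\Phi\!\left(\frac{U-\mu}{\sigma}\right)$ respectively, so the likelihood of the quantized sample is multinomial: $$ L(\mu, \sigma^2) = \left[\Phi\!\left(\tfrac{L-\mu}{\sigma}\right)\right]^{n_{-1}} \left[\Phi\!\left(\tfrac{U-\mu}{\sigma}\right) - \Phi\!\left(\tfrac{L-\mu}{\sigma}\right)\right]^{n_{0}} \left[1 - \Phi\!\left(\tfrac{U-\mu}{\sigma}\right)\right]^{n_{1}} $$ where $n_{-1}, n_0, n_1$ count how many $Y_i$ equal $-1, 0, 1$. The MLEs maximize the log of this expression, which in general has to be done numerically.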
Let's focus on $\hat{\mu}=\frac{X_1+X_2+\dots+X_n}{n}$.
The r.v. $Y_i$ equals $+1$ if the corresponding $X_i$ exceeds the upper threshold $U$ and $-1$ if it falls below the lower threshold $L$ (the zeros do not affect the sum).
So I think it is enough to express $\hat{\mu}$ as a function of the indicators, say
$$\hat{\mu}=\frac{1}{n}\Bigg[\sum_{i=1}^{n} \mathbb{1}_{\{X_i>U\}}-\sum_{i=1}^{n}\mathbb{1}_{\{X_i<L\}}\Bigg]$$
thus the bracketed term is a sum of $1$'s and $-1$'s, one for each $X_i$ that crosses a threshold; note that this estimator is exactly the sample mean $\bar{Y}=\frac{1}{n}\sum_{i=1}^{n} Y_i$ of the quantized observations.
Similar reasoning can be attempted for the variance estimator.
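To make the MLE concrete, here is a minimal numerical sketch. It uses the fact that the quantized sample is multinomial with cell probabilities given by the normal CDF at the thresholds, and maximizes the log-likelihood with `scipy.optimize.minimize`. The thresholds, true parameters, and sample size below are illustrative choices, not from the question.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, counts, L, U):
    """Negative log-likelihood of the quantized counts.

    counts = (n_minus, n_zero, n_plus): how many Y_i equal -1, 0, +1.
    We optimize over (mu, log(sigma)) so that sigma stays positive.
    """
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    p_minus = norm.cdf((L - mu) / sigma)          # P(Y = -1) = P(X < L)
    p_plus = 1.0 - norm.cdf((U - mu) / sigma)     # P(Y = +1) = P(X > U)
    p_zero = 1.0 - p_minus - p_plus               # P(Y =  0) = P(L <= X <= U)
    probs = np.clip(np.array([p_minus, p_zero, p_plus]), 1e-12, 1.0)
    return -np.sum(np.array(counts) * np.log(probs))

def quantized_mle(y, L, U):
    """Numerically maximize the multinomial likelihood of the Y_i."""
    counts = (np.sum(y == -1), np.sum(y == 0), np.sum(y == 1))
    res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(counts, L, U),
                   method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])             # (mu_hat, sigma_hat)

# Illustrative check: quantize a simulated normal sample and recover (mu, sigma).
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)
L_val, U_val = -1.0, 2.0
y = np.where(x < L_val, -1, np.where(x > U_val, 1, 0))
mu_hat, sigma_hat = quantized_mle(y, L_val, U_val)
```

With a reasonably large sample, `mu_hat` and `sigma_hat` land close to the true values even though each observation carries only one of three symbols, which also illustrates why the MLE here is *not* simply $\bar{Y}$.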