So, I came across this excerpt of some notes.
Measurement Error Model:
$X_i = \mu + \epsilon_i, \quad i = 1, 2, \ldots, n$
$\mu$ is a constant parameter (e.g., real-valued, positive)
$\epsilon_1, \epsilon_2, \ldots, \epsilon_n$ i.i.d. with distribution function $G(\cdot)$
($G$ does not depend on $\mu$.)
⇒ $X_1, \ldots, X_n$ i.i.d. with distribution function $F(x) = G(x - \mu)$.
$P = \{(\mu, G) : \mu \in \mathbb{R}, G \in \mathcal{G}\}$ where $\mathcal{G}$ is . . .
I'm trying to understand the idea behind having $F(x)=G(x-µ)$.
Since we know that the error is distributed as $G$, why does $G$ evaluated at $x-\mu$ give me the CDF of $X$ at $x$?
My understanding of the whole thing: $x-\mu$ is the deviation from $\mu$, so evaluating the error CDF at that deviation should give the probability of observing a value of at most $x$.
Just refer to the definition of the CDF and the definition of $X_i$:
$$F(x) = P(X_i \le x) = P(\mu + \epsilon_i \le x) = P(\epsilon_i \le x-\mu) = G(x-\mu).$$
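You can also check this identity empirically. The sketch below (my own illustration, not from the notes) picks a concrete $G$, the standard normal CDF, draws $X_i = \mu + \epsilon_i$, and compares the empirical CDF of the $X_i$ at a test point $x$ with $G(x - \mu)$; the specific choices of `mu`, `n`, and `x` are arbitrary:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
mu = 2.0        # the constant shift parameter (arbitrary choice)
n = 100_000     # sample size

# Draw i.i.d. errors eps_i from a known G. Here G is the standard normal
# CDF, chosen purely for illustration; the identity holds for any G.
eps = rng.standard_normal(n)
x_samples = mu + eps  # X_i = mu + eps_i

def G(t):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

x = 2.5
empirical_F = np.mean(x_samples <= x)  # estimate of P(X <= x)
theoretical = G(x - mu)                # G(x - mu)

print(f"empirical F(x)   = {empirical_F:.4f}")
print(f"G(x - mu)        = {theoretical:.4f}")
```

With $n = 100{,}000$ draws, the two numbers agree to roughly two decimal places, which is exactly what the derivation above predicts.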