I learned the definition of the "identifiability of a statistical model" as follows: let $W$ be a parameter space. If the map that sends $w \in W$ to the distribution $p(\cdot \mid w)$ is one-to-one, the model is called identifiable.
My question is why this condition is meaningful.
In my opinion, for a non-identifiable model, if we consider the quotient space $W/\sim$, where $a \sim b$ means $p(\cdot \mid a) = p(\cdot \mid b)$, that space need not be a manifold and is therefore difficult to treat. Is this why we want an identifiable model? Is my guess correct? Is anyone here familiar with this area (statistical learning theory)? Any advice would be helpful, thanks!
Consider a model with a Gaussian distribution family $N(a^2, 1)$, indexed by the scalar parameter $a \in \mathbb{R}$.
The parameter $a$ is not identified because two distinct values of $a$, say $2$ and $-2$, lead to the same distribution $N(4, 1)$.
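A quick way to see this is to compare the densities induced by $a = 2$ and $a = -2$ pointwise. This is a minimal sketch using a hand-rolled Gaussian density (the helper `normal_pdf` is my own, not from any library):

```python
import math

def normal_pdf(x, mean, var=1.0):
    """Density of N(mean, var) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# In the model N(a^2, 1), both a = 2 and a = -2 give mean a^2 = 4,
# so the two induced densities agree at every point.
for x in [-1.0, 0.0, 3.5, 4.0, 10.0]:
    assert normal_pdf(x, mean=2 ** 2) == normal_pdf(x, mean=(-2) ** 2)
print("N(2^2, 1) and N((-2)^2, 1) are the same distribution")
```

Since the densities coincide everywhere, no dataset can distinguish the two parameter values.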
If your main aim is to pin down the "true" value of the parameter, identifiability matters. In the aforementioned model, unless the true value of $a$ is zero, you cannot pin down its true value, no matter how much information you obtain from observations generated by this model. Put differently, if you have infinitely many independent observations $X_1, X_2, \ldots$ from this model, then the sample mean $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$ converges almost surely to $a^2$ as $n \rightarrow \infty$. However, the value of $a$ itself can be either $+\sqrt{\bar{X}_n}$ or $-\sqrt{\bar{X}_n}$, hence the impossibility of pinning down the true value of the parameter of interest.
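The argument above can be simulated directly: even with a large sample, the data determine $a^2$ but not the sign of $a$. A small sketch (the choice `true_a = -2.0` and the sample size are illustrative assumptions):

```python
import random

random.seed(0)
true_a = -2.0  # hypothetical "true" parameter; the data only reveal a^2 = 4
n = 100_000

# Draw n independent observations from N(a^2, 1).
samples = [random.gauss(true_a ** 2, 1.0) for _ in range(n)]

# The sample mean is a consistent estimator of a^2 ...
xbar = sum(samples) / n
print(f"sample mean (estimates a^2): {xbar:.3f}")

# ... but +sqrt(xbar) and -sqrt(xbar) fit the data equally well.
a_plus, a_minus = xbar ** 0.5, -(xbar ** 0.5)
print(f"equally plausible values of a: {a_plus:.3f} or {a_minus:.3f}")
```

No matter how large `n` grows, nothing in the sample favors one sign over the other.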
Note that although $a$ is not identified, a function of it, namely $a^2$, is identified.