Consider a family $S$ of probability density functions on $X$, i.e. functions $p: X \to \mathbb{R}$ such that $p(x) \geq 0$ for all $x \in X$ and $\int_X p(x)\,dx = 1$. Suppose each element of $S$ can be parameterized by real-valued variables $[\xi^1, \dots, \xi^n]$, that is,
$S = \{p(x;\xi) : \xi = [\xi^1, \dots, \xi^n] \in E \subseteq \mathbb{R}^n\}$, where $x \in X$ and the mapping $\xi \mapsto p(x;\xi)$ is injective.
My question is: why do we need $\xi \mapsto p(x;\xi)$ to be injective?
I agree with the other answer, but I want to add why this assumption is so commonly made in statistics. The usual definitions are:
A statistical model $\mathcal{P}$ is a collection of probability measures on the sample space $(\mathcal{X}, \mathcal{B})$. The collection of all probability measures is termed the full model, or the full nonparametric model.
A model $\mathcal{P}$ is parameterized with parameter space $\Theta$ if there exists a surjective map $\Theta \to \mathcal{P}: \theta \mapsto P_\theta$, called the parameterization of $\mathcal{P}$.
A parameterization of a statistical model $\mathcal{P}$ is identifiable if the parameterization is injective.
Injectivity means that no two distinct parameter values give rise to the same distribution. So the point is that injectivity is precisely the requirement of identifiability, a basic assumption of any well-posed statistical model: without it, even observing the distribution exactly would not let you recover the parameter.
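To make this concrete, here is a small sketch (my own illustrative example, not from the question) of a non-identifiable parameterization: the normal family $N(0, \theta^2)$ indexed by $\theta \in \mathbb{R} \setminus \{0\}$. Since $\theta$ and $-\theta$ give the same variance, the map $\theta \mapsto p(\cdot\,;\theta)$ is not injective, and no amount of data can distinguish the two parameter values.

```python
import math

def density(x, theta):
    """Density of N(0, theta^2) at x.

    This parameterization is NOT identifiable: theta and -theta
    produce the same variance theta**2, hence the same density.
    """
    sigma2 = theta ** 2
    return math.exp(-x * x / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# Evaluate both densities on a grid: they agree everywhere.
xs = [i / 10 for i in range(-30, 31)]
p_plus = [density(x, 1.5) for x in xs]
p_minus = [density(x, -1.5) for x in xs]
assert p_plus == p_minus  # theta = 1.5 and theta = -1.5 are indistinguishable
```

Reparameterizing by $\sigma = |\theta| > 0$ restores injectivity, which is exactly why definitions of a statistical manifold require it from the start.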
The Wikipedia page on identifiability is a good place to read more about this concept.