I am reading Duda's *Pattern Classification* (2nd ed.).
On page 38 (Chapter 3: Maximum likelihood and Bayesian estimation), I do not understand how the authors obtain the following:
$$P(x\mid \mathbf{e}^{\mathcal{P}}) = P(x\mid e_{\mathcal{P}_1},e_{\mathcal{P}_2},\ldots,e_{\mathcal{P}_{|\mathcal{P}|}}) \tag A$$
$$ \sum_{\text{all } i,j,\ldots, k} P(x\mid\mathcal{P}_{1i},\mathcal{P}_{2j},\ldots,\mathcal{P}_{|\mathcal{P}|k})\ P(\mathcal{P}_{1i},\mathcal{P}_{2j},\ldots,\mathcal{P}_{|\mathcal{P}|k}\mid e_{\mathcal{P}_1},\ldots,e_{\mathcal{P}_{|\mathcal{P}|}}) \tag B$$
$$ \sum_{\text{all } i,j,\ldots, k} P(x\mid\mathcal{P}_{1i},\mathcal{P}_{2j},\ldots,\mathcal{P}_{|\mathcal{P}|k})\ P(\mathcal{P}_{1i}\mid e_{\mathcal{P}_1})\ P(\mathcal{P}_{2j}\mid e_{\mathcal{P}_2})\cdots P(\mathcal{P}_{|\mathcal{P}|k}\mid e_{\mathcal{P}_{|\mathcal{P}|}}) \tag C$$
I cannot see which rule the authors used to derive Eq. B from Eq. A, and then Eq. C from Eq. B.
I would really appreciate a hint or a good reference for this.
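For what it's worth, here is a small numeric sketch of what I *think* Eq. B is computing: a sum over all joint states of the parent nodes, weighting each conditional table entry by the parents' posterior given the evidence. All numbers and variable names here are my own invention, just to make the sum concrete; please correct me if this reading is wrong.

```python
# Sketch of Eq. (B) with two binary parents P1, P2 and a binary x.
# All probabilities below are made up for illustration only.

# P(x | P1=i, P2=j): conditional probability table for x
p_x_given_parents = {
    (0, 0): [0.9, 0.1],
    (0, 1): [0.6, 0.4],
    (1, 0): [0.3, 0.7],
    (1, 1): [0.2, 0.8],
}

# P(P1=i, P2=j | e): joint posterior of the parents given the evidence
p_parents_given_e = {
    (0, 0): 0.4,
    (0, 1): 0.3,
    (1, 0): 0.2,
    (1, 1): 0.1,
}

# Eq. (B): P(x | e) = sum over all parent states (i, j) of
#          P(x | P1i, P2j) * P(P1i, P2j | e)
p_x_given_e = [
    sum(p_x_given_parents[ij][x] * p_parents_given_e[ij]
        for ij in p_parents_given_e)
    for x in range(2)
]

print(p_x_given_e)       # a proper distribution over x
print(sum(p_x_given_e))  # sums to 1 (up to rounding)
```

If that is right, then Eq. C would replace the joint posterior `p_parents_given_e` with a product of per-parent posteriors, which only matches Eq. B exactly when the parents are independent given the evidence.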