Given the Markov blanket $\mathit{MB}(X)$ of a node $X$, I am told that $$P(X \vert \mathit{MB}(X)) = \alpha\, P(X \vert U_{1}, \cdots , U_{n}) \prod_{Y_{i}} P(Y_{i} \vert X, Z_{i1}, \cdots)$$
where $\alpha$ is a normalization constant and $\mathit{MB}(X)$ is depicted in the diagram reproduced from Artificial Intelligence: A Modern Approach, 3rd Edition: the $U_{i}$ are the parents of $X$, the $Y_{i}$ are its children, and the $Z_{ij}$ are the other parents of the children $Y_{i}$.
Thus far I've been unable to figure out how this equation is derived. One approach I've tried is applying the exact-inference equation together with the factorized joint distribution of a Bayesian network: $$P(X \vert \text{e}) = \alpha P(X, \text{e}) = \alpha \sum_{\text{y}} P(X, \text{e}, \text{y})$$ $$P(x_{1}, \cdots ,x_{k}) = \prod_{i=1}^{k}P(x_{i} \vert \textit{parents}(X_{i}))$$ where $\pmb E$ is the set of evidence variables and $\text{e}$ is their observed assignment, $\pmb Y$ is the set of hidden variables with assignments $\text{y}$, the variables of the whole network are $\pmb X = \{X\} \cup \pmb E \cup \pmb Y$, and $\textit{parents}(X_{i})$ denotes the parent nodes of the node corresponding to the RV $X_{i}$ in the Bayesian network.
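For concreteness, here is how I understand these two equations on a toy chain network $A \to B \to C$ (my own example, not from the book), querying $A$ with evidence $C = c$ and hidden variable $B$:

```latex
% Chain-rule factorization of the chain A -> B -> C:
P(a, b, c) = P(a)\, P(b \mid a)\, P(c \mid b)
% Querying A given evidence C = c, summing out the hidden variable B:
P(A \mid c) = \alpha \sum_{b} P(A)\, P(b \mid A)\, P(c \mid b)
```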
The issue I'm facing is that when I expand $P(X, \mathit{MB}(X))$ this way, I end up with factors involving the parents of the $Z_{ij}$ nodes, which do not appear on the RHS of the equation I was given. Can someone show me the correct derivation of the RHS and explain the approach used?
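Concretely, with the symbols above and writing $\pmb W$ for the variables outside $\{X\} \cup \mathit{MB}(X)$, the expansion I obtain looks roughly like this (my own attempt, so it may itself contain a mistake):

```latex
\begin{align*}
P(X \mid \mathit{MB}(X))
  &= \alpha\, P(X, \mathit{MB}(X)) \\
  &= \alpha \sum_{\text{w}} P(X \vert U_{1}, \cdots, U_{n})
     \prod_{Y_{i}} P(Y_{i} \vert X, Z_{i1}, \cdots)
     \prod_{i,j} P(Z_{ij} \vert \textit{parents}(Z_{ij})) \cdots
\end{align*}
```

and I don't see how extra factors such as $P(Z_{ij} \vert \textit{parents}(Z_{ij}))$ are eliminated.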