In Bayesian networks we often represent conditional independence assumptions like the following:
$$P(X|A, B, C) = P(X|A) \tag{eq.1}$$
This says that X is conditionally independent of B and C given A. In a Bayesian network, I'd imagine the simplest representation of this being a Y-shaped graph in which the two top nodes B and C each point to the center node A, which in turn has a single descendant X.
However, a network of this structure also makes other conditional independence assumptions, such as
$$P(X|A, B) = P(X|A) \tag{eq.2}$$
$$P(X|A, C) = P(X|A) \tag{eq.3}$$
My question is: are (eq.2) and (eq.3) implied by (eq.1)? I cannot seem to prove this implication. If not, why should we use (eq.1) to represent the independence assumptions of the Bayesian network, given that it does not exhaustively capture all of the independencies?
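For concreteness, here is a minimal numeric sanity check (my own sketch, not a proof: binary variables, random conditional probability tables, hypothetical names) that a network with this Y-shaped structure does satisfy all three statements at once:

```python
import numpy as np

rng = np.random.default_rng(0)

# Y-shaped network: B -> A <- C, A -> X, all variables binary.
# Random CPTs; the specific numbers don't matter for the check.
pB = rng.dirichlet(np.ones(2))                  # P(B)
pC = rng.dirichlet(np.ones(2))                  # P(C)
pA_BC = rng.dirichlet(np.ones(2), size=(2, 2))  # P(A | B, C), indexed [b, c, a]
pX_A = rng.dirichlet(np.ones(2), size=2)        # P(X | A), indexed [a, x]

# Full joint P(X, A, B, C), indexed [x, a, b, c].
joint = np.einsum('b,c,bca,ax->xabc', pB, pC, pA_BC, pX_A)

def cond(joint_xabc, axes_given):
    """P(X | given axes): marginalize out the rest, normalize over X (axis 0)."""
    keep = (0,) + tuple(axes_given)
    drop = tuple(i for i in range(4) if i not in keep)
    m = joint_xabc.sum(axis=drop)
    return m / m.sum(axis=0, keepdims=True)

pX_given_ABC = cond(joint, (1, 2, 3))  # P(X | A, B, C)
pX_given_AB  = cond(joint, (1, 2))     # P(X | A, B)
pX_given_AC  = cond(joint, (1, 3))     # P(X | A, C)
pX_given_A   = cond(joint, (1,))       # P(X | A)

# All three hold for this structure (True three times):
print(np.allclose(pX_given_ABC, pX_given_A[:, :, None, None]))  # eq.1
print(np.allclose(pX_given_AB,  pX_given_A[:, :, None]))        # eq.2
print(np.allclose(pX_given_AC,  pX_given_A[:, :, None]))        # eq.3
```

This only demonstrates the independencies numerically for one random parameterization; it is the algebraic implication from (eq.1) alone that I cannot establish.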