I read a paper where the authors factorized a conditional probability as follows:
$P(a|b, c)\propto P(a|b)P(a|c)$.
They say that they can do that because $b$ and $c$ are causally independent (they are using graphical models), and cite the paper: "Exploiting Causal Independence in Bayesian Network Inference" to justify this. Under which assumptions can this be true? Honestly, I don't see how this statement is true. Thanks for your comments.
You haven't said which paper you are reading, and without further clarification I would agree that the claim as you have written it is not true. However, the actual construction of the causal independence model in Exploiting Causal Independence in Bayesian Network Inference requires more care in specifying the random variables. Let $B$ and $C$ be parents of the node $A$. Then saying $B$ and $C$ are "causally independent" amounts to saying there are random variables $X_b$ and $X_c$ (in the terminology of the paper, $X_b$ is the "contribution of $B$ to $A$") and a base combination operator $*$ such that $A = X_b * X_c$ and $$ I(X_b, C \mid B), \qquad I(X_c, B \mid C), \qquad I(X_b, X_c \mid \{B, C\}). $$
Here the notation $I(X, Y \mid Z)$ is taken to mean that $X$ is independent of $Y$ given $Z$. Once this has been set up, one gets $$ P(A = a \mid B, C) = \sum_{\alpha_1 * \alpha_2 = a} P(X_b = \alpha_1 \mid B)\cdot P(X_c = \alpha_2 \mid C). $$ So, without having read the paper you are asking about, this is how I would interpret the statement in your question, rather than the misleading interpretation that $p(a \mid b, c)$ is proportional to a product in the usual sense of conditional independence. Basically, representing causal independence requires structure beyond the basic conditional-independence DAG model, and using the notation most commonly associated with probabilistic DAGs in a mixed probabilistic/causal setting can cause confusion.
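To make the sum concrete, here is a small numerical sketch using the noisy-OR model, which is the standard example of causal independence (the base combination operator $*$ is logical OR, and all the specific probabilities below are made-up values for illustration, not from the paper):

```python
from itertools import product

# Contribution variables: X_b depends only on B, X_c depends only on C.
# These activation probabilities are illustrative assumptions.
p_xb_given_b = {0: 0.0, 1: 0.8}   # P(X_b = 1 | B = b)
p_xc_given_c = {0: 0.0, 1: 0.6}   # P(X_c = 1 | C = c)

def p_a_given_bc(a, b, c):
    """P(A = a | B = b, C = c), computed by summing over all
    contributions (alpha1, alpha2) with alpha1 OR alpha2 == a:
    sum of P(X_b = alpha1 | B = b) * P(X_c = alpha2 | C = c)."""
    total = 0.0
    for alpha1, alpha2 in product([0, 1], repeat=2):
        if (alpha1 | alpha2) == a:   # the base combination operator * is OR here
            p1 = p_xb_given_b[b] if alpha1 else 1 - p_xb_given_b[b]
            p2 = p_xc_given_c[c] if alpha2 else 1 - p_xc_given_c[c]
            total += p1 * p2
    return total

# Sanity check: the resulting conditional distributions are normalized.
for b, c in product([0, 1], repeat=2):
    assert abs(p_a_given_bc(0, b, c) + p_a_given_bc(1, b, c) - 1.0) < 1e-12

# With both causes active, P(A=1 | B=1, C=1) = 1 - (1-0.8)*(1-0.6) = 0.92,
# the familiar noisy-OR combination rule.
print(p_a_given_bc(1, 1, 1))  # 0.92
```

Note that the factors $P(X_b = \alpha_1 \mid B)$ and $P(X_c = \alpha_2 \mid C)$ each involve only one parent, which is exactly what the three independence statements buy you; the result is still a sum of products, not a single product $P(a \mid b)\,P(a \mid c)$.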