I noticed this while working on another problem. My intuition is that the following statement is true, but I am not sure: if two pieces of evidence each increase the probability of an event, then together they also increase it.
Let $A$ be an event, and let $E_1$ and $E_2$ be two pieces of evidence. By Bayes' theorem, $$P(A\mid E_1) = \frac{P(E_1\mid A)}{P(E_1)} P(A), \qquad P(A\mid E_2) = \frac{P(E_2\mid A)}{P(E_2)} P(A).$$ Since each of $E_1$ and $E_2$ individually increases the chance of $A$, we have $\frac{P(E_1\mid A)}{P(E_1)}>1$ and $\frac{P(E_2\mid A)}{P(E_2)}>1$. Conditioning on both pieces of evidence, $$P(A\mid E_1,E_2) = \frac{P(E_2\mid A,E_1)}{P(E_2\mid E_1)}P(A\mid E_1) = \frac{P(E_2\mid A,E_1)}{P(E_2\mid E_1)}\frac{P(E_1\mid A)}{P(E_1)} P(A).$$ If $E_1$ and $E_2$ are independent: $$ \frac{P(E_2\mid A,E_1)}{P(E_2\mid E_1)} = \frac{P(E_2\mid A)}{P(E_2)} >1 \Rightarrow P(A\mid E_1, E_2) > P(A),$$ so both pieces of evidence together still increase the chance of $A$. Thus, for the joint evidence to decrease the chance of $A$, it must be that $E_1$ and $E_2$ are dependent, with $\frac{P(E_2\mid A,E_1)}{P(E_2\mid E_1)}<1$, and this factor must overpower $\frac{P(E_1\mid A)}{P(E_1)}$. I am not confident in this argument. Did I miss anything?
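The chain-rule identity $P(A\mid E_1,E_2) = \frac{P(E_2\mid A,E_1)}{P(E_2\mid E_1)}P(A\mid E_1)$ can be checked numerically. Here is a minimal sketch on a hypothetical joint distribution over the eight atoms of $(E_1, E_2, A)$; the weights are arbitrary, chosen only to sum to $36$:

```python
from fractions import Fraction as F

# Hypothetical joint distribution: atoms are (e1, e2, a) indicator triples,
# with arbitrary weights summing to 36 (exact arithmetic via Fraction).
atoms = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]
weights = [6, 5, 4, 6, 3, 4, 5, 3]
p = {atom: F(w, 36) for atom, w in zip(atoms, weights)}

def prob(pred):
    """Total probability of the atoms satisfying pred."""
    return sum(q for atom, q in p.items() if pred(atom))

# Left side: P(A | E1, E2) computed directly.
lhs = prob(lambda o: o[0] and o[1] and o[2]) / prob(lambda o: o[0] and o[1])

# Right side: the chain-rule decomposition from the derivation above.
p_e2_given_a_e1 = prob(lambda o: o[0] and o[1] and o[2]) / prob(lambda o: o[0] and o[2])
p_e2_given_e1 = prob(lambda o: o[0] and o[1]) / prob(lambda o: o[0])
p_a_given_e1 = prob(lambda o: o[0] and o[2]) / prob(lambda o: o[0])
rhs = p_e2_given_a_e1 / p_e2_given_e1 * p_a_given_e1
```

The two sides agree for any joint distribution with nonzero denominators; the identity itself is not where the argument can fail.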
Thanks,
I'm concerned about this step: you claim that if $E_1$ and $E_2$ are independent, then
$$ \frac{P(E_2 \mid A, E_1)}{P(E_2 \mid E_1)} = \frac{P(E_2 \mid A)}{P(E_2)} $$
presumably by equating numerator with numerator, and denominator with denominator. I see the denominators being equal, but not the numerators. For instance, suppose $E_1, E_2, A$ all have probability $1/2$, and all are pairwise independent—but, either exactly one of them occurs, or all three of them jointly occur, all with probability $1/4$. Then $P(E_2 \mid A, E_1) = 1$, but $P(E_2 \mid A) = P(E_2) = 1/2$.
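The four-outcome example above is easy to verify by direct enumeration. A minimal sketch, encoding each equally likely outcome as an $(e_1, e_2, a)$ indicator triple:

```python
# Four equally likely outcomes: exactly one of E1, E2, A occurs, or all three do.
outcomes = {
    (1, 0, 0): 0.25,  # only E1
    (0, 1, 0): 0.25,  # only E2
    (0, 0, 1): 0.25,  # only A
    (1, 1, 1): 0.25,  # all three
}

def prob(pred):
    """Total probability of the outcomes satisfying pred."""
    return sum(p for o, p in outcomes.items() if pred(o))

p_e1 = prob(lambda o: o[0])                 # 1/2
p_e2 = prob(lambda o: o[1])                 # 1/2
p_e1e2 = prob(lambda o: o[0] and o[1])      # 1/4 = P(E1)P(E2): pairwise independent

# The denominators of the claimed identity agree...
p_e2_given_e1 = p_e1e2 / p_e1               # 1/2 = P(E2)
# ...but the numerators do not:
p_e2_given_a_e1 = prob(lambda o: o[0] and o[1] and o[2]) / prob(lambda o: o[0] and o[2])  # 1
p_e2_given_a = prob(lambda o: o[1] and o[2]) / prob(lambda o: o[2])                       # 1/2
```

So pairwise independence of $E_1$ and $E_2$ does not force $P(E_2 \mid A, E_1) = P(E_2 \mid A)$.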
Here's a counterexample to the original problem:
Please excuse the revolting color scheme; I'm using an old tool and I'm still working out how to add more colors. Broadly speaking, $A$ is represented by the bottom half of the square, $E_1$ is a bottom-heavy trapezoid toward the left, and $E_2$ is a bottom-heavy trapezoid toward the right. They intersect as shown above.
Graphically, the basic idea is that $E_1$ and $E_2$ are individually bottom-heavy, so they increase the probability of $A$. But their intersection, the joint event $E_1, E_2$, is top-heavy, so it decreases the probability of $A$. It remains only to check that $E_1$ and $E_2$ are indeed independent.
The probability table is as follows:
$$ \begin{array}{c|c} & P(\cdot) \\ \hline \emptyset & \frac{75}{256} \\ E_1 & \frac{15}{256} \\ E_2 & \frac{15}{256} \\ A & \frac{25}{256} \\ E_1, E_2 & \frac{23}{256} \\ E_1, A & \frac{45}{256} \\ E_2, A & \frac{45}{256} \\ E_1, E_2, A & \frac{13}{256} \end{array} $$
Note first that
$$ P(A) = \frac{25+45+45+13}{256} = \frac{1}{2} $$ $$ P(E_1) = P(E_2) = \frac{15+23+45+13}{256} = \frac{3}{8} $$
Now,
$$ P(E_1, E_2) = \frac{23+13}{256} = \frac{9}{64} = P(E_1)P(E_2) $$
so $E_1$ and $E_2$ are independent. Next,
$$ P(A \mid E_1) = \frac{P(A, E_1)}{P(E_1)} = \frac{45+13}{15+23+45+13} = \frac{29}{48} > \frac{1}{2} = P(A) $$
and similarly for $P(A \mid E_2)$. But, on the other hand,
$$ P(A \mid E_1, E_2) = \frac{P(A, E_1, E_2)}{P(E_1, E_2)} = \frac{13}{23+13} = \frac{13}{36} < \frac{1}{2} = P(A) $$
So $E_1$ and $E_2$ individually increase the probability of $A$, but jointly decrease it.
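The whole table can be verified mechanically with exact arithmetic. A minimal sketch, keying each of the eight atoms by which of $(E_1, E_2, A)$ hold:

```python
from fractions import Fraction as F

# The probability table above, as a map from (e1, e2, a) atoms to probabilities.
p = {
    (0, 0, 0): F(75, 256),   # neither event
    (1, 0, 0): F(15, 256),   # E1 only
    (0, 1, 0): F(15, 256),   # E2 only
    (0, 0, 1): F(25, 256),   # A only
    (1, 1, 0): F(23, 256),   # E1, E2
    (1, 0, 1): F(45, 256),   # E1, A
    (0, 1, 1): F(45, 256),   # E2, A
    (1, 1, 1): F(13, 256),   # E1, E2, A
}

def prob(pred):
    """Total probability of the atoms satisfying pred."""
    return sum(q for atom, q in p.items() if pred(atom))

P_A = prob(lambda o: o[2])                                          # 1/2
P_E1 = prob(lambda o: o[0])                                         # 3/8
P_E2 = prob(lambda o: o[1])                                         # 3/8
P_E1E2 = prob(lambda o: o[0] and o[1])                              # 9/64: independence
P_A_given_E1 = prob(lambda o: o[0] and o[2]) / P_E1                 # 29/48 > 1/2
P_A_given_E2 = prob(lambda o: o[1] and o[2]) / P_E2                 # 29/48 > 1/2
P_A_given_E1E2 = prob(lambda o: o[0] and o[1] and o[2]) / P_E1E2    # 13/36 < 1/2
```

Every claim checks out: $E_1$ and $E_2$ are independent, each raises $P(A)$ from $\frac{1}{2}$ to $\frac{29}{48}$, yet together they lower it to $\frac{13}{36}$.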