I have a question based on an exercise from Grimmett and Stirzaker's book, "Probability and Random Processes".
I don't understand the fact that
$$ \mathbb{P}(A \cap B | T) = \mathbb{P}(A | T)\mathbb{P}(B | T) \qquad \& \qquad \mathbb{P}(A \cap B | T^c) = \mathbb{P}(A | T^c)\mathbb{P}(B | T^c) $$
The conditional independence of $A$ and $B$, given $T$ or $T^c$, is being attributed to the independence of $A$ and $B$. However, I've seen that independence does not imply conditional independence.
So, my questions are:
Why are the above equations correct? (Can we show their validity in a mathematically rigorous way?)
In general, when does independence imply conditional independence, as it does in this case?
Thank you in advance

There seems to be an implicit assumption that the reliability of witness A, the reliability of witness B, and whether T occurred are all independent, i.e. both unconditionally and conditionally independent. With that assumption, the book's argument is valid.
Without that implicit assumption, it is possible to construct a counterexample, as you suspected. Consider the following case with $\alpha=\beta=0.9$ and $\mathbb P(T)=0.001$:
The two reliabilities are unconditionally independent, but here $\mathbb P(T \mid A \cap B) = \frac{1}{11} \ne \frac{81}{1080}$, i.e. $0.0909\ldots \ne 0.075$.
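For reference, the $\frac{81}{1080}$ figure is what the calculation yields when conditional independence given $T$ and $T^c$ is assumed. A quick sketch of that Bayes computation (assuming the standard two-witness setup, with $\alpha, \beta$ the witnesses' reliabilities and $p = \mathbb P(T)$ the prior):

```python
from fractions import Fraction

# Assumed setup: witnesses A and B each tell the truth with probability
# alpha and beta respectively, conditionally independently given T or T^c.
alpha = Fraction(9, 10)
beta = Fraction(9, 10)
p = Fraction(1, 1000)  # prior P(T)

# Bayes' rule: P(T | A ∩ B) under conditional independence.
numerator = alpha * beta * p
denominator = alpha * beta * p + (1 - alpha) * (1 - beta) * (1 - p)
posterior = numerator / denominator

print(posterior)         # 81/1080 in lowest terms, i.e. 3/40
print(float(posterior))  # 0.075
```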
Since $\frac{1}{11}$ is still small, the book's conclusion ("somewhat small for a judicial conclusion") remains reasonable. In fact, $\frac{1}{11}$ is the highest possible value of $\mathbb P(T \mid A \cap B)$ given unconditional independence of the two reliabilities.
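On the general question of when unconditional independence fails to give conditional independence, here is a minimal self-contained illustration (separate from the witness construction above): take two independent fair bits and condition on their XOR. The events "first bit is 1" and "second bit is 1" are independent, but given the XOR they become perfectly dependent.

```python
from itertools import product
from fractions import Fraction

# Sample space: two independent fair bits (x, y); each outcome has prob 1/4.
outcomes = list(product([0, 1], repeat=2))
prob = {w: Fraction(1, 4) for w in outcomes}

A = {w for w in outcomes if w[0] == 1}          # event: first bit is 1
B = {w for w in outcomes if w[1] == 1}          # event: second bit is 1
T = {w for w in outcomes if w[0] ^ w[1] == 1}   # event: XOR of the bits is 1

def P(E):
    return sum(prob[w] for w in E)

def P_cond(E, F):
    return P(E & F) / P(F)

# A and B are unconditionally independent:
assert P(A & B) == P(A) * P(B)

# ...but not conditionally independent given T:
print(P_cond(A & B, T))              # 0   (A ∩ B = {(1,1)} is disjoint from T)
print(P_cond(A, T) * P_cond(B, T))   # 1/4
```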