I'm having trouble with a pretty basic idea: I just don't know how to argue that two events are independent. I know the definition: $$P(A)P(B)=P(A\cap B)$$ but if we are given two events, like the rolls of two different dice, how do we find $P(A\cap B)$? When I was taught independence of events, I was asked to write out the probability chart, but for situations that aren't as small and easy to work with as dice, do we simply argue/assume that things are independent and work from there?
Most of the time it's very clear when two things aren't independent, but I'm having trouble seeing how to prove that things are independent. For instance, in the coupon collector's problem we say that the waiting times for the coupons are independent geometric random variables; how would we explicitly calculate the joint probability $P(X_1 = m \cap X_2 = n)$, where $X_1$ and $X_2$ are the waiting times for the first and second coupon respectively?
An equivalent formulation of independence that is closer to how we reason is $$P(A\mid B) = P(A\mid\bar{B}) = P(A),$$ assuming $0 < P(B) < 1$.
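To see why this is equivalent to the product definition, expand the conditional probabilities (still assuming $0 < P(B) < 1$):
$$P(A\mid B) = \frac{P(A\cap B)}{P(B)} = P(A) \iff P(A\cap B) = P(A)P(B),$$
and, given the product rule, the same holds for the complement:
$$P(A\mid \bar{B}) = \frac{P(A) - P(A\cap B)}{1 - P(B)} = \frac{P(A) - P(A)P(B)}{1 - P(B)} = P(A).$$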
With regard to random variables, saying that $X$ and $Y$ are independent amounts to saying that the distribution of $X$ given knowledge about $Y$ (i.e., given that $Y \in C$ for some Borel set $C$ with $P(Y \in C) > 0$) is the same as the unconditional distribution of $X$.
In the dice example, it is part of the physical understanding of the system that the roll of the first die has no impact on the result of the second one; thus, the distribution of the second roll must be invariant under any knowledge about the first.
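One way to see this concretely is a quick Monte Carlo sanity check (an illustration, not a proof): simulate many pairs of rolls and compare the empirical $P(A\cap B)$ with $P(A)P(B)$, here for the arbitrarily chosen events $A$ = "first die shows a 6" and $B$ = "second die is even".

```python
import random

random.seed(0)  # fixed seed so the check is reproducible

N = 200_000
count_A = count_B = count_AB = 0
for _ in range(N):
    d1 = random.randint(1, 6)  # roll of the first die
    d2 = random.randint(1, 6)  # roll of the second die
    A = d1 == 6                # event A: first die shows a 6
    B = d2 % 2 == 0            # event B: second die is even
    count_A += A
    count_B += B
    count_AB += A and B

p_A = count_A / N    # should be close to 1/6
p_B = count_B / N    # should be close to 1/2
p_AB = count_AB / N  # should be close to 1/6 * 1/2 = 1/12
print(f"P(A)P(B) = {p_A * p_B:.4f}, P(A and B) = {p_AB:.4f}")
```

The two printed numbers agree to within sampling error, which is what independence predicts; of course, agreement in a simulation only supports the modelling assumption, it does not establish it.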
The idea that the behaviour of the second roll is the same regardless of the first is not a mathematical statement. Mathematics only works out the implications of such an assumption for the stochastic model.
In the coupon collector's problem, by contrast, we have a purely mathematical situation. Here we can make a precise argument showing that the process of finding the third coupon is essentially the same regardless of whether the second one was found after 1, 2, 3, or $n$ attempts.
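A sketch of that argument, with $n$ coupon types, i.i.d. uniform draws, and $X_i$ denoting the number of draws needed to obtain the $i$-th distinct coupon once $i-1$ are in hand: each draw either repeats an already-collected coupon or not, independently of the past, so the joint probability factorizes term by term,
$$P(X_2 = j,\ X_3 = k) = \left(\frac{1}{n}\right)^{j-1}\frac{n-1}{n}\cdot\left(\frac{2}{n}\right)^{k-1}\frac{n-2}{n} = P(X_2 = j)\,P(X_3 = k).$$
The factorization holds because the draws made after the second distinct coupon appears form a fresh i.i.d. sequence, whatever the value of $X_2$; the same computation works for any pair (or collection) of the waiting times.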