My book's definition of independence of random variables is:
$X_1,X_2,\dots,X_n$ are independent if, for all Borel sets $B_1,\dots,B_n$, $$P(X_1\in B_1,X_2\in B_2,\dots,X_n\in B_n)=\prod_{i=1}^n P(X_i \in B_i).$$
While reading about interarrival times in a Poisson process, I found a lecture note that, in proving that the interarrivals are independent and exponentially distributed, uses the following statement (as I understand it):

If $X,Y$ are random variables and $P(X\in B\mid Y=t_i)$ does not depend on $t_i$, for all $t_i$ in the range of $Y$, then $X$ and $Y$ are independent.
I think it makes sense, but I can't see why it is true.
In case someone wants to take a look at the lecture note, it is here (page 23).
Does it make sense? Any help is highly appreciated. Regards
Of course it makes sense, but I'm not sure what level of proof you are looking for. At a rigorous, formal level there are some technical challenges: to start with, the conditional probability $P(A \mid Z)$ needs a careful definition when $P(Z)=0$, because the traditional definition $P(A\mid Z) = P(A \cap Z)/P(Z)$ cannot be used. See, for example: 1 2 3. In spite of this, it's quite usual and natural to condition on events of zero probability.
If we are ok with an informal approach, assuming "nice" density functions, then it's not difficult: letting $B$ and $C$ be two arbitrary Borel sets, and $\{C_i\}$ be a "fine" partition of $C$:
$$P(X \in B,\ Y \in C) = \sum_i P(X \in B,\ Y \in C_i) =\sum_i P(X \in B \mid Y \in C_i)\, P( Y \in C_i)$$
The assumption that $P(X\in B \mid Y=t_i)=g(B)$ (i.e., it does not depend on $t_i$) gives $P(X \in B \mid Y \in C_i)\to g(B)$ as the partition is refined [*]. Hence the sum above can be factored: $$P(X \in B,\ Y \in C) = g(B) \sum_i P( Y \in C_i) = g(B)\, P(Y \in C)$$
By considering the case $C=\mathbb{R}$ (the whole real line, so that $P(Y\in C)=1$), we get $g(B)=P(X\in B)$, and hence $X,Y$ are independent.
[*] This is the plausible but non-rigorous step.
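The criterion is also easy to sanity-check numerically. Here is a minimal Monte Carlo sketch (the rate, bins, and threshold are my own illustrative choices, not from the lecture note): sample independent Exp(1) pairs $(X,Y)$, as for unit-rate Poisson interarrivals, and check that the estimate of $P(X>1 \mid Y \in C_i)$ is roughly the same for every bin $C_i$, namely $P(X>1)=e^{-1}\approx 0.368$.

```python
import math
import random

random.seed(1)

# Illustrative check of the criterion: if P(X in B | Y in C_i) is
# (approximately) the same for every small bin C_i of Y, the common
# value must be the unconditional probability P(X in B).
N = 400_000
pairs = [(random.expovariate(1.0), random.expovariate(1.0)) for _ in range(N)]

def cond_prob(pairs, b, lo, hi):
    """Estimate P(X > b | Y in [lo, hi)) from the sampled pairs."""
    hits = total = 0
    for x, y in pairs:
        if lo <= y < hi:
            total += 1
            hits += x > b
    return hits / total

# For Exp(1), P(X > 1) = e^{-1} ~ 0.368; the conditional estimates
# should hover near this value regardless of which Y-bin we condition on.
for lo in (0.0, 0.5, 1.0, 2.0):
    print(lo, round(cond_prob(pairs, 1.0, lo, lo + 0.5), 3))
```

Repeating the same experiment with a dependent pair (say $X = Y + $ noise) makes the conditional estimates drift with the bin, which is exactly how the criterion detects dependence.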