Say $X,Y,Z$ are independent with $X,Y,Z \sim U[0,100]$, and define events $A$ and $B$ by $A =\{|X-Y|\leq 10\}$ and $B=\{|X-Z| \leq 10\}$. My professor says that these two events must be dependent because $X$ appears in both. But I want to understand why that is for this example specifically. Instinctively, $X$ and $Y$ can be "close" anywhere on the interval (and are equally likely to be anywhere on it), so I don't see why $X$ and $Y$ being close should affect the likelihood that $X$ and $Z$ are close as well. However, in doing simulations I do indeed observe a slight dependence.
Is the reason that there isn't symmetry for either event at the endpoints? We would expect $A$ and $B$ to be less likely to occur (in repeated samples) when $X$ is close to 0 or 100, because there are no observations less than 0 or greater than 100. In case that's confusing: $|X-Y|\leq 10 \iff X-10 \leq Y \leq X+10$, and near the endpoints of the interval one of these inequalities is more likely to hold than the other. So for $P(A\mid B)$, you are looking at a somewhat "skewed" sample of $X$: in a simulation, the set of $X$ draws for which $B$ happens wouldn't be uniform no matter how large $n$ is. This is the only explanation I can think of.
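For reference, here is a minimal sketch of the kind of simulation I ran (in Python with NumPy; the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000_000

# independent draws from U[0, 100]
X = rng.uniform(0, 100, n)
Y = rng.uniform(0, 100, n)
Z = rng.uniform(0, 100, n)

A = np.abs(X - Y) <= 10
B = np.abs(X - Z) <= 10

pA, pB = A.mean(), B.mean()
pAB = (A & B).mean()

print(f"P(A)       = {pA:.4f}")       # ≈ 0.19
print(f"P(A)P(B)   = {pA * pB:.4f}")  # ≈ 0.0361
print(f"P(A and B) = {pAB:.4f}")      # ≈ 0.0367, consistently slightly larger
```

The joint probability comes out slightly above the product every time, which is the dependence I mean.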
A different post says "If the range of one random variable varies according to the values of other random variables, then they are not independent", which makes sense (and please feel free to point me towards related discussion in statistics books), but again I want an explanation specific to these events.
(Sorry if this is basically a duplicate; I couldn't find one after searching.)
You are correct that it is due to issues at the end-points, and that the effect is small. You should be able to see that $A$ and $B$ are conditionally independent given $X=x$ for any $x$ in the support of $X$. One approach to finding the probabilities is to say
$$\mathbb P(A \mid X=x)=\mathbb P(B \mid X=x)=\frac{\min(x+10,100)-\max(x-10,0)}{100}$$
and, by conditional independence,
$$\mathbb P(A \cap B \mid X=x)=\left(\frac{\min(x+10,100)-\max(x-10,0)}{100}\right)^{2}.$$
Integrate these over the uniform distribution of $X$ and you get $\mathbb P(|X-Y|\leq 10)=\mathbb P(|X-Z|\leq 10) =\frac{19}{100}$ and $\mathbb P(|X-Y|\leq 10,|X-Z|\leq 10) =\frac{11}{300} \approx 0.036667$, which is slightly more than $\mathbb P(|X-Y|\leq 10)\,\mathbb P(|X-Z|\leq 10) =\left(\frac{19}{100}\right)^2= 0.0361$. So the events are not unconditionally independent, and the dependence is slightly positive.
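You can verify these fractions exactly in rational arithmetic rather than by Monte Carlo. A sketch: composite Simpson's rule with unit step is exact here, because the integrand is piecewise quadratic and its breakpoints ($x=10$ and $x=90$) land on panel endpoints.

```python
from fractions import Fraction as F

def p(x):
    # P(A | X = x): length of [x-10, x+10] ∩ [0, 100], divided by 100
    lo = max(x - 10, F(0))
    hi = min(x + 10, F(100))
    return (hi - lo) / 100

def simpson(f, a, b, n):
    # composite Simpson's rule in exact rational arithmetic;
    # exact for piecewise quadratics whose breakpoints fall on panel endpoints
    h = F(b - a, n)
    s = f(F(a)) + f(F(b))
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(F(a) + k * h)
    return s * h / 3

# integrate against the U[0,100] density 1/100
pA  = simpson(p, 0, 100, 100) / 100
pAB = simpson(lambda x: p(x) ** 2, 0, 100, 100) / 100

print(pA)       # 19/100
print(pAB)      # 11/300
print(pA * pA)  # 361/10000, which is less than 11/300
```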
One way of trying to explain this might be to say that the quadratic mean (root mean square, if you prefer) is always greater than the arithmetic mean unless all the values are equal, and in this case the conditional probabilities near the end-points are less than those in the middle. But this is not specific to a uniform distribution: you would get a similar effect with most other distributions, so you could say the effect is due to the presence of a random $X$ in the definition of both events.
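To illustrate that last point, the same slight positive dependence appears if you swap the uniform for, say, a normal distribution (a Monte Carlo sketch; the mean, standard deviation, sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000_000

# same events, but with normal rather than uniform draws
X = rng.normal(50, 20, n)
Y = rng.normal(50, 20, n)
Z = rng.normal(50, 20, n)

A = np.abs(X - Y) <= 10
B = np.abs(X - Z) <= 10

pA, pAB = A.mean(), (A & B).mean()
print(f"P(A)^2     = {pA * pA:.4f}")
print(f"P(A and B) = {pAB:.4f}")  # again slightly larger than P(A)^2
```

Here $\mathbb P(A \mid X=x)$ still varies with $x$ (it peaks near the centre of the distribution), so $\mathbb E\!\left[p(X)^2\right] > \left(\mathbb E[p(X)]\right)^2$ for exactly the same reason.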