Independence of sums of uniform random variables modulo 1


Let $U,V,W$ be independent $U(0,1)$ random variables. I want to show that \begin{align*} (U+V) \,\text{mod}\,1\quad\text{and}\quad (U+W) \,\text{mod}\,1 \end{align*} are independent random variables, where $x \,\text{mod}\, 1$ denotes the fractional part of $x$, i.e. the remainder when dividing by $1$. I have three suggestions that I am not sure about:

  1. Intuitive method. It is easy to see that the distribution of $(U+W) \,\text{mod}\, 1$ is $U(0,1)$. Then, due to symmetry, the value of $(U+W)\,\text{mod}\, 1$ contains no information about the value of $U$, so it contains no information about $(U+V)\,\text{mod}\, 1$, so they are independent.

  2. I feel like there should be a nice geometric interpretation of this result because of the way $(U+V)\,\text{mod}\,1$ wraps around 1, resulting in it having a $U(0,1)$ distribution.

  3. Brute force method. Condition on the random variable $U$: \begin{align*} & P((U+V) \,\text{mod}\,1\in[a,b],\ (U+W) \,\text{mod}\,1\in[c,d])\\ = & \int_{u=0}^1P((u+V) \,\text{mod}\,1\in[a,b],\ (u+W) \,\text{mod}\,1\in[c,d])\,du\\ = & \int_{u=0}^1P((u+V) \,\text{mod}\,1\in[a,b])\,P((u+W) \,\text{mod}\,1\in[c,d])\,du\quad [V\text{ and } W\text{ independent}]\\ = & \int_{u=0}^1 (b-a)(d-c)\,du\\ = & (b-a)(d-c)\\ = & P((U+V)\,\text{mod}\, 1\in[a,b])\,P((U+W)\,\text{mod}\, 1\in[c,d]). \end{align*} The third line uses the fact that for fixed $u$, the event $(u+V)\,\text{mod}\,1\in[a,b]$ means $V\in([a-u,b-u]\cup[1+a-u,1+b-u])\cap[0,1]$, a set of total length $b-a$; in particular $(u+V)\,\text{mod}\,1\sim U(0,1)$ for every $u$. Since the joint probability factors for all intervals, $(U+V)\,\text{mod}\, 1$ and $(U+W)\,\text{mod}\, 1$ are independent.
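Though it proves nothing, the claimed uniformity and independence are easy to sanity-check by simulation. A minimal sketch, assuming numpy is available; the test intervals $[a,b]=[0.2,0.7]$ and $[c,d]=[0.4,0.9]$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
U, V, W = rng.random((3, n))
X = (U + V) % 1.0  # (U+V) mod 1
Y = (U + W) % 1.0  # (U+W) mod 1

# Uniformity of X: each of 10 equal bins should hold about 10% of the mass.
counts, _ = np.histogram(X, bins=10, range=(0.0, 1.0))
print(counts / n)

# Independence on arbitrary test intervals [a,b] and [c,d]:
a, b, c, d = 0.2, 0.7, 0.4, 0.9
in_X = (a <= X) & (X <= b)
in_Y = (c <= Y) & (Y <= d)
joint = np.mean(in_X & in_Y)             # estimates P(X in [a,b], Y in [c,d])
product = np.mean(in_X) * np.mean(in_Y)  # product of marginal estimates
print(joint, product, (b - a) * (d - c))  # all three should nearly agree
```

If the factorization above is right, `joint`, `product`, and $(b-a)(d-c)$ should agree up to Monte Carlo error of a few parts in a thousand.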

Accepted answer:
  1. The intuitive method is okay. $(U+V)\bmod 1$ and $(U+W)\bmod 1$ are the uniformly distributed variables $V$ and $W$, shifted by the same uniformly distributed variable $U$ and then reduced modulo $1$ (note that $1$ is also the length of the support of each variable).

  2. The geometric interpretation is easy if you can imagine wrapping the plane around a torus: reducing modulo $1$ identifies opposite edges of the unit square, and adding $U$ merely rotates the torus, which leaves the uniform measure unchanged.

  3. For $0\leq \nu, \omega\leq 1$ we just need to consider the joint and marginal CDFs: if the joint CDF factors as the product of the marginals, independence for arbitrary intervals follows.

The key fact is that shifting an interval within the support and reducing it modulo $1$ preserves its total length, and because $U$, $V$, and $W$ are uniformly distributed, the resulting probability does not depend on the shift.

$$\begin{align}\mathsf P(U+V\bmod 1\leqslant\nu) &= {{\mathbf 1_{\nu\in[0;1]} \cdot\int_0^1\mathsf P(u+V\in[0;\nu]\cup[1;1+\nu])f_U(u)\mathop{d}u} + {\mathbf 1_{\nu\in(1;\infty)}}} \\[1ex]&= {{\mathbf 1_{\nu\in[0;1]} \cdot\int_0^1\mathsf P(V\in[0;\max(0,\nu-u)]\cup[1-u;\min(1,1+\nu-u)])f_U(u)\mathop{d}u} + {\mathbf 1_{\nu\in(1;\infty)}}} \\[1ex] &=~\nu\cdot\mathbf 1_{\nu\in[0;1]}+ \mathbf 1_{\nu\in(1;\infty)} \\[2ex] \mathsf P(U+W\bmod 1\leqslant \omega) ~&=~\omega\cdot\mathbf 1_{\omega\in[0;1]}+\mathbf 1_{\omega\in(1;\infty)} \\[2ex] \mathsf P(U+V\bmod 1\leqslant \nu, U+W\bmod 1\leqslant \omega)~&=~ {{\nu\omega\cdot\mathbf 1_{\nu\in[0;1],\omega\in[0;1]}}+{\nu\cdot\mathbf 1_{\nu\in[0;1], \omega\in(1;\infty)}}+{\omega\cdot\mathbf 1_{\nu\in(1;\infty), \omega\in[0;1]}}+{\mathbf 1_{\nu\in(1;\infty),\omega\in(1;\infty)}}}\end{align}$$

Since the joint CDF equals the product of the marginal CDFs for every $\nu$ and $\omega$, independence follows.
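The CDF identity $\mathsf P(X\leqslant\nu, Y\leqslant\omega)=\nu\omega$ on $[0,1]^2$ can also be checked empirically on a small grid. A Monte Carlo sketch, assuming numpy; the grid points are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
U, V, W = rng.random((3, n))
X = (U + V) % 1.0  # (U+V) mod 1
Y = (U + W) % 1.0  # (U+W) mod 1

# The joint CDF P(X <= nu, Y <= om) should equal nu*om for nu, om in [0,1].
grid = [0.25, 0.5, 0.8]
max_err = max(
    abs(np.mean((X <= nu) & (Y <= om)) - nu * om)
    for nu in grid
    for om in grid
)
print(max_err)  # Monte Carlo error only; should be on the order of 1e-3
```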