This is a problem about showing that $R$ and $\Psi$ are independent random variables.
Let $K = \{x=(x_1, x_2)\in \mathbb{R}^2: |x|\leq 1\}$ be the closed unit disc. Let $Z=(Z_1,Z_2)$ be a $K$-valued random variable on an arbitrary probability space $(\Omega, \mathcal{F}, P)$ whose distribution is the uniform distribution $U_{K}$ on $K$.
Let $R=|Z|=\sqrt{Z_1^2+Z_2^2}$ and $\Psi=\arg(Z_1+iZ_2)\in [0,2\pi]$ be the polar coordinates of $Z$. Then we have, for any $0\leq r\leq 1$ and $0\leq \psi\leq 2\pi$, $$P(R\leq r, \Psi\leq \psi) = \frac{\pi r^2\,\psi}{\pi \cdot 2\pi} = P(R\leq r)\, P(\Psi\leq \psi).$$
I would like to confirm the meaning of $Z$ being a '$K$-valued' random variable. Does it just mean that $Z:\Omega\rightarrow K$?
I am not very sure about the equality $P(R\leq r, \Psi\leq \psi) = \frac{\pi r^2\,\psi}{\pi \cdot 2\pi}$. I know that $P(R\leq r, \Psi\leq \psi) = \frac{|\{R\leq r\} \cap \{\Psi\leq \psi\}|}{|\Omega|}$, but I find myself implicitly assuming that the two events are independent. For example, for the numerator I am assuming a multiplicative relationship like $|\{x : |x|\leq r\}| \cdot |[0,\psi]| = (\pi r^2)\cdot\psi$.
So how should I write down $P(R\leq r, \Psi\leq \psi)$ without assuming independence? Or is this the natural way to think about a probability measure with a uniform distribution?
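(To convince myself numerically, here is a minimal Monte Carlo sketch, assuming numpy; it only compares the empirical joint CDF with the product of the empirical marginals at a few points, so it is a sanity check rather than a proof.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample uniformly from the unit disc K by rejection from the square [-1, 1]^2.
pts = rng.uniform(-1.0, 1.0, size=(1_000_000, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]

R = np.hypot(pts[:, 0], pts[:, 1])                          # R = |Z| in [0, 1]
Psi = np.mod(np.arctan2(pts[:, 1], pts[:, 0]), 2 * np.pi)   # Psi = arg(Z) in [0, 2*pi)

# Compare the empirical joint CDF, the product of empirical marginals,
# and the claimed closed form r^2 * psi / (2*pi) at a few test points.
for r, psi in [(0.3, 1.0), (0.5, np.pi), (0.9, 5.0)]:
    joint = np.mean((R <= r) & (Psi <= psi))
    product = np.mean(R <= r) * np.mean(Psi <= psi)
    exact = r**2 * psi / (2 * np.pi)
    print(f"r={r}, psi={psi:.2f}: joint={joint:.4f}  product={product:.4f}  exact={exact:.4f}")
```

The three numbers agree up to Monte Carlo noise, which at least matches the claimed factorization.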
Yes, that is exactly what it means: a $K$-valued random variable is just a (measurable) map $Z:\Omega\rightarrow K$.
Since the distribution of $Z$ is uniform on $K$, for any measurable $A \subset K$ we have $P(Z \in A) = \frac{m(A)}{m(K)}$, where $m$ denotes Lebesgue measure on $\mathbb{R}^2$; in particular $m(K) = \pi$. It is not too hard to see that $m\{x \in K : |x| \le r,\ \arg x \le \psi\} = \pi r^2 \cdot \frac{\psi}{2 \pi}$ (with $r \in [0,1]$ and $\psi \in [0, 2 \pi]$, of course): this set is the circular sector of radius $r$ and opening angle $\psi$. Since $m\{x \in K : |x| \le r\} = \pi r^2$ and $m\{x \in K : \arg x \le \psi\} = \pi\cdot\frac{\psi}{2 \pi}$, we get $$P(R\le r,\ \Psi\le\psi) = \frac{\pi r^2\cdot\frac{\psi}{2\pi}}{\pi} = \frac{\pi r^2}{\pi}\cdot\frac{\pi\,\frac{\psi}{2\pi}}{\pi} = P(R\le r)\,P(\Psi\le\psi),$$ which is exactly the independence relationship.
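For completeness, the sector area in the middle step can be computed directly in polar coordinates (a routine change of variables, spelled out here only for clarity):
$$m\{x \in K : |x| \le r,\ \arg x \le \psi\} = \int_{0}^{\psi}\!\!\int_{0}^{r} \rho \,\mathrm{d}\rho \,\mathrm{d}\theta = \frac{r^{2}}{2}\,\psi = \pi r^{2}\cdot\frac{\psi}{2\pi}.$$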