I know that given these two random variables (which correspond to the $x$ and $y$ coordinates of a random walk after $n$ steps), their joint probability density function can be *approximated* by a normal density of two independent variables.
$$ X_n = \sum_{k=1}^n \cos(\theta_k) $$
$$ Y_n = \sum_{k=1}^n \sin(\theta_k) $$
But I'm not sure why you can do this. These aren't independent random variables, as far as I understand. Why can you approximate their joint pdf by a normal density of two independent variables?
Obviously, it only works for large $n$. Heuristically, the explanation is that although $X_n$ and $Y_n$ are not actually independent, after a large number of steps $n$ the information about the individual horizontal steps $\cos(\theta_k)$ that make up $X_n$ is essentially washed out, so knowing $X_n$ tells you little about the distribution of $Y_n$, except that $n$ and $X_n$ together bound $Y_n$ (and that constraint is already captured, approximately, by the joint normal distribution).
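To make the heuristic concrete, you can check that the two sums are exactly *uncorrelated*. Assuming the $\theta_k$ are i.i.d. uniform on $[0, 2\pi)$ (the question does not state the step distribution, so this is an assumption, though it is the usual setup), $\mathbb{E}[\cos\theta_k] = \mathbb{E}[\sin\theta_k] = 0$, so the cross terms vanish and

$$ \operatorname{Cov}(X_n, Y_n) = \sum_{k=1}^n \mathbb{E}[\cos(\theta_k)\sin(\theta_k)] = \frac{n}{2}\,\mathbb{E}[\sin(2\theta_1)] = 0, $$

$$ \operatorname{Var}(X_n) = \sum_{k=1}^n \mathbb{E}[\cos^2(\theta_k)] = \frac{n}{2}, \qquad \operatorname{Var}(Y_n) = \frac{n}{2}. $$

The multivariate central limit theorem, applied to the i.i.d. vectors $(\cos\theta_k, \sin\theta_k)$, then says $(X_n, Y_n)$ is approximately $N\!\left(0, \tfrac{n}{2} I_2\right)$, and a bivariate normal with zero correlation factors into the product of two independent normal densities. So asymptotically, uncorrelated is enough.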
Further, suppose you knew all of the horizontal components $\cos(\theta_k)$. You would still not know the signs of the vertical components, since $\sin(\theta_k) = \pm\sqrt{1 - \cos^2(\theta_k)}$. A suitably general form of the central limit theorem then applies, and even this conditional distribution of $Y_n$ is essentially normal for large $n$.
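If you want to see the approximation numerically, here is a short Monte Carlo sketch. The step count, number of walks, seed, and the uniform distribution of the $\theta_k$ are my choices for illustration, not given in the question:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility
n_steps = 1000     # steps n in each walk
n_walks = 50_000   # number of independent walks simulated

# theta[i, k] is the angle of step k in walk i,
# assumed i.i.d. uniform on [0, 2*pi).
theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_walks, n_steps))
X = np.cos(theta).sum(axis=1)  # horizontal endpoint X_n of each walk
Y = np.sin(theta).sum(axis=1)  # vertical endpoint Y_n of each walk

# The bivariate CLT predicts mean 0, variance n/2 for each coordinate,
# and zero correlation between them.
print("sample means:      ", X.mean(), Y.mean())
print("sample variances:  ", X.var(), Y.var(), "(predicted:", n_steps / 2, ")")
print("sample correlation:", np.corrcoef(X, Y)[0, 1], "(predicted: 0)")
```

The sample variances come out near $n/2 = 500$ and the sample correlation near $0$, consistent with the $N\!\left(0, \tfrac{n}{2} I_2\right)$ approximation. (Zero correlation alone does not prove independence in general, but it does for a jointly normal limit.)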