Many papers on locality-sensitive hashing, sketching, and related topics use the following lemma:
If $r\in\mathbb{R}^d$ is a random vector whose entries are independently distributed as $\mathcal{N}(0,1)$, and $x,y\in\mathbb{R}^d$ are arbitrary nonzero vectors, then $\Pr[\operatorname{sign}(r\cdot x)=\operatorname{sign}(r\cdot y)]=1-\frac{\theta}{\pi}$, where $\theta=\arccos\left(\frac{x\cdot y}{\|x\|\,\|y\|}\right)$ is the angle between $x$ and $y$.
It's not too hard to see why this is correct. It is proved, for example, in *Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming* by Goemans and Williamson, via a simple geometric argument: $r$ is the normal of a random hyperplane through the origin, uniformly distributed with respect to rotation, and the two signs agree exactly when $x$ and $y$ lie on the same side of that hyperplane.
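As a sanity check, the lemma is easy to verify numerically. Here is a small Monte Carlo sketch (assuming NumPy; the particular vectors `x` and `y` are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary unit vectors in R^3 with x . y = 0.6
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.6, 0.8, 0.0])

# Draw many Gaussian vectors r and check how often the signs agree
n = 200_000
r = rng.standard_normal((n, 3))
est = np.mean(np.sign(r @ x) == np.sign(r @ y))

# Prediction: 1 - theta/pi, with theta the angle between x and y
theta = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
pred = 1 - theta / np.pi
```

With these vectors the predicted agreement probability is $1-\arccos(0.6)/\pi\approx 0.705$, and the empirical estimate matches to within Monte Carlo error.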
What I'm wondering is whether there is a non-geometric proof. I'm interested in this because I'd like to know what happens if we use another $p$-stable distribution instead of the Gaussian, e.g. a Cauchy distribution. Such cases don't seem to have as obvious a geometric interpretation.
Is there a way to prove the lemma just using properties of the normal distribution?