I recently came across this interesting fact:
- Draw (pseudo)random numbers uniformly between $0$ and $1$.
- Sum them one at a time, and count how many are needed for the sum to exceed $1$.
- If you repeat the experiment $n$ times, you will see that, as $n$ grows, the average count approaches $\mathrm{e}$ (Euler's number)!
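The experiment above is easy to try yourself. Here is a quick simulation sketch in Python (function names are my own, just for illustration): it repeatedly draws uniform numbers until their running sum exceeds $1$, and averages the counts over many trials.

```python
import math
import random

def count_until_exceeds_one():
    """Draw uniform(0,1) numbers until their running sum exceeds 1;
    return how many draws were needed."""
    total = 0.0
    count = 0
    while total <= 1.0:
        total += random.random()
        count += 1
    return count

def average_count(trials=1_000_000, seed=0):
    """Average the count over many independent repetitions."""
    random.seed(seed)
    return sum(count_until_exceeds_one() for _ in range(trials)) / trials

print(average_count())  # should be close to math.e ≈ 2.71828
```

With a million trials the average typically lands within a few thousandths of $\mathrm{e}$, which is what prompted the question.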
My mind is blown! But why? How does this work?