simulate coin tossing by die tossing


On the one hand, we toss a fair coin $n$ times and sum the outcomes (+1 for heads, 0 for tails). Let $f:\mathbb{N}\to\mathbb{R}$ denote the probability distribution of this sum.

On the other hand, we toss a fair $k$-sided die $m$ times and sum the outcomes (each toss yields a value in $\{0,1,\dots,k-1\}$). Let $g:\mathbb{N}\to\mathbb{R}$ denote the probability distribution of this sum.

What can we say about the total variation distance between the two experiments (i.e., half the $\ell_1$-distance between $f$ and $g$) as a function of $n$, $m$ and $k$?
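For small parameters, both distributions and their distance can be computed exactly. The following sketch (function names are mine, not from the question) builds the two PMFs by repeated convolution and evaluates the TV distance as half the $\ell_1$-distance:

```python
def convolve(p, q):
    """Exact convolution of two finitely supported PMFs given as lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def coin_sum_pmf(n):
    """PMF f of the sum of n fair coin tosses (+1 heads, 0 tails): Binomial(n, 1/2)."""
    pmf = [1.0]
    for _ in range(n):
        pmf = convolve(pmf, [0.5, 0.5])
    return pmf

def die_sum_pmf(m, k):
    """PMF g of the sum of m tosses of a fair k-sided die with faces 0..k-1."""
    pmf = [1.0]
    for _ in range(m):
        pmf = convolve(pmf, [1.0 / k] * k)
    return pmf

def tv_distance(f, g):
    """Total variation distance: half the l1-distance between the PMFs."""
    L = max(len(f), len(g))
    f = f + [0.0] * (L - len(f))
    g = g + [0.0] * (L - len(g))
    return 0.5 * sum(abs(a - b) for a, b in zip(f, g))
```

For instance, `tv_distance(coin_sum_pmf(n), die_sum_pmf(m, k))` computes the quantity in question exactly, which is how one can run the simulations mentioned below.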

UPDATE: I am interested in this question because simulations indicate that throwing a die with $O(\sqrt{n/m})$ sides $m$ times brings $f$ and $g$ to within $1/\sqrt{m}$ of each other in TV distance (for $n\gg m$).

It is easy to find $m$ and $k$ such that $f$ and $g$ converge weakly. Moreover, the Berry-Esseen theorem shows that the cumulative distribution functions converge pointwise at rate $O(1/\sqrt{m})$ if we choose $k\in O(\sqrt{n/m})$. This, however, is not sufficient to prove anything about the TV distance.
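One way to see why Berry-Esseen alone cannot suffice: it controls the Kolmogorov distance $\sup_x|F(x)-G(x)|$ between the CDFs, which is always at most the TV distance, so closeness of CDFs is the weaker statement. A self-contained numerical sketch (the parameter choices $n=12$, $m=4$, $k=4$ are mine, picked so that the two sums have equal means):

```python
def conv(p, q):
    """Exact convolution of two finitely supported PMFs given as lists."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def iid_sum_pmf(step, times):
    """PMF of the sum of `times` i.i.d. draws with single-draw PMF `step`."""
    pmf = [1.0]
    for _ in range(times):
        pmf = conv(pmf, step)
    return pmf

# n = 12 coin tosses vs m = 4 tosses of a k = 4 sided die (faces 0..3);
# both sums have mean 6 (hypothetical illustrative parameters).
n, m, k = 12, 4, 4
f = iid_sum_pmf([0.5, 0.5], n)       # coin-sum PMF
g = iid_sum_pmf([1.0 / k] * k, m)    # die-sum PMF
L = max(len(f), len(g))
f = f + [0.0] * (L - len(f))
g = g + [0.0] * (L - len(g))

# TV distance: half the l1-distance between the PMFs.
tv = 0.5 * sum(abs(a - b) for a, b in zip(f, g))

# Kolmogorov distance: sup over x of |F(x) - G(x)| for the CDFs.
F = G = 0.0
ks = 0.0
for a, b in zip(f, g):
    F += a
    G += b
    ks = max(ks, abs(F - G))
```

Here `ks` is provably bounded by `tv` but can be much smaller, which is why a Berry-Esseen bound on the CDFs does not transfer to a TV bound.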

I have also tried to work with local limit theorems, which show that the probability mass functions converge pointwise at rate $O(1/m)$ if $k\in O(\sqrt{n/m})$. But since the support of the distributions will be of size $\gg \sqrt{R}$, this also seems insufficient to bound the TV distance.

Any other ideas?