We know that for a continuous random variable $X$ with density $f$, $$Var(X) \geq \frac{e^{2h(f)}}{2\pi e},$$ where $h(f)$ is the differential entropy. This follows from Eq. (8.80), Theorem 8.6.6, of Cover & Thomas, *Elements of Information Theory*.
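As a quick sanity check (my own Python sketch, not from the book): for a Gaussian we have $h = \frac{1}{2}\ln(2\pi e \sigma^2)$, so the bound holds with equality:

```python
import math

def gaussian_entropy(var):
    # differential entropy (in nats) of a N(mu, var) density:
    # h = 0.5 * ln(2 * pi * e * var)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def entropy_lower_bound(h):
    # right-hand side of the bound Var(X) >= e^{2h} / (2 * pi * e)
    return math.exp(2 * h) / (2 * math.pi * math.e)

var = 3.7
# for a Gaussian the lower bound equals the variance exactly
print(entropy_lower_bound(gaussian_entropy(var)))
```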
Can we get a similar lower bound for a discrete random variable? Say $X\in\{-N,\cdots,-1,0,1,\cdots,N\}$ for finite $N$, with some probability mass function $p(\cdot)$.
We can't apply the argument of Eq. (8.80) here, since for discrete random variables the maximum-entropy distribution on a fixed finite support is the uniform one! We know that $Var(X)=0$ if and only if $X$ is degenerate, so I think a lower bound should exist in this case as well, but I am not able to come up with anything. :(
Thanks for any help in advance!
Edit 1: From here, there does not appear to be a known closed-form discrete distribution that maximizes entropy when the variance is fixed.

I think you have two paths:
If you go the variance path (so that Eqs. (8.77)–(8.79) remain untouched), you get the "discrete Normal distribution", which is the maximum-entropy discrete distribution given mean and variance. This is essentially the same as in the continuous case: the pmf (assuming zero mean) has the form $$p(x) = a \exp \left(- \frac{x^2}{2 \sigma^2}\right)$$ where $\sigma^2$ is not the variance but is related to it (and in most cases is near it), and $a$ is a normalization constant. Unfortunately, the exact values of the parameters (their relationship with the mean and variance), and also the resulting entropy, have no simple closed form. For details see Szabłowski, P. J., "Discrete Normal distribution and its relationship with Jacobi Theta functions", Statistics & Probability Letters, 52(3), 289–299 (2001), and references therein.
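To see this numerically, here is a small Python sketch (my own illustration, not from the paper). It builds the pmf $p(x) \propto \exp(-x^2/2\sigma^2)$ on a large truncated integer range (the truncation point is an arbitrary choice, fine as long as it is many standard deviations out) and computes the resulting variance, which is close to, but not defined to be, $\sigma^2$:

```python
import math

def discrete_normal_pmf(sigma2, N=200):
    # pmf proportional to exp(-x^2 / (2 * sigma2)) on {-N, ..., N};
    # the true support is all of Z, but the tail beyond N is negligible
    # when sigma2 << N^2
    xs = range(-N, N + 1)
    w = [math.exp(-x * x / (2 * sigma2)) for x in xs]
    Z = sum(w)  # normalization constant (1/a in the notation above)
    return {x: wi / Z for x, wi in zip(xs, w)}

def variance(pmf):
    mean = sum(x * p for x, p in pmf.items())
    return sum((x - mean) ** 2 * p for x, p in pmf.items())

pmf = discrete_normal_pmf(4.0)
print(variance(pmf))  # close to the parameter sigma^2 = 4.0
```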
Alternatively, if instead of fixing the variance you fix the support of the discrete variable, so that $X\in\{-N,\cdots,-1,0,1,\cdots,N\}$, the natural analogy uses not the variance ($L_2$ norm) but the $L_{\infty}$ norm (maximum) (see e.g. here). Then, denoting $\| X\|_p = \left( E |X|^p \right)^{1/p}$, instead of Eqs. (8.77)ff. you'd have:
$$ \begin{align} || X - \hat X ||_\infty &\ge \min_{\hat X} || X - \hat X ||_\infty \\ &= N \\ & \ge \frac{2^{H(X)}-1}{2} \end{align} $$
where $H(X)$ is the Shannon entropy in bits, and we've used the property $H(X) \le \log_2 (2 N +1)$, i.e. $2^{H(X)} \le 2N+1$.
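A quick numerical check of the last inequality (a Python sketch in my own notation): for the uniform distribution on $\{-N,\cdots,N\}$ the bound $N \ge (2^{H(X)}-1)/2$ holds with equality, and for any other pmf on that support it is strict:

```python
import math

def entropy_bits(probs):
    # Shannon entropy H(X) in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bound(probs):
    # right-hand side of the inequality: (2^{H(X)} - 1) / 2
    return (2 ** entropy_bits(probs) - 1) / 2

N = 5
# uniform pmf on {-N, ..., N}: H = log2(2N + 1), so bound == N exactly
uniform = [1 / (2 * N + 1)] * (2 * N + 1)
# any non-uniform pmf has H < log2(2N + 1), so bound < N
skewed = [0.5] + [0.5 / (2 * N)] * (2 * N)
print(bound(uniform), bound(skewed))
```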