Asymptotic Gilbert-Varshamov Bound Using Hilbert's Entropy Formula


I am reading Walker's book Codes and Curves and am having trouble proving the following lemma regarding the asymptotic Gilbert-Varshamov bound.

Suppose that $q$ is a prime power and we define \begin{align*} V_q(n,r) &:= \sum\limits_{i=0}^r {n\choose i}(q-1)^i \end{align*}

We define the Hilbert entropy function as \begin{align*} H_q(x) &:= \cases{0, & $x = 0$\\ x\log_q(q-1)-x\log_q x - (1-x)\log_q(1-x), & $0 < x \leq 1-\frac{1}{q}$} \end{align*}

There is a lemma that states if $0\leq\lambda\leq 1-\frac{1}{q}$ then \begin{align*} \lim\limits_{n\to\infty}\frac{1}{n} \log_q V_q(n,\lfloor \lambda n\rfloor) &= H_q(\lambda) \end{align*}
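Before attempting a proof, I convinced myself numerically that the statement is plausible. Here is a small Python sketch (the helper names `V_q` and `H_q` are just mine, mirroring the notation above):

```python
from math import comb, floor, log

def V_q(q: int, n: int, r: int) -> int:
    """Volume of the Hamming ball of radius r in F_q^n: sum of C(n,i)(q-1)^i."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def H_q(q: int, x: float) -> float:
    """The entropy function H_q on [0, 1 - 1/q]."""
    if x == 0:
        return 0.0
    return x * log(q - 1, q) - x * log(x, q) - (1 - x) * log(1 - x, q)

q, lam = 4, 0.5  # any 0 <= lam <= 1 - 1/q = 0.75 works here
for n in (10, 100, 1000):
    print(n, log(V_q(q, n, floor(lam * n)), q) / n, H_q(q, lam))
# the middle value approaches H_q(q, lam) ≈ 0.896 as n grows
```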

Walker suggests using Stirling's approximation to get this limit. Here is what I have so far. First, I find that if $0<\lambda \leq 1-\frac{1}{q}$ then \begin{align*} H_q(\lambda) &= \lambda\log_q(q-1)-\lambda\log_q \lambda - (1-\lambda)\log_q(1-\lambda)\\ &= \log_q\left(\frac{(q-1)^\lambda}{\lambda^\lambda(1-\lambda)^{1-\lambda}}\right) \end{align*}

Then, I try to calculate $\lim\limits_{n\to\infty} \frac{1}{n}\log_q V_q(n,\lfloor \lambda n\rfloor)$. \begin{align*} \lim\limits_{n\to\infty} \frac{1}{n}\log_q V_q(n,\lfloor \lambda n\rfloor) &= \lim\limits_{n\to\infty} \log_q\left(\left(\sum\limits_{i=0}^{\lfloor \lambda n\rfloor} {n\choose i}(q-1)^i\right)^\frac{1}{n}\right)\\ &= \log_q\left(\lim\limits_{n\to\infty} \left(\sum\limits_{i=0}^{\lfloor \lambda n\rfloor} {n\choose i}(q-1)^i\right)^\frac{1}{n} \right) \end{align*} (Passing the limit inside the logarithm is justified by continuity of $\log_q$, provided the inner limit exists.)

Looking only at the terms inside the logarithm, I would like to show that \begin{align*} \lim\limits_{n\to\infty} \left(\sum\limits_{i=0}^{\lfloor \lambda n\rfloor} {n\choose i}(q-1)^i\right)^\frac{1}{n} &= \frac{(q-1)^\lambda}{\lambda^\lambda(1-\lambda)^{1-\lambda}} \end{align*}

Unfortunately, I get stuck here. This Stack Exchange post pointed me to this resource, which essentially shows the case $q=2$ in exercise 9.42. It looks easy to generalize to this problem using the provided method. However, I do not quite understand this crucial step:

If we let $m = \lfloor\lambda n\rfloor$ (and, presumably, $\alpha = m/n$), then we get that \begin{align*} {n\choose m}\sum\limits_{i=0}^m \left(\frac{\alpha}{1-\alpha}\right)^i \le {n\choose m}\frac{1-\alpha}{1-2\alpha} \end{align*} This step seems like it should follow from a geometric series, but I cannot get my calculations into exactly the provided form.
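The closest I get, assuming $\alpha < 1/2$ so that $r := \frac{\alpha}{1-\alpha} \in (0,1)$, is \begin{align*} \sum\limits_{i=0}^{m} r^{i} &= \frac{1-r^{m+1}}{1-r} \le \frac{1}{1-r} = \frac{1-\alpha}{1-2\alpha}, \end{align*} i.e., the finite geometric series is dominated by the infinite one, which would explain the bound. Is this the intended reading?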

2 Answers


Here I show that

$$\lim_{t\to \infty} \left(\sum\limits_{k=0}^{at} \frac{t^k}{k!} \right)^{1/t}= \left(\frac{e}{a}\right)^a $$
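As a quick sanity check (not a proof), the claimed limit can be observed numerically for $a < 1$; the script below is just a throwaway sketch of mine:

```python
from math import e

def partial_exp_sum(t: float, m: int) -> float:
    """Sum_{k=0}^{m} t^k / k!, accumulated term by term so k! never overflows."""
    term, total = 1.0, 1.0
    for k in range(1, m + 1):
        term *= t / k
        total += term
    return total

a = 0.3
for t in (10, 100, 1000):
    s = partial_exp_sum(t, int(a * t))
    print(t, s ** (1.0 / t), (e / a) ** a)
# s**(1/t) approaches (e/a)**a ≈ 1.937 as t grows
```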

Letting $t = n(q-1)$ and $a = \frac{\lambda}{q-1}$ (so $a \le \frac{1}{q} < 1$), and approximating ${n\choose i}$ by $\frac{n^i}{i!}$:

$$ \begin{align} \lim\limits_{n\to\infty}\left(\sum\limits_{i=0}^{\lambda n}\frac{n^i}{i!}(q-1)^i \right)^\frac{1}{n}&= \lim\limits_{t\to\infty}\left(\sum\limits_{i=0}^{at}\frac{t^i}{i!}\right)^\frac{q-1}{t}\\ &=\left(\frac{e}{a}\right)^{a(q-1)} \\ &= \left(\frac{q-1}{\lambda}\right)^\lambda e^\lambda \end{align} $$

This does not quite agree with your desired answer. Perhaps the discrepancy is due to an error in your penultimate equation, which looks wrong to me.
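To see the disagreement concretely, here is a rough numeric comparison (a sketch of mine; `entropy_form` names the expression from your question and `poisson_form` the one derived above):

```python
from math import comb, e, exp, log

q, lam, n = 2, 0.3, 5000
V = sum(comb(n, i) * (q - 1) ** i for i in range(int(lam * n) + 1))
direct = exp(log(V) / n)  # V_q(n, floor(lam*n)) ** (1/n), computed directly
entropy_form = (q - 1) ** lam / (lam ** lam * (1 - lam) ** (1 - lam))
poisson_form = ((q - 1) / lam) ** lam * e ** lam
print(direct, entropy_form, poisson_form)
# prints roughly 1.841, 1.842, 1.937: for these parameters the direct
# computation lands next to the entropy expression
```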


The trick here is to first upper- and lower-bound $V_q$ by, respectively, $n$ times and $1$ times the largest term in the sum, and then take $\log$s. The game then becomes controlling this largest term, which is much easier to handle. A key result needed for this is the following lemma, which can be shown using Stirling's approximation:

For any $k \in [1:n-1],$ $$ \frac{1}{n} \ln\binom{n}{k} = (1 + o_n(1)) h\left(\frac{k}{n}\right),$$ where $h(x) := -x\ln x - (1-x) \ln (1-x)$ is the binary entropy function.
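If it helps, the lemma is easy to observe numerically before proving it; a throwaway check:

```python
from math import comb, log

def h(x: float) -> float:
    """Binary entropy in nats: -x ln x - (1-x) ln(1-x)."""
    return -x * log(x) - (1 - x) * log(1 - x)

x = 0.3
for n in (100, 1000, 10000):
    k = int(x * n)
    print(n, log(comb(n, k)) / n, h(k / n))
# the gap between the two printed values shrinks like O(log(n)/n)
```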

You should take a pass at showing this, but see, for instance, this for both a proof and other nice asymptotics of the binomial coefficients. More precise, non-asymptotic statements are also straightforward to get. For instance, this also only uses Stirling's approximation.

Now, let $K:= \lfloor \lambda n \rfloor,$ and $$\varphi := \max_{i \in [1:K]} \binom{n}{i} (q-1)^i.$$ I'll consider the $\lambda > 0$ case, and work with $n$ large enough so that $K \ge 2.$ We have $$ \varphi \le V_q \le (K+1) \varphi \le n \varphi$$ (the $i = 0$ term is $1 \le \varphi$), which implies that $$\frac{1}{n} \ln V_q = \frac{1}{n} \ln \varphi + o_n(1).$$ At this point the argument is straightforward. I urge you to take a pass yourself before reading on.
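(If you want to see the sandwich numerically, here is a throwaway check of mine, not part of the argument:)

```python
from math import comb, floor, log

q, lam, n = 3, 0.5, 2000
K = floor(lam * n)
terms = [comb(n, i) * (q - 1) ** i for i in range(K + 1)]
V, phi = sum(terms), max(terms[1:])  # phi: max over i in [1:K], as above
print(log(V) / n, log(phi) / n)
# the two values differ by at most log(K + 1)/n ≈ 0.0035 here
```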


Next, it follows that \begin{align} \frac{1}{n} \ln \varphi &= \max_{i \in [1:K]} \left\{ \frac{1}{n} \ln \binom{n}{i} + \frac{i}{n} \ln (q-1) \right\} \\ &= (1 + o_n(1)) \max_{i \in [1:K]} \left\{ h(i/n) + (i/n)\ln (q-1) \right\}, \end{align} where the second line uses the quoted asymptotic equality.

Now note that, treated as a function of a real variable $x \in [0, 1-1/q]$, the quantity $$ \rho(x) := h(x) + x \ln(q-1)$$ is non-decreasing in $x$. Indeed, $$\rho'(x) = \ln(q-1) + \ln\left(\frac{1-x}{x}\right) \ge \ln(q-1) + \ln\left(\frac{1/q}{1-1/q}\right) = \ln(q-1) - \ln(q-1) = 0.$$ (Aside: the $H_q$ in your question is the same as $\rho/\ln q$.)
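(A tiny numeric check of the monotonicity claim, for a sample $q$ of my choosing:)

```python
from math import log

q = 5  # sample prime power; rho(x) = h(x) + x*ln(q-1), in nats
def rho(x: float) -> float:
    return -x * log(x) - (1 - x) * log(1 - x) + x * log(q - 1)

xs = [i / 1000 for i in range(1, int(1000 * (1 - 1 / q)) + 1)]
assert all(rho(a) <= rho(b) + 1e-12 for a, b in zip(xs, xs[1:]))  # non-decreasing
```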

This means the maximum is attained at $i = K$ (since $i/n \le K/n \le \lambda \le 1-\frac{1}{q}$ and $\rho$ is non-decreasing there), so $$\frac{1}{n} \ln \varphi = (1 + o_n(1)) \left( h(K/n) + (K/n) \ln(q-1) \right).$$

Finally, $K/n \to \lambda$, so by continuity $\rho(K/n) \to \rho(\lambda) = H_q(\lambda) \ln q$, and dividing by $\ln q$ finishes the job.