hypersphere packing and Shannon capacity


Shannon claims that "We can also send at the rate $C$ with arbitrarily small $\epsilon$" [1], where

$$C = W \log_2\!\left(\frac{P+N}{N}\right)$$

$C$ is the channel capacity in bits/second

$W$ is the bandwidth in Hz

$P$ is the signal power

$N$ is the noise power

He proves this by mapping the signal, a function of $t$, into a $2WT$-dimensional signal space, where $T$ is the duration of the signal in seconds (by the sampling theorem, a signal of bandwidth $W$ and duration $T$ is determined by $2WT$ samples). From there he posits that each received signal lies inside a hypersphere of radius $\sqrt{2WT(P+N)}$, and since noise is added to the signal, each transmitted codeword can be thought of as the center of a noise sphere of radius $\sqrt{2WTN}$. To be decoded without error, the received signals must map unambiguously back to messages, so the noise spheres can't overlap. Thus the volume of the outer sphere divided by the volume of one noise sphere bounds the number of distinguishable messages $M$.
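As a sanity check on the geometry, here is a small sketch (with made-up example values for $W$, $T$, $P$, $N$, not taken from the paper) computing the dimension $n = 2WT$ and the two radii; note that their ratio depends only on the signal-to-noise ratio, not on $W$ or $T$:

```python
import math

def sphere_radii(W, T, P, N):
    """Radii of the received-signal sphere and a noise sphere in
    n = 2WT dimensions, following Shannon's geometric picture."""
    n = 2 * W * T                       # dimension of the signal space
    r_outer = math.sqrt(n * (P + N))    # received signals lie inside this sphere
    r_noise = math.sqrt(n * N)          # noise displaces each codeword about this far
    return n, r_outer, r_noise

# Illustrative numbers (assumed): 1 kHz bandwidth, 1 second, SNR = 15
n, r_outer, r_noise = sphere_radii(W=1000, T=1.0, P=15.0, N=1.0)
print(n, r_outer / r_noise)  # ratio is sqrt((P+N)/N) = 4
```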

$$ M \leq \left(\sqrt{\frac{P+N}{N}}\right)^{2WT} $$

$$ C = \frac{\log_2 M}{T} \leq W \log_2\!\left(\frac{P+N}{N}\right) $$
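Plugging in illustrative numbers (a hypothetical 3 kHz channel at SNR 1023, chosen so the logarithm comes out exact) shows how the bound is evaluated:

```python
import math

def capacity(W, P, N):
    """Capacity bound C = W * log2((P + N) / N) in bits/second."""
    return W * math.log2((P + N) / N)

def message_bound(W, T, P, N):
    """Sphere-counting bound on the number of messages:
    M <= ((P + N) / N) ** (W * T)."""
    return ((P + N) / N) ** (W * T)

# Example values (assumed, for illustration only)
C = capacity(W=3000, P=1023.0, N=1.0)   # 3000 * log2(1024) = 30000 bits/s
print(C)
```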

This establishes $C$ as an upper bound, which makes sense. But then he proves that you can code arbitrarily close to this value, which doesn't make sense to me: when packing spheres there has to be empty space, right? Does that not apply in higher dimensions? If there is empty space, then dividing the volume of the large sphere by the volume of a small sphere only gives a ratio of volumes, not the number of non-overlapping spheres you can actually pack. It seems to me they'd have to overlap.
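One relevant high-dimensional effect (a sketch of the intuition, not Shannon's actual proof, which uses random codes and tolerates a vanishingly small error probability rather than demanding strictly non-overlapping spheres) is that volume concentrates near the surface as the dimension $n = 2WT$ grows: the noise vector's length concentrates sharply around $\sqrt{2WTN}$, so low-dimensional packing intuition does not carry over directly. A toy computation of the fraction of a ball's volume in a thin outer shell:

```python
def outer_shell_fraction(n, eps):
    """Fraction of an n-dimensional ball's volume lying within a
    relative distance eps of its surface: 1 - (1 - eps)^n.
    This tends to 1 as n grows, for any fixed eps > 0."""
    return 1.0 - (1.0 - eps) ** n

# With a shell only 1% thick, almost all the volume is near the
# surface once the dimension is large.
for n in (3, 100, 10_000):
    print(n, outer_shell_fraction(n, 0.01))
```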

[1] C. E. Shannon, "Communication in the Presence of Noise," Proceedings of the IRE, 1949. DOI: 10.1109/JRPROC.1949.232969