Why is the capacity according to the Shannon-Hartley theorem independent of the bandwidth's center frequency?


According to the Shannon-Hartley theorem the capacity $C$ of a channel which has a signal-to-noise ratio of $\frac SN$ and a bandwidth $B$ can be calculated as following: $$C=B\log_2\left(1+\frac SN\right)$$

However, my intuition tells me that 1 kHz of bandwidth centered around the frequency 1 GHz should be able to provide a higher capacity than 1 kHz of bandwidth centered around the frequency 1 MHz. The reasoning behind this being that the 1 GHz center frequency changes faster and should therefore be able to convey more signal elements per second than the 1 MHz carrier.

My question is then:

How come the capacity $C$ according to the Shannon-Hartley theorem is independent of where the bandwidth $B$ has its center frequency?

Thanks for your time! If you think I should add/remove any tags or ask this question in another Stack Exchange community, please let me know.


Accepted answer:

Consider a baseband signal

$$x_\text{BB}(t) \in \mathbb{C}, t \in \mathbb{R},$$

of (single-sided) bandwidth $B>0$ (in Hz) and its passband version

$$x_\text{PB}(t)\triangleq \Re\{x_\text{BB}(t) e^{ i 2 \pi f_c t}\} \in \mathbb{R}, t\in\mathbb{R},$$

where $f_c\gg B$ is the carrier frequency. Although the spectrum of $x_\text{PB}(t)$ technically extends up to $f_c+B$, which implies faster "time variation", the information that can be transmitted via $x_\text{PB}(t)$ cannot exceed that of $x_\text{BB}(t)$. You can see this by noting the following.

  • $x_\text{PB}(t)$ has no frequency components in the range $[0,f_c-B)$, therefore, it does not "vary" the same way as a signal actually transmitting information via all the frequencies in the interval $[0,f_c+B]$.
  • The bandwidth expansion achieved via $x_\text{PB}(t)$ is artificially introduced and deterministic. Indeed, it is due only to the deterministic mapping $x_\text{BB}(t) \rightarrow x_\text{PB}(t)$, an operation that is known to the receiver (i.e., it carries no information) and whose effect can be reversed by the standard baseband-conversion procedure of mixing and low-pass filtering. This procedure recovers the original baseband signal $x_\text{BB}(t)$ and, in the case of white Gaussian noise, it can be shown to be a so-called information-lossless operation. The baseband conversion at the receiver thus turns the passband transmission into the baseband transmission of $x_\text{BB}(t)$, whose bandwidth $B$ is what determines capacity, independently of the carrier frequency $f_c$.
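The mixing-and-low-pass-filtering argument above can be checked numerically. This is a minimal sketch (all numbers, such as `fs`, `f_c`, and the tone frequencies, are illustrative assumptions, not from the answer): it up-converts a complex baseband signal to passband exactly as in the definition of $x_\text{PB}(t)$, then mixes it back down and applies an ideal low-pass filter via FFT masking, recovering $x_\text{BB}(t)$ essentially perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1_000_000                     # sample rate, Hz (illustrative values)
f_c = 100_000                      # carrier frequency, Hz
B = 5_000                          # baseband bandwidth, Hz
t = np.arange(10_000) / fs         # 10 ms observation window

# Complex baseband signal: a few tones placed on exact FFT bins below B,
# so the ideal FFT-based filter below introduces no leakage error.
tone_freqs = rng.integers(-40, 41, size=4) * 100.0   # multiples of 100 Hz, |f| < B
tone_amps = rng.normal(size=4) + 1j * rng.normal(size=4)
x_bb = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(tone_amps, tone_freqs))

# Up-conversion: x_PB(t) = Re{ x_BB(t) exp(j 2 pi f_c t) }
x_pb = np.real(x_bb * np.exp(2j * np.pi * f_c * t))

# Down-conversion: mix down, then ideal low-pass filter via FFT masking
mixed = x_pb * np.exp(-2j * np.pi * f_c * t)
spectrum = np.fft.fft(mixed)
freqs = np.fft.fftfreq(t.size, 1 / fs)
spectrum[np.abs(freqs) > B] = 0.0          # removes the image centred at -2 f_c
x_rec = 2.0 * np.fft.ifft(spectrum)        # factor 2 restores the halved amplitude

rel_err = np.max(np.abs(x_rec - x_bb)) / np.max(np.abs(x_bb))
print(f"relative reconstruction error: {rel_err:.1e}")
```

The recovered signal matches $x_\text{BB}(t)$ to floating-point precision, illustrating that the carrier mapping carries no information of its own and is fully reversible at the receiver.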
Another answer:
No: each frequency is equivalent in what it can convey. A sinusoid at any frequency can carry information in its phase and amplitude. When you modulate the signal to a different centre frequency, you offset the frequencies of the sinusoids being used, but the range of frequencies, i.e. the bandwidth, remains the same.

You probably think that the higher carrier frequency means the data rate could increase, but it doesn't. The time-domain symbol still has the same length; it merely contains higher-frequency sinusoids. It is the length of the symbol that determines the data rate, and that length is inversely proportional to the bandwidth.
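A minimal numeric sketch of that point (the numbers are illustrative, taken from the question's 1 kHz scenario): the symbol duration is set by the bandwidth alone, and a higher carrier only packs more cycles into the same symbol, it does not shorten it.

```python
B = 1_000.0              # channel bandwidth, Hz (1 kHz, as in the question)
T_symbol = 1.0 / B       # symbol duration supported by that bandwidth, s

for f_c in (1e6, 1e9):   # 1 MHz vs 1 GHz carrier
    cycles_per_symbol = f_c * T_symbol
    print(f"f_c = {f_c:.0e} Hz: T_symbol = {T_symbol * 1e3:.0f} ms, "
          f"{cycles_per_symbol:,.0f} carrier cycles per symbol")
```

Both carriers give the same 1 ms symbol, i.e. the same symbol rate; only the number of carrier cycles per symbol (1,000 vs 1,000,000) differs.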

The capacity of a real baseband channel is:

$$C=2B\log_2(M)=2B\log_2\sqrt{1+\frac S N}=B\log_2 \Big(1+\frac S N\Big) $$

where $B$ is the (single-sided, single-lobe) bandwidth of the baseband channel, $2B$ is the Nyquist symbol rate, and $M=\sqrt{1+\frac SN}$ is the number of distinguishable signal levels per symbol.

For a real passband channel with the same bandwidth $B$ as the baseband channel, the capacity is

$$C=B\log_2(M) = \frac 1 2 B\log_2 \Big(1+\frac S N\Big) $$

because the two sidebands of a real passband signal mirror each other, so only half of the passband bandwidth carries independent information and the symbol rate drops from $2B$ to $B$.

If the passband channel is complex, i.e. the equivalent time-domain signal is able to be complex (as with I/Q modulation), then both halves of the passband carry independent information and the capacity is

$$C=B\log_2(M^2) = B\log_2 \Big(1+\frac S N\Big) $$

which matches the baseband result: the capacity depends only on the bandwidth $B$, not on the centre frequency.
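Putting numbers on this (the 30 dB SNR is an assumed example value; the 1 kHz bandwidth is from the question): the carrier frequency simply never appears in the Shannon-Hartley formula, so the capacity is identical whether the 1 kHz sits around 1 MHz or around 1 GHz.

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

B = 1_000.0       # 1 kHz of bandwidth, as in the question
snr = 1_000.0     # linear S/N of 1000, i.e. 30 dB (assumed example value)

# f_c is looped over only to emphasise that it plays no role in the result:
for f_c in (1e6, 1e9):
    print(f"f_c = {f_c:.0e} Hz -> C = {shannon_capacity(B, snr):.0f} bit/s")
```

Both lines print the same capacity (about 9967 bit/s), consistent with both answers above.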