According to the Shannon-Hartley theorem, the capacity $C$ of a channel with signal-to-noise ratio $\frac SN$ and bandwidth $B$ can be calculated as follows: $$C=B\log_2\left(1+\frac SN\right)$$
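To make the formula concrete, here is a small worked example with hypothetical numbers (1 kHz of bandwidth at a 30 dB linear SNR of 1000):

```python
import math

# Hypothetical example values: B = 1 kHz, S/N = 1000 (i.e. 30 dB)
B = 1_000            # bandwidth in Hz
snr = 1_000          # linear signal-to-noise ratio S/N

# Shannon-Hartley capacity in bit/s
C = B * math.log2(1 + snr)
print(round(C))      # ≈ 9967 bit/s
```

Note that the center frequency of the band appears nowhere in this calculation, which is exactly the puzzle below.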
However, my intuition tells me that 1 kHz of bandwidth centered at 1 GHz should provide a higher capacity than 1 kHz of bandwidth centered at 1 MHz. My reasoning is that the 1 GHz carrier changes faster and should therefore be able to convey more signal elements per second than the 1 MHz carrier.
My question is then:
How come the capacity $C$ according to the Shannon-Hartley theorem is independent of where the bandwidth $B$ has its center frequency?
Thanks for your time! If you think I should add/remove any tags or ask this question in another Stack Exchange community, please let me know.
Consider a baseband signal
$$x_\text{BB}(t) \in \mathbb{C}, t \in \mathbb{R},$$
of (single-sided) bandwidth $B>0$ (in Hz) and its passband version
$$x_\text{PB}(t)\triangleq \Re\{x_\text{BB}(t) e^{ i 2 \pi f_c t}\} \in \mathbb{R}, t\in\mathbb{R},$$
where $f_c\gg B$ is the carrier frequency. Although, technically, the spectrum of $x_\text{PB}(t)$ extends up to frequency $f_c+B$, which implies greater "time variation", the information that can be transmitted via $x_\text{PB}(t)$ cannot be greater than that of $x_\text{BB}(t)$. You can see that by noting the following.
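As a numerical sanity check of this point, the sketch below (with hypothetical sample rate, carrier, and tone frequencies chosen so the ideal FFT-based lowpass is exact) builds a band-limited complex baseband signal, upconverts it to a real passband signal exactly as in the definition of $x_\text{PB}(t)$, then downconverts and recovers the original baseband signal to machine precision, showing that moving the band to a high carrier adds no information:

```python
import numpy as np

# Hypothetical parameters, chosen so every tone and the carrier
# fall exactly on FFT bins (fs/N = 1000 Hz resolution).
fs = 1_024_000       # sample rate, Hz
N = 1024             # number of samples
fc = 100_000         # carrier frequency, Hz (fc >> B)
B = 4_000            # single-sided baseband bandwidth, Hz
t = np.arange(N) / fs

# Complex baseband signal: a few tones inside |f| < B
tones = [(-3_000, 1.0 + 0.5j), (-1_000, -0.7j), (2_000, 0.3 - 1.2j)]
x_bb = sum(a * np.exp(2j * np.pi * f * t) for f, a in tones)

# Passband version: x_PB(t) = Re{ x_BB(t) * exp(j 2 pi fc t) }, a real signal
x_pb = np.real(x_bb * np.exp(2j * np.pi * fc * t))

# Downconversion: mix back down, then remove the image at -2*fc
# with an ideal lowpass implemented by zeroing FFT bins with |f| > B.
mixed = 2 * x_pb * np.exp(-2j * np.pi * fc * t)
X = np.fft.fft(mixed)
freqs = np.fft.fftfreq(N, d=1 / fs)
X[np.abs(freqs) > B] = 0
x_rec = np.fft.ifft(X)

# The baseband signal is recovered to numerical precision.
print(np.max(np.abs(x_rec - x_bb)))
```

Since the up/down conversion is invertible, $x_\text{PB}(t)$ carries exactly the information of $x_\text{BB}(t)$, regardless of how large $f_c$ is.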