Consider $N_s$ source symbols $v$ with the PDF
$p(v) = e^{-v}\,I_{[0,\infty)}(v)$
The $k$-bit quantizer $Q$ maps the $N_s$ source symbols $v$ to symbols $s \in \{0, \dots, 2^k-1\}$. The quantization boundaries are equally spaced at $\ln(2)\cdot s$.
I tried to find the entropy of the quantization symbols $s$ given $k$ as follows:
$H_k = -\sum_{s=0}^{2^k-1} p(s)\log_2 p(s)$
I defined $p(s) = 0.5\,e^{-\ln(2)s}$ for $s \in \{0, \dots, 2^k-2\}$ and $p(s) = e^{-\ln(2)s}$ for $s = 2^k-1$ (the last bin collects the entire tail of the distribution).
$H_k = -\sum_{s=0}^{2^k-2} 0.5\,e^{-\ln(2)s}\log_2\!\left(0.5\,e^{-\ln(2)s}\right) - e^{-\ln(2)(2^k-1)}\log_2\!\left(e^{-\ln(2)(2^k-1)}\right)$
I then plotted $H_k$ as a function of $k \in [1, 100]$, and the resulting plot shows that $H_k$ becomes constant at 2 after around $k = 10$. How do I interpret this result? Why does the entropy become a constant? Is the entropy the lower bound on the number of bits required to represent the source symbols? Does this mean that increasing $k$ does not affect the redundancy? Or did I make a mistake in my calculation?
Perhaps you are overthinking things. Do you understand how entropy is calculated in general? Do you understand that if all of the symbols are equally probable, the entropy simplifies to $\log(w)/\log(2)$ bits, where $w$ is the number of distinct symbols?
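In your case the symbols are not equally probable, so that shortcut does not apply directly, but your sum does have a clean closed form. Substituting $p(s) = 0.5\,e^{-\ln(2)s} = 2^{-(s+1)}$ for the inner bins and $p(2^k-1) = 2^{-(2^k-1)}$ for the last bin into your expression gives:

```latex
H_k = \sum_{s=0}^{2^k-2} (s+1)\,2^{-(s+1)} + \left(2^k - 1\right) 2^{-(2^k-1)}
```

As $k \to \infty$ the last term vanishes and the sum converges to $\sum_{n=1}^{\infty} n\,2^{-n} = 2$, which matches the plateau at 2 bits you observed: beyond $k \approx 10$ the extra bins carry negligible probability and contribute essentially nothing to the entropy.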
Here is some code:
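A minimal sketch of the entropy calculation in Python, assuming NumPy; the function name `quantizer_entropy` is my own choice, not something from the question.

```python
import numpy as np

def quantizer_entropy(k):
    """Entropy (bits) of the k-bit quantizer output for p(v) = e^{-v}
    with bin boundaries at ln(2)*s: the inner bins have probability
    p(s) = 0.5 * exp(-ln(2)*s) = 2^-(s+1), and the last bin collects
    the whole tail, p(2^k - 1) = 2^-(2^k - 1)."""
    s = np.arange(2**k - 1)                    # inner symbols 0 .. 2^k - 2
    p_inner = 0.5 * np.exp(-np.log(2) * s)     # 2^-(s+1)
    p_last = np.exp(-np.log(2) * (2**k - 1))   # tail probability of last bin
    p = np.append(p_inner, p_last)
    assert np.isclose(p.sum(), 1.0)            # sanity check: a valid PMF
    return -np.sum(p * np.log2(p))

# H_k saturates near 2 bits once k is around 10
for k in range(1, 11):
    print(k, quantizer_entropy(k))
```

Note that for $k$ much beyond 10 the probabilities of the extra bins underflow double precision, but by then $H_k$ has already converged to 2 bits, so plotting up to $k = 10$ or so tells the whole story.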