Let's say we have a probability distribution with 20 distinct outcomes, and for that distribution the entropy is calculated to be $2.5$, while the maximal possible entropy is of course $-\ln(\frac{1}{20}) = \ln(20) \approx 3$.
How can I convey that $2.5$ is quite a high entropy given the number of possible outcomes? It is hard to get a feel for it by just stating that it is $2.5$. My gut tells me it would be fine to simply divide the entropy by the maximum possible entropy, giving a number between 0 and 1; in this case $\frac{2.5}{\ln(20)} \approx 0.83$. Is this a valid way of normalizing it (given that entropy is on a logarithmic, not a linear, scale)? Has this been done before?
Given a probability distribution $p$ with $n$ distinct outcomes $x_i$, the quantity $$\frac{H}{H_\text{max}} = \frac{-\sum_{i=1}^n p(x_i) \log(p(x_i))}{-\sum_{i=1}^n \frac{1}{n} \log(\frac{1}{n})} = -\sum_{i=1}^n \frac{p(x_i) \log(p(x_i))}{\log(n)}$$ is sometimes called the efficiency, or the normalized entropy.
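As a quick sanity check, this quantity is easy to compute directly; here is a sketch using NumPy (the function name `normalized_entropy` is my own, not from any library):

```python
import numpy as np

def normalized_entropy(p):
    """Shannon entropy of p divided by its maximum, log(n)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()              # make sure p sums to 1
    nz = p[p > 0]                # convention: 0 * log(0) = 0
    H = -np.sum(nz * np.log(nz))
    return H / np.log(len(p))

print(normalized_entropy(np.full(20, 1 / 20)))     # uniform over 20 outcomes -> 1 (up to rounding)
print(normalized_entropy([0.9] + [0.1 / 19] * 19)) # strongly peaked -> much closer to 0
```

The uniform distribution attains the maximum, so its efficiency is 1, while a distribution concentrated on a single outcome has efficiency 0.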
Wikipedia has a short paragraph on this, but beyond that I haven't found any good references. A search on Google Scholar turns up several papers using this terminology, but nothing discussing it in depth.
Also note that, by the change-of-base rule for logarithms, the base used does not matter; we have $$\frac{\log(x)}{\log(n)} = \log_n(x) = \frac{\log_b(x)}{\log_b(n)}$$ for any base $b$, so the normalized entropy is the same whether you work in nats or bits.
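This base-independence can be verified numerically (a sketch with NumPy; the distribution is just an arbitrary example):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
n = len(p)

# Normalized entropy computed in nats (natural log) and in bits (log base 2)
eff_nats = -np.sum(p * np.log(p)) / np.log(n)
eff_bits = -np.sum(p * np.log2(p)) / np.log2(n)

print(eff_nats, eff_bits)  # equal up to floating-point error
```

Both come out the same, because the change-of-base constant cancels between numerator and denominator.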