Rescale linear interval [a, b] to a logarithmic [0, 1] scale instead of a linear [0, 1] (normalization)


I can't get my head around this simple problem. I would very much appreciate a good explanation in the answers.

I have a data array in which elements are between [a, b] and I normalize this array between [0, 1] using:

import numpy as np

min_scale = np.min(data)
max_scale = np.max(data)
normalized_data = (data - min_scale) / (max_scale - min_scale)

Instead of normalizing to a linear [0, 1] scale, I would like to normalize to a logarithmic scale between [0, 1] in order to increase the importance of values closer to the max and decrease the importance of values closer to the minimum.

I am still looking for a convincing formula. Does it make sense to first linearly normalize [a, b] to [1, e] and then take the natural log of the resulting vector?
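The approach sketched above can be written in a few lines of NumPy. The `data` array below is a hypothetical example; any array whose min and max differ works the same way:

```python
import numpy as np

# Hypothetical example data between some a and b.
data = np.array([2.0, 3.5, 5.0, 8.0, 10.0])

a, b = np.min(data), np.max(data)

# Step 1: linearly rescale [a, b] -> [1, e].
rescaled = 1.0 + (np.e - 1.0) * (data - a) / (b - a)

# Step 2: the natural log maps [1, e] -> [0, 1].
log_normalized = np.log(rescaled)
```

The endpoints are preserved (min maps to 0, max to 1), and the interior is warped by the concavity of the logarithm rather than mapped linearly.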

Accepted answer:

Your suggested transformation seems fine. You could add a degree of freedom by first linearly transforming $[a,b] \to [1,\theta]$ and then applying $\log_\theta$ (for $\theta > 0$, $\theta \neq 1$). This gives the transformation

$$x \mapsto \frac{\ln \left(\frac{a \theta -b}{a-b}+\frac{(1-\theta ) x}{a-b}\right)}{ \ln (\theta) }. $$
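This parameterized transformation can be sketched as follows; the function name `log_rescale` is an assumption for illustration, and the expression inside the log is written in the equivalent form $1 + (\theta - 1)\frac{x - a}{b - a}$, which matches the displayed formula term by term:

```python
import numpy as np

def log_rescale(x, a, b, theta):
    """Map [a, b] to [0, 1]: linear rescale to [1, theta],
    then log base theta (requires theta > 0, theta != 1)."""
    linear = 1.0 + (theta - 1.0) * (x - a) / (b - a)  # [a, b] -> [1, theta]
    return np.log(linear) / np.log(theta)             # log base theta -> [0, 1]

x = np.linspace(2.0, 10.0, 5)
y = log_rescale(x, a=2.0, b=10.0, theta=np.e)  # theta = e recovers the [1, e] version
```

Larger values of $\theta$ bend the curve more strongly, while $\theta \to 1$ approaches the plain linear normalization; in all cases $a \mapsto 0$ and $b \mapsto 1$.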