It is quite easy to map the whole of $\mathbb{N}$ injectively into a bounded interval of $\mathbb{R}$ (bijectively onto its image). For instance:
$f(x) = \frac{1}{(x+1)}$
neatly maps all the natural numbers into the interval $(0, 1]$. However, the density of this mapping is far from uniform. For any $\epsilon > 0$, only finitely many $x$ get mapped to a number greater than $\epsilon$. Casually speaking, most of the mapped numbers cluster near $0$.
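A quick numerical illustration of this clustering (a sketch of my own, not part of the claim itself): under $f(x) = \frac{1}{x+1}$, the condition $f(x) > \epsilon$ forces $x < \frac{1}{\epsilon} - 1$, so only finitely many naturals escape any neighbourhood of $0$.

```python
def f(x):
    """The map f(x) = 1 / (x + 1) from N into (0, 1]."""
    return 1 / (x + 1)

N = 10_000       # how many naturals to map
eps = 0.01       # the threshold epsilon

points = [f(x) for x in range(N)]
above = sum(p > eps for p in points)   # points landing in (eps, 1]
below = N - above                      # points landing in (0, eps]

# f(x) > 0.01 exactly when x <= 98, so 99 points lie above eps
# no matter how large N grows; the other 9901 pile up below it.
print(above, below)
```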
I can also trivially find a mapping that does not show this clustering, like $f(x) = x/2$, but then my target range is no longer bounded.
It is my impression that this is always the case: any mapping from $\mathbb{N}$ into a bounded interval of $\mathbb{R}$ will have one or more cluster points, near which most of the mapped numbers end up.
Is it possible to construct a bijective mapping whose image has uniform density, that is, such that no subinterval of the target interval contains infinitely more points than another subinterval of the same length? If not, what would be a convincing argument (preferably one a high-schooler can understand) that this is not possible?
Consider the sequence:$$0,1,\frac12,\frac13,\frac23,\frac14,\frac34,\frac15,\frac25,\frac35,\frac45,\frac16,\ldots$$It satisfies both conditions that you are interested in: it lists every rational in $[0,1]$ exactly once (the fractions in lowest terms, grouped by increasing denominator), and the Farey fractions of order $n$ are asymptotically equidistributed in $[0,1]$, so a subinterval of length $\ell$ receives roughly a fraction $\ell$ of the first $N$ terms, with no cluster point attracting more than its share.