Consider a random variable $X$ taking values in $[0, 1]$ with unknown distribution.
Let the set $S_n = \left \{\frac{1}{n}, \frac{2}{n}, \dots, \frac{n-1}{n} \right \}$ be used to quantize $X$. The standard quantization function $f_n : [0, 1] \to S_n$ maps $x$ to the nearest point of $S_n$:
$$f_n(x) = \arg\min_{s \in S_n} |x - s|$$
So the worst-case additive quantization error $|x - f_n(x)|$ decays as $O\left(\frac{1}{n} \right)$.
Suppose we measure multiplicative error as
$$e(x) = \frac{x}{f_n(x)}$$
I think the worst possible value of $e(x)$ occurs when $x$ is just below $\frac{3}{2n}$, since then $f_n(x) = \frac{1}{n}$, so that
$$e(x) \to \frac{3}{2} \text{ no matter how large $n$ is.}$$
But one usually expects larger $n$ to give finer quantization and hence smaller error. The additive error does go down, yet the multiplicative error stays the same.
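A quick numerical check of this claim (a sketch; the helper name `quantize_uniform` is my own, not from the question): on the uniform grid with nearest-point rounding, the worst-case additive error shrinks like $\frac{1}{n}$ while the worst-case multiplicative error stays near $\frac{3}{2}$, attained just below $x = \frac{3}{2n}$.

```python
def quantize_uniform(x, n):
    """Round x in [0, 1] to the nearest point of {1/n, ..., (n-1)/n}."""
    k = min(max(round(x * n), 1), n - 1)  # nearest multiple of 1/n, clamped to the grid
    return k / n

for n in (10, 100, 1000):
    # Dense test points in (0, 1); avoids x = 0, where e(x) = 0 trivially.
    xs = [i / (100 * n) for i in range(1, 100 * n)]
    add_err = max(abs(x - quantize_uniform(x, n)) for x in xs)
    mul_err = max(x / quantize_uniform(x, n) for x in xs)
    print(n, round(add_err, 5), round(mul_err, 3))
```

The additive column shrinks roughly by a factor of $10$ per row, while the multiplicative column stays pinned near $1.5$.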
Question: Is there a different quantization set $S_n$ or mapping $f_n : [0, 1] \to S_n$ which results in decaying multiplicative error as $n \to \infty$ ?
As you have defined $e(x)$, it can range up to $\frac 32$ if $x$ is just smaller than $\frac 3{2n}$, because then $f_n(x)=\frac 1n$. But it can also range down to $0$ when $x = 0$, which is multiplicatively infinitely far away from $\frac 1n$.
You can't solve the problem with $0$ with your definition of error, but you can move more points of $S_n$ downward. A natural approach is to make the ratio between successive elements of $S_n$ constant. You still need to define what the smallest member is and there doesn't seem to be any natural way to choose it because $0$ is infinitely far away from the smallest element. Take $n=2$ so $S_2=\{a^2,a\}$. Should $a$ be $\frac 12, \frac 1{10}, \frac 1{10000}$ or what?
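A sketch of the constant-ratio idea, under the (arbitrary, as noted above) assumption that we fix the smallest element at some floor `eps`; the function name `quantize_geometric` is mine. With common ratio $r = (1/\varepsilon)^{1/(n-1)}$ and rounding to the nearest grid point on a log scale, the multiplicative error for $x \in [\varepsilon, 1]$ is at most $\sqrt{r}$, which tends to $1$ as $n \to \infty$; values below $\varepsilon$ remain uncovered.

```python
import math

def quantize_geometric(x, n, eps=1e-3):
    """Round x in [eps, 1] to the nearest element of the geometric grid
    {eps * r**k : k = 0, ..., n-1}, r = (1/eps)**(1/(n-1)), nearest in log scale."""
    r = (1.0 / eps) ** (1.0 / (n - 1))
    k = round(math.log(x / eps, r))   # nearest exponent on the log scale
    k = min(max(k, 0), n - 1)         # clamp to the grid
    return eps * r ** k

for n in (10, 100, 1000):
    # Log-spaced test points covering [1e-3, 1].
    xs = [10 ** (-3 * i / 10000) for i in range(10000)]
    ratios = [x / quantize_geometric(x, n) for x in xs]
    worst = max(max(ratios), 1 / min(ratios))  # symmetric multiplicative error
    print(n, round(worst, 4))
```

Here the worst multiplicative error does decay toward $1$ as $n$ grows, but only on $[\varepsilon, 1]$; the choice of $\varepsilon$ is exactly the arbitrary choice of smallest member discussed above.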