I have two continuous random variables X and Y and I know that their mutual information is quite small (less than 0.1 nats).
I understand that for discrete random variables, mutual information — in particular its version normalized by the smaller of the two entropies — gives an adequate characterization of the "amount of dependence" between X and Y:
$$\frac{I(X;Y)}{\min[H(X),H(Y)]}$$
However, for continuous random variables, the corresponding (discrete) entropies are infinite and the above ratio goes to zero. Is there a way to characterize the "amount of dependence" between two such continuous random variables based on mutual information, and to argue that one can treat them as independent without introducing a significant error into the analysis?
The "measure" of dependence between $X$ and $Y$ that you are proposing, apart from being unconventional, does not work as you expect: the ratio can be arbitrarily close to zero even for dependent variables.
For example, consider the binary random variable $X \in \{a, b\}$ with $\mathbb{P}(X=a)=\mathbb{P}(X=b)=1/2$ and the random variable $Y$ which, given $X=a$, takes one of the values in $\{1,2,\ldots,M\}$ with equal probability and, given $X=b$, takes one of the values in $\{M+1,M+2,\ldots,2 M\}$ with equal probability. Note that $X$ and $Y$ are "highly dependent" in the sense that knowledge of $Y$ implies perfect knowledge of $X$.
It is easy to see that $I(X;Y)=1$ bit (since $Y$ determines $X$, $I(X;Y)=H(X)=1$). Also note that $Y$ is uniformly distributed over $\{1,2,\ldots,2M\}$, so $H(Y)=\log_2 (2M) \rightarrow \infty$ as $M \rightarrow \infty$, making the ratio arbitrarily close to zero.
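To make this concrete, here is a small numerical check of the construction above (the helper `mutual_information_bits` is just an illustrative name, not a library function): it builds the joint distribution of $(X,Y)$ for a few values of $M$ and confirms that $I(X;Y)$ stays at exactly 1 bit while the ratio $I(X;Y)/H(Y)$ shrinks toward zero.

```python
import math

def mutual_information_bits(joint):
    """Mutual information in bits from a joint pmf given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

for M in (2, 16, 1024):
    # P(X=a) = P(X=b) = 1/2; given X=a, Y is uniform on {1,...,M};
    # given X=b, Y is uniform on {M+1,...,2M}.
    joint = {}
    for y in range(1, M + 1):
        joint[('a', y)] = 0.5 / M
    for y in range(M + 1, 2 * M + 1):
        joint[('b', y)] = 0.5 / M

    I = mutual_information_bits(joint)      # always 1 bit: Y determines X
    H_Y = math.log2(2 * M)                  # Y is uniform on {1,...,2M}
    print(f"M={M:5d}  I(X;Y)={I:.6f} bits  I/H(Y)={I / H_Y:.4f}")
```

Running this shows the ratio dropping (0.5, 0.2, ~0.09) even though $Y$ always reveals $X$ perfectly.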