While for $n=2$ it is fairly clear that the contours of a non-constant $f:\mathbb R^n\to\mathbb R$ are either extrema (and hence points) or (unions of) 1-dimensional isolines, for $n=3$ I am already unsure whether, besides points and 2-dimensional isosurfaces, (1D) isolines are also possible.
I tried starting with a map $\gamma:\mathbb R^m\to\mathbb R^n$ of a contour with $0\leqslant m\leqslant n$, and demanding
$$0 \stackrel!= df(\vec\gamma(\vec s)) = \sum_{k=1}^n\sum_{j=1}^m \partial_{\gamma_k} f(\vec\gamma(\vec s))\cdot \partial_{s_j}\gamma_k(\vec s)\cdot ds_j =: f_k\gamma^k_{\hphantom{k}j}\,ds^j.$$
For a non-vanishing gradient $f_k\neq\vec0$ (more correctly $(f_k)_k\neq\vec 0$, but I'll omit the $(\cdot)_k$ etc. for brevity), the arbitrariness of the $m$ infinitesimal elements $ds_j$ (otherwise $m$ was chosen too large) means $f_k$ must be a left eigenvector of $\gamma^k_{\hphantom{k}j}$ with eigenvalue $0$, i.e. lie in its left null space, so the rank $m$ of that Jacobian is lower than $n$. But is there any further statement to make in general?
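To make the condition $f_k\gamma^k_{\hphantom{k}j} = 0$ concrete, here is a small symbolic check with sympy. The function $f(x,y,z)=x^2+y^2+z^2$ and the spherical-angle parametrization of its level set $f=1$ are my own example choices, not taken from the discussion above:

```python
import sympy as sp

# Example check of the orthogonality condition f_k γ^k_j = 0 on a concrete
# contour: the unit sphere, a level set of f(x,y,z) = x² + y² + z².
s1, s2 = sp.symbols('s1 s2')          # contour parameters (m = 2)
x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2

# Parametrization γ: R² → R³ of the level set f = 1 (spherical angles)
gamma = sp.Matrix([sp.sin(s1)*sp.cos(s2),
                   sp.sin(s1)*sp.sin(s2),
                   sp.cos(s1)])

grad_f = sp.Matrix([f.diff(v) for v in (x, y, z)]).T   # row vector f_k
jac = gamma.jacobian([s1, s2])                         # γ^k_j, a 3×2 matrix

# Substitute the parametrization into the gradient and contract;
# the residual 1×2 row vector should vanish identically.
subs = dict(zip((x, y, z), gamma))
residual = sp.simplify(grad_f.subs(subs) * jac)
print(residual)  # Matrix([[0, 0]])
```

The gradient is everywhere orthogonal to both tangent directions of the sphere, which is exactly the statement that $f_k$ annihilates every column of the Jacobian.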
*edit:* Note that by *non-constant* I mean that $f$ truly depends on all $n$ variables. Of course, as Anthony Carapetis showed, you can extrude the function into an arbitrary number of additional dimensions on which $f$ does not actually depend, e.g. $f(x,y,z) = g(x,y)$, which simply adds $z$ to the level set's variables. So the constraint I'm putting on $f$ is $$\forall\vec x\in\mathbb R^n\,\forall k\in\{1,\dots,n\}\,\exists\epsilon>0\,\exists\vec y\in B_n(\vec x,\epsilon): \partial_k f(\vec y)\neq 0$$
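As a quick symbolic illustration of how an extruded function violates this constraint (the concrete choice $g(x,y)=xy$ is arbitrary, picked only for the sketch):

```python
import sympy as sp

# Sketch: f(x,y,z) = g(x,y) = x*y fails the dependence constraint, because
# ∂_z f vanishes identically, so no ball around any point contains a point
# where ∂_z f ≠ 0.
x, y, z = sp.symbols('x y z')
f = x * y                   # "extruded" function: does not depend on z

partials = [f.diff(v) for v in (x, y, z)]
print(partials)             # [y, x, 0]

# The z-partial is the zero expression, not merely zero at isolated points
depends_on_all = all(p != 0 for p in partials)
print(depends_on_all)       # False
```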
At points where $\mathrm{d}f \neq 0$, the level sets are (locally) of codimension 1. In fact, for a mapping $f: \mathbb{R}^n \to \mathbb{R}^k$, if $\mathrm{d}f$ is surjective, the level sets are (locally) of codimension $k$. This is the content of the Implicit Function Theorem. Part of the reason this is true is that if $\mathrm{d}f(x): \mathbb{R}^n \to \mathbb{R}^k$ is surjective with $k$ finite, then by continuity $\mathrm{d}f(y):\mathbb{R}^n \to \mathbb{R}^k$ is also surjective for all $y$ with $|y-x|$ sufficiently small. That is to say, when $\mathrm{d}f$ has maximal rank at a point, the rank is locally constant.
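A minimal numerical sketch of checking the surjectivity hypothesis, assuming the example map $f(x,y,z) = (x^2+y^2+z^2,\; z)$ from $\mathbb{R}^3$ to $\mathbb{R}^2$ (my own choice):

```python
import numpy as np

# For f(x, y, z) = (x² + y² + z², z), df has rank 2 away from the z-axis,
# so level sets there are locally codimension-2 submanifolds (curves in R³).
def jacobian(p):
    x, y, z = p
    return np.array([[2*x, 2*y, 2*z],
                     [0.0, 0.0, 1.0]])

rank_generic = np.linalg.matrix_rank(jacobian((1.0, 0.0, 0.0)))
rank_origin = np.linalg.matrix_rank(jacobian((0.0, 0.0, 0.0)))
print(rank_generic, rank_origin)  # 2 1: df is not surjective at the origin
```

Where the rank is the maximal value 2, the Implicit Function Theorem applies; at the origin it drops to 1 and the theorem gives no conclusion.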
After understanding this notion, we see that the Implicit Function Theorem generalises to the Constant Rank Theorem, a version of which reads: if $f:\mathbb{R}^n\to\mathbb{R}^k$ is a smooth mapping and $\mathrm{d}f|_U$ has constant rank $p$ on an open subset $U$, then locally on $U$ the level sets of $f$ are codimension-$p$ submanifolds.
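A toy instance of the non-maximal constant-rank case, using the assumed example $f(x,y) = (x, x)$ (my own choice): its differential has rank 1 everywhere, and its level sets $f^{-1}(c,c)$ are the vertical lines $x = c$, codimension-1 submanifolds of $\mathbb{R}^2$, just as the theorem predicts.

```python
import numpy as np

# f(x, y) = (x, x): the Jacobian is the same at every point, with rank 1,
# which is constant but not maximal.  The Constant Rank Theorem still
# applies, giving codimension-1 level sets (the vertical lines x = c).
def jacobian(p):
    return np.array([[1.0, 0.0],
                     [1.0, 0.0]])   # df happens to be constant for this f

ranks = {np.linalg.matrix_rank(jacobian(p))
         for p in [(0, 0), (1, 2), (-3, 5)]}
print(ranks)  # {1}: rank 1 at every sampled point
```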
Now, the rank of a continuous family of matrices is semicontinuous in one direction: a small perturbation cannot decrease the rank, but it can increase it (think of the family $\lambda I$, where $I$ is the identity matrix, evaluated at $\lambda = 0$). This is why, if the rank is already maximal, the hypotheses of the Constant Rank Theorem are satisfied automatically. But for lower ranks, constancy needs to be postulated.
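The $\lambda I$ family above can be checked directly:

```python
import numpy as np

# The λI family: at λ = 0 the rank is 0, but any perturbation to λ ≠ 0,
# however small, jumps the rank up to the full n.  Rank can only increase
# under small perturbations, never decrease.
n = 3
rank_at_zero = np.linalg.matrix_rank(0.0 * np.eye(n))
rank_perturbed = np.linalg.matrix_rank(1e-9 * np.eye(n))
print(rank_at_zero, rank_perturbed)  # 0 3
```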
In particular, even in the case $f:\mathbb{R}^2 \to \mathbb{R}$, we can run into problems at saddle points. Consider the function $f(x,y) = xy$. At the origin $\mathrm{d}f = 0$, and $f^{-1}(0)$ is not a single point: it is in fact the union of the $x$- and $y$-axes. Therefore it is not even a topological manifold, and the usual definition of "dimension" cannot be applied to it.
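The saddle example can be verified symbolically: $\mathrm{d}f$ vanishes at the origin, yet both coordinate axes lie entirely inside $f^{-1}(0)$, so near the origin the level set is a cross rather than a 1-dimensional manifold.

```python
import sympy as sp

# The saddle example f(x, y) = x*y from the text.
x, y, t = sp.symbols('x y t', real=True)
f = x * y

# df = (y, x) vanishes at the origin
grad_at_origin = [f.diff(v).subs({x: 0, y: 0}) for v in (x, y)]
print(grad_at_origin)             # [0, 0]

# Both axes are contained in the zero level set, for every value of t
on_x_axis = f.subs({x: t, y: 0})  # identically 0 along the x-axis
on_y_axis = f.subs({x: 0, y: t})  # identically 0 along the y-axis
print(on_x_axis, on_y_axis)       # 0 0
```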