One can compute the distance between probability measures $\mu$ and $\nu$ using various metrics, provided they are supported on a space of the same dimension. For instance, in a finite-dimensional setting, one can measure the distance between two multivariate Gaussian densities $f_1\sim \mathcal{N}^{(k)}(\boldsymbol{\mu}_1,\boldsymbol{\Sigma}_1)$ and $f_0\sim \mathcal{N}^{(l)}(\boldsymbol{\mu}_0,\boldsymbol{\Sigma}_0)$ as $$d(f_1\vert\vert f_0)=\sqrt{2D_{KL}(f_1\vert\vert f_0)},$$ where $D_{KL}(\cdot\vert\vert\cdot)$ is the Kullback–Leibler divergence and $k,l$ are the dimensions associated with each density. This only works when $k=l$, because $D_{KL}(\cdot\vert\vert\cdot)$ is not defined otherwise.
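For concreteness, here is a minimal sketch of the $k=l$ case (function names are my own), using the standard closed-form expression for the KL divergence between two multivariate Gaussians:

```python
import numpy as np

def kl_gaussians(mu1, sigma1, mu0, sigma0):
    """Closed-form D_KL(N(mu1, sigma1) || N(mu0, sigma0)), equal dimensions only."""
    k = mu1.shape[0]
    sigma0_inv = np.linalg.inv(sigma0)
    diff = mu0 - mu1
    return 0.5 * (
        np.trace(sigma0_inv @ sigma1)      # tr(Sigma0^{-1} Sigma1)
        + diff @ sigma0_inv @ diff         # Mahalanobis term
        - k                                # dimension
        + np.log(np.linalg.det(sigma0) / np.linalg.det(sigma1))
    )

def kl_distance(mu1, sigma1, mu0, sigma0):
    # d(f1 || f0) = sqrt(2 * D_KL(f1 || f0))
    return np.sqrt(2.0 * kl_gaussians(mu1, sigma1, mu0, sigma0))
```

For example, with $\boldsymbol{\mu}_1=(1,0)$, $\boldsymbol{\mu}_0=(0,0)$ and both covariances the identity, $D_{KL}=1/2$ and the distance is $1$; `kl_distance` fails as soon as the dimensions of its arguments disagree, which is exactly the limitation in question.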
Is there, therefore, a ''metric'' or an extension of traditional metrics (e.g., Lévy–Prokhorov, total variation, Wasserstein) to compute the ''distance'' (or some kind of similarity indicator) between probability measures/densities supported on spaces of different dimensions (i.e., for $k\neq l$)?
Thanks in advance!