If I define two continuous probability density functions $f(x,y)$ and $g(x,y)$ whose $L^p$ and $L^q$ norms are defined for $p>1$ and $q>1$, is it correct to say that
$\left\Vert f\right\Vert _{p}>\left\Vert g\right\Vert _{p}\quad\quad\Rightarrow\quad\quad\left\Vert f\right\Vert _{q}>\left\Vert g\right\Vert _{q}\qquad?$
In the discrete world, that is, in $l^p$ and $l^q$, a similar claim would mean that two (probability) vectors can be compared with any norm: if a vector is "greater" than another vector based on a $p$-norm measurement, then it would also be greater than that vector based on a $q$-norm measurement.
I have searched through various theoretical works but couldn't find an answer, in particular in the continuous world (which is the one I am interested in). In the discrete world ($l^p$ and $l^q$), I came across the notion of unit balls; the claim might be true and related to the fact that the unit balls for various $p$ or $q$ do not cross each other (as you can tell, I am not an expert in this).
I have also tried to check the claim numerically for discrete $l^p$ and $l^q$ norms (applied to discrete probability distributions), and my results tend to confirm it in the discrete case. I cannot find any answer in the continuous case, though.
In the case of $\ell^p$ spaces on $2$ points, a counterexample is impossible, because the entire space of probability distributions is parameterised by a single number $a\in[0,1]$,
$$ X_a = a\delta_0 + (1-a)\delta_1, $$
whose $p$-norm is given by (for $p\ge 1$)
$$ N_a(p) := \|X_a\|_{\ell^p}^p = a^p + (1-a)^p, $$
and the graphs of $N_a, N_b$ intersect only at $p=1$, unless $a=b$.

Forgetting about the continuity requirement for now, this corresponds to the piecewise constant densities defined explicitly on, e.g., $\Omega = [0,1]$,
$$ x_a = 2a\,\mathbb 1_{[0,1/2]} + 2(1-a)\,\mathbb 1_{[1/2,1]}, $$
in the sense that
$$ \|x_a\|_{L^p[0,1]}^p = N_a(p). $$
But on $[0,1]$ we can (among other things) also choose to shrink or expand the intervals $[0,1/2]$ and $[1/2,1]$; we gain a whole new dimension of parameters,
$$ x_{a,c} := \frac{a}{c}\,\mathbb 1_{[0,c]} + \frac{1-a}{1-c}\,\mathbb 1_{[c,1]}, \quad a,c\in(0,1), $$
with norms given by
$$ N_{a,c}(p) = c\left(\frac{a}{c}\right)^p + (1-c)\left(\frac{1-a}{1-c}\right)^p, $$
and it is possible to find parameter pairs $(a,c), (b,d)$ and exponents $p,q\in(1,2)$ such that
$$ N_{a,c}(p) < N_{b,d}(p), \quad N_{a,c}(q) > N_{b,d}(q). $$

Here is an interactive graph; the shaded areas for $x\in[0,1]$ are the probability densities, and the graphs for $x>1$ are the $L^x$ norms. The orange line is a plot of $\operatorname{sign}(N_{a,c}(x) - N_{b,d}(x))$, showing the sign flip in the region $(1,2)$.
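Such a sign flip can be checked directly from the formula for $N_{a,c}(p)$. A minimal sketch; the parameter pairs and exponents below are my own illustrative choices, not the ones behind the interactive graph:

```python
# Verify a sign flip for the step densities x_{a,c} on [0,1].
# N(a, c, p) is the p-th power of the L^p norm of x_{a,c}; comparing these
# values is equivalent to comparing the norms themselves, since t -> t^(1/p)
# is increasing.

def N(a, c, p):
    """N_{a,c}(p) = c (a/c)^p + (1-c) ((1-a)/(1-c))^p."""
    return c * (a / c) ** p + (1 - c) * ((1 - a) / (1 - c)) ** p

# Illustrative parameters (my own choices): both exponents lie in (1, 2).
(a, c), (b, d) = (0.5, 0.05), (0.3, 0.01)
p, q = 1.05, 1.2

print(N(a, c, p) > N(b, d, p))   # x_{a,c} is "greater" at exponent p
print(N(a, c, q) > N(b, d, q))   # ...but "smaller" at exponent q
```

The heuristic behind the choice: near $p=1$ the density with larger Kullback–Leibler divergence from the interval-length distribution wins, while for large $p$ the density with the larger supremum wins, so the two orderings can disagree.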
(the graph is more believable for larger values of $p,q$)
If you must insist on continuous functions, you can use approximating sequences of continuous functions that converge in some $L^P$ norm for some large (finite) $P > p+q$:
$$ x_{a,c,n} \to x_{a,c}, \quad x_{b,d,n} \to x_{b,d} \quad (n\to\infty). $$
Then, since by Hölder's inequality (on a probability space) $\|u\|_{L^{P_0}} \le \|u\|_{L^P}$ whenever $P_0 < P$, the above convergences hold in $L^p$ and $L^q$ as well, and at least as quickly as they do in $L^P$. This implies that we can simultaneously control the errors in all four norms:
$$ \left| \|x_{a,c,n}\|_{L^p} - N_{a,c}(p)^{1/p} \right| < \epsilon, \qquad \left| \|x_{a,c,n}\|_{L^q} - N_{a,c}(q)^{1/q} \right| < \epsilon, $$
$$ \left| \|x_{b,d,n}\|_{L^p} - N_{b,d}(p)^{1/p} \right| < \epsilon, \qquad \left| \|x_{b,d,n}\|_{L^q} - N_{b,d}(q)^{1/q} \right| < \epsilon, $$
and therefore for $n\gg1$ we again get a sign flip as we vary the exponent.
The claim should also be false in the discrete case, because a discrete distribution corresponds, at worst, to a very finely discretised grid.
And after all that, here is a counterexample for the case of random variables that take $3$ values uniformly at random (i.e. $\ell^p$ on $3$ points).
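For concreteness, a numeric check of the $3$-point discrete case; the specific vectors and exponents below are my own choices, not necessarily those of the counterexample above:

```python
# Two probability vectors on 3 points whose p-norm ordering flips.
# Heuristic: u has lower entropy (so it wins for p near 1) but a smaller
# maximum entry (so it loses for large p). Vectors and exponents are
# illustrative choices.

def lp_norm(v, p):
    """The l^p norm of a finite vector."""
    return sum(x ** p for x in v) ** (1 / p)

u = (0.42, 0.42, 0.16)   # smaller max entry, lower entropy
v = (0.50, 0.25, 0.25)   # larger max entry, higher entropy

print(lp_norm(u, 1.5) > lp_norm(v, 1.5))   # u is "greater" at p = 1.5
print(lp_norm(u, 3.0) > lp_norm(v, 3.0))   # but v is "greater" at p = 3
```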