Best Constant for Sobolev-Type Inequality


I am currently reading Del Pino and Dolbeault's paper *Best constants for Gagliardo–Nirenberg inequalities and applications to nonlinear diffusions*, which concerns optimal constants in GNS inequalities.

The authors want to prove the following GNS inequality: $$\lVert w\rVert_{2p}\leq A\lVert \nabla w\rVert_{2}^{\theta}\lVert w\rVert_{p+1}^{1-\theta},$$ where $\theta$ is given and $A$ is the best possible constant.

In order to prove this, they state another theorem (Theorem 4 in the paper), which is in fact equivalent to the theorem involving the GNS inequality above. This theorem concerns the minimization of a certain functional under an integral constraint. The functional is: $$G(w)=\frac{1}{2}\int_{\mathbb{R}^{d}}|\nabla w|^2dx+\frac{1}{p+1}\int_{\mathbb{R}^{d}}|w|^{p+1}dx$$ and the integral constraint is: $$\frac{1}{2p}\int_{\mathbb{R}^{d}}|w|^{2p}dx=J_{\infty},$$ where $J_{\infty}$ is given.

The proof of this theorem and the way it implies the GNS inequality are fine for me.

My question is rather why the authors want to minimize the above functional in the first place. Why exactly this approach? I know that it is hard to guess what someone else was thinking, but maybe someone can explain to me the intuition behind this idea. What does the above GNS inequality have to do with the minimization of $G(w)$?

Is this kind of argument also helpful for proving other types of inequalities, or does it only work here?

http://capde.cmm.uchile.cl/files/2015/06/pino2002.pdf


There is 1 best solution below


First observe that the GNS inequality $$ \lVert w \rVert_{2p} \leq A \lVert \nabla w \rVert_2^{\theta} \lVert w \rVert_{p+1}^{1-\theta} \tag{$\dagger$}$$ is equivalent to $$ \frac1{2p}\int_{\Bbb R^d} |w|^{2p}\,\mathrm{d}x \leq \frac{A}2 \int_{\Bbb R^d} |\nabla w|^2 \,\mathrm{d}x + \frac{A}{p+1} \int_{\Bbb R^d} |w|^{p+1} \,\mathrm{d}x \tag{$\ddagger$} $$ with the same constant $A.$ Indeed $(\dagger) \implies (\ddagger)$ follows by Young's inequality, while the converse follows by considering $(\ddagger)$ with $\lambda w$ and extremising in $\lambda.$ This is largely what the authors do in the proof of Theorem 1, though they use a different scaling (which may be necessary to get the optimal constant, but I haven't checked the details).
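To see why a rescaling is needed, it helps to record how the three integrals transform under the two-parameter family $w_{\lambda,\mu}(x) = \lambda\, w(\mu x)$ (my notation, not the paper's):

$$
\begin{aligned}
\int_{\Bbb R^d} |\nabla w_{\lambda,\mu}|^2 \,\mathrm{d}x
  &= \lambda^2 \mu^{2-d} \int_{\Bbb R^d} |\nabla w|^2 \,\mathrm{d}x,\\
\int_{\Bbb R^d} |w_{\lambda,\mu}|^{p+1} \,\mathrm{d}x
  &= \lambda^{p+1} \mu^{-d} \int_{\Bbb R^d} |w|^{p+1} \,\mathrm{d}x,\\
\int_{\Bbb R^d} |w_{\lambda,\mu}|^{2p} \,\mathrm{d}x
  &= \lambda^{2p} \mu^{-d} \int_{\Bbb R^d} |w|^{2p} \,\mathrm{d}x.
\end{aligned}
$$

With the paper's choice of $\theta$, both sides of $(\dagger)$ scale the same way under this family, so $(\dagger)$ is scale-invariant; the three terms in $(\ddagger)$ scale differently, and optimising $(\ddagger)$ over $(\lambda,\mu)$ is what turns the sum on its right-hand side into the product on the right-hand side of $(\dagger)$.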

Now to prove the assertion, the second form $(\ddagger)$ is more convenient, as it is in some sense linear. We once again use the fact that the problem is homogeneous under scaling to see that it is equivalent to proving that $$ \frac{K}{A} \leq G(w) = \frac12 \int_{\Bbb R^d} |\nabla w|^2 \,\mathrm{d}x + \frac1{p+1} \int_{\Bbb R^d} |w|^{p+1} \,\mathrm{d}x, $$ subject to the constraint $$ \frac1{2p} \int_{\Bbb R^d} |w|^{2p} \,\mathrm{d}x = K. $$ Thus determining the sharp constant is equivalent to minimising $G(w)$ subject to the above constraint. The advantage of this formulation is that we can now show that there exists a $w$ which attains the minimum.
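Spelling out this last step: the right-hand side of the second form is exactly $A\,G(w)$, so once the constraint freezes the left-hand side at the value $K$, the sharp constant is characterised by a constrained minimisation,

$$
A^{-1} \;=\; \inf\Big\{ \tfrac{1}{K}\, G(w) \;:\; \tfrac{1}{2p}\int_{\Bbb R^d} |w|^{2p}\,\mathrm{d}x = K \Big\},
\qquad\text{equivalently}\qquad
\inf\Big\{ G(w) \;:\; \tfrac{1}{2p}\int_{\Bbb R^d} |w|^{2p}\,\mathrm{d}x = K \Big\} \;=\; \frac{K}{A}.
$$

This is just rearranging the inequality, but it makes explicit why a lower bound $K/A \leq G(w)$ on the constraint set is the same statement as the sharp inequality.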

Why is this useful? In general, determining optimal constants can be rather difficult: for a Sobolev-type inequality, you can imagine that directly verifying that a particular constant works would be very lengthy. However, by converting it to a variational problem one can show (by general methods) that a minimiser exists, and then use symmetry properties and differential constraints to determine the structure of any such minimiser. Further, if we can determine the minimiser uniquely (up to symmetry), then we can use it to compute the constant $A.$
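As an illustration of the "differential constraints": formally computing the first variation of $G$ and of the constraint functional, a minimiser (taken nonnegative) must satisfy, for some Lagrange multiplier $C > 0$, the semilinear elliptic equation

$$
-\Delta w + w^{p} \;=\; C\, w^{2p-1} \quad \text{in } \Bbb R^d, \qquad w > 0.
$$

This is a standard formal computation rather than a quotation from the paper, but it shows the mechanism: one then classifies the (radial, decaying) solutions of this ODE/PDE and evaluates the integrals at the extremal to read off $A$; if I recall the paper correctly, the extremals turn out to be of the form $w(x) = (1+|x|^2)^{-1/(p-1)}$ up to the symmetries of the problem.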

Not sure if this entirely answers your doubts, but this technique of finding optimal constants by transforming the inequality into a minimisation problem is fairly common.