Doubt about the optimal constant in a functional inequality


Let $u\in L^2(\mathbb{R}^N)$ be smooth enough and sufficiently decaying, and define $F(x)=\vert u(x)\vert^2\, x$. By the divergence theorem, $$ \int_{\mathbb{R}^N} \operatorname{div} F(x)\, dx = 0 \;\Longrightarrow\; \int_{\mathbb{R}^N} \Big( \big(\overline{u}(x)\nabla u(x) + u(x)\nabla\overline{u}(x)\big)\cdot x + N\,|u(x)|^2 \Big)\, dx = 0, $$ or, equivalently, $$ N\int_{\mathbb{R}^N}|u(x)|^2\, dx = -2\,\mathrm{Re} \int_{\mathbb{R}^N}\big(\overline{u}(x)\nabla u(x)\big)\cdot x\, dx. $$ Taking the modulus and applying the Cauchy–Schwarz (Hölder) inequality, one gets $$ N\int_{\mathbb{R}^N}|u(x)|^2\, dx \leq 2 \left( \int_{\mathbb{R}^N} \vert u(x)\vert^2 \vert x\vert^2\, dx\right)^{1/2}\left( \int_{\mathbb{R}^N} \vert \nabla u(x)\vert^2\, dx\right)^{1/2}, $$ that is, $$ \int_{\mathbb{R}^N}|u(x)|^2\, dx \leq \frac{2}{N} \left( \int_{\mathbb{R}^N} \vert u(x)\vert^2 \vert x\vert^2\, dx\right)^{1/2}\left( \int_{\mathbb{R}^N} \vert \nabla u(x)\vert^2\, dx\right)^{1/2}. $$

I want to find the function or functions for which equality holds in the above inequality. From the proof, it seems that I must require that $\overline{u}\nabla u = u\nabla\overline{u}$ (equivalently, that $\overline{u}(x)\nabla u(x)$ lies in $\mathbb{R}^N$), and that equality holds in the Cauchy–Schwarz step, that is, $\vert u(x)\vert^2\vert x\vert^2 = \lambda\,|\nabla u(x)|^2$ a.e. on $\mathbb{R}^N$ for some $\lambda\geq 0$.
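For what it's worth, here is a quick symbolic sanity check of the identity and of the (strict) inequality in dimension $N=1$, a sketch using sympy; the test function $u(x)=1/(1+x^2)$ is just an arbitrary real, decaying, non-Gaussian choice:

```python
# Sanity check of the identity N*int(|u|^2) = -2 Re int((conj(u) grad u) . x)
# and of the resulting inequality, in dimension N = 1, for an arbitrary
# real, decaying, non-Gaussian test function u(x) = 1/(1+x^2).
import sympy as sp

x = sp.symbols('x', real=True)
N = 1
u = 1 / (1 + x**2)

L2 = sp.integrate(u**2, (x, -sp.oo, sp.oo))                  # int |u|^2 dx
rhs_id = -2 * sp.integrate(u * sp.diff(u, x) * x, (x, -sp.oo, sp.oo))
print(sp.simplify(N * L2 - rhs_id))                          # 0: the identity holds

Xmom = sp.integrate(x**2 * u**2, (x, -sp.oo, sp.oo))         # int |x|^2 |u|^2 dx
Grad = sp.integrate(sp.diff(u, x)**2, (x, -sp.oo, sp.oo))    # int |u'|^2 dx
print(N * L2 < 2 * sp.sqrt(Xmom * Grad))                     # True: strict inequality
```

For this non-Gaussian test function the inequality is strict, which is consistent with the equality conditions above.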

Am I wrong or missing something?

Thanks in advance!



BEST ANSWER


Anyway, good find: Gaussians $u(x)=Ce^{-|\mu x|^2}$, with $C\in\mathbb{C}$ and $\mu\in\mathbb{R}$, extremize that inequality. They are also solutions of the differential equation $|u(x)|^2|x|^2=\lambda\,|\nabla u(x)|^2$, so they satisfy both equality conditions identified in the question.
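To spell out the verification: with $u(x)=Ce^{-|\mu x|^2}$ and $\mu\neq 0$, one has $\nabla u(x)=-2\mu^2 x\,u(x)$, hence $$ \overline{u}(x)\nabla u(x)=-2\mu^2 x\,|u(x)|^2\in\mathbb{R}^N \quad\text{and}\quad |\nabla u(x)|^2=4\mu^4|x|^2|u(x)|^2, $$ so both equality conditions hold, the second with $\lambda=1/(4\mu^4)$.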

I think that Gaussians are the only radial, smooth extremizers, because in the radial case the differential equation reduces to $$ r^2u^2=\lambda\,(u')^2, $$ and the inequality is scale invariant, so we can apply a scaling transformation to reduce to the case $\lambda=1$. In other words, up to scaling, all extremizers must satisfy the differential equation with $\lambda=1$. Evaluating that equation at $r=0$ forces $u'(0)=0$, consistent with smoothness at the origin. We are then left with a single first-order differential equation whose solution is determined by the value $u(0)$, as spelled out below.
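Spelling out that last step (a sketch, assuming $u>0$ so that we may divide): taking square roots in $r^2u^2=(u')^2$ gives $u'=\pm r\,u$, and decay at infinity rules out the plus sign, so $$ \frac{u'(r)}{u(r)}=-r \quad\Longrightarrow\quad \log\frac{u(r)}{u(0)}=-\frac{r^2}{2} \quad\Longrightarrow\quad u(r)=u(0)\,e^{-r^2/2}, $$ which is indeed a Gaussian.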

Thus, up to scaling and a multiplicative constant, the only radial, smooth extremizer is the Gaussian.

EDIT. This inequality is exactly the uncertainty inequality of Fourier analysis:

https://en.wikipedia.org/wiki/Fourier_transform#Uncertainty_principle

In dimension $1$ the only extremizers are Gaussians, and I think the same holds in every dimension.
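To make the connection explicit, with the convention $\hat u(\xi)=\int_{\mathbb{R}^N}u(x)e^{-2\pi i x\cdot\xi}\,dx$ (other conventions only change the constants), Plancherel's theorem gives $\int_{\mathbb{R}^N}|\nabla u|^2\,dx = 4\pi^2\int_{\mathbb{R}^N}|\xi|^2|\hat u(\xi)|^2\,d\xi$, so the inequality reads $$ N\int_{\mathbb{R}^N}|u(x)|^2\,dx \leq 4\pi\left(\int_{\mathbb{R}^N}|x|^2|u(x)|^2\,dx\right)^{1/2}\left(\int_{\mathbb{R}^N}|\xi|^2|\hat u(\xi)|^2\,d\xi\right)^{1/2}, $$ which is the Heisenberg uncertainty inequality for the pair $(u,\hat u)$.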