Dual space of $L^p$ using existence of minimisers.


Exercise 14 from Terence Tao's notes on Hilbert spaces (https://terrytao.wordpress.com/2009/01/17/254a-notes-5-hilbert-spaces/) asks me to show that when $1<p<\infty$, the dual space of $L^p(\mu)$ is $L^q(\mu)$, where $\mu$ is $\sigma$-finite and $q$ is the dual exponent of $p$. He wants this done using Exercise 11 and Proposition 1, which are as follows:

Proposition 1. (Existence of minimisers) Let $H$ be a Hilbert space, let $K$ be a non-empty closed convex subset of $H$, and let $x$ be a point in $H$. Then there exists a unique $y$ in $K$ that minimises the distance $\|y-x\|$ to $x$. Furthermore, for any other $z$ in $K$, we have $\hbox{Re} \langle z-y, y-x \rangle \geq 0$.

Exercise 11. Using the Hanner inequalities (Exercise 6), show that Proposition 1 also holds for the $L^p$ spaces as long as $1 < p < \infty$. (The specific feature of the $L^p$ spaces that is allowing this is known as uniform convexity.) Give counterexamples to show that the proposition can fail for $L^1$ and for $L^\infty$.

I tried to do something similar to the proof of the Riesz representation theorem for the dual of a Hilbert space, but I have been unable to bring $L^q$ into the picture. Specifically, let $S$ be the closed subspace of $L^p$ which is the kernel of a given linear functional $\lambda$ on $L^p$. We choose a subspace $M$ such that $L^p$ is the direct sum of $M$ and $S$. Now, $M$ has to be one-dimensional, because otherwise we could find a nonzero linear combination of linearly independent vectors in $M$ on which $\lambda$ vanishes, contradicting $M\cap S=\{0\}$. I have no idea if anything can be done after this.

Another thing I tried was to consider the linear transformation $T:L^q\rightarrow(L^p)^*$ which maps $g$ to the linear functional $\lambda_g(f)=\int fg\,d\mu$. Now, we let $S$ be the closed subspace $T(L^q)$. Proposition 1 says that for any continuous linear functional $\lambda$, there exists a closest functional in $S$. Hence, I need to show that if I have a specific $\lambda_g$ as an approximation to $\lambda$, I can always make the approximation a bit better (which is weaker than showing that $S$ is dense) by considering some $\lambda_h$ such that $h$ differs from $g$ on a set of positive measure, so that it qualifies as a different $L^q$ function. That is, I need to somehow modify $g$ to reduce the quantity $\sup\{|\lambda(f)-\int fg\,d\mu|:\|f\|_p=1\}$, but I am unable to tackle the problem that if I modify $g$ keeping some specific $f$ in mind, another $f$ might increase the value and keep the supremum from decreasing.


Here is the idea you can follow. Let $\Lambda \in (L^p(\mu))'$ be given, $\Lambda\neq 0$, and fix some $f_1\in L^p(\mu)\setminus \ker\Lambda$. Since $L^p(\mu)$ is uniformly convex (see the definition below), we can find a minimiser $h_0\in \ker\Lambda$ such that
$$ \|h_0-f_1\|_p\leq \|f-f_1\|_p\quad \forall\, f\in \ker\Lambda. $$
Then we have
$$ \langle |h_0-f_1|^{p-2}(h_0-f_1),\,f \rangle =0\quad \forall\, f\in \ker\Lambda, $$
because $\varphi(t)=\|h_0-f_1+tf\|_p^p$ attains its minimum at $t=0$. We put $g_1=|h_0-f_1|^{p-2}(h_0-f_1)$, which belongs to $L^q$ (indeed $\int|g_1|^q=\int|h_0-f_1|^{(p-1)q}=\int|h_0-f_1|^p<\infty$), and recall that any $f \in L^p(\mu)$ can be written as
$$ f=\alpha (h_0-f_1)+ f_2, \qquad \alpha=\frac{\Lambda f}{\Lambda (h_0-f_1)},\quad f_2\in \ker\Lambda, $$
where $\Lambda(h_0-f_1)=-\Lambda f_1\neq 0$ since $h_0\in\ker\Lambda$. Then we have
$$ \int f \bar{g_1}\,d\mu= \int f_2 \bar{g_1}\,d\mu+\alpha\int |h_0-f_1|^p\,d\mu=\alpha\int |h_0-f_1|^p\,d\mu, $$
which implies that
$$ \Lambda f= \frac{\Lambda(h_0-f_1)}{\|h_0-f_1\|_p^p}\int f \bar g_1\,d\mu. $$
Therefore $g= \frac{\overline{\Lambda(h_0-f_1)}}{\|h_0-f_1\|_p^p} g_1$ satisfies
$$ \Lambda f =\int f \bar{g}\,d\mu \quad \forall\, f\in L^p(\mu). $$
This completes the proof.
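To see why the minimising property forces the orthogonality relation used above, one can differentiate (a sketch for real scalars; differentiation under the integral sign can be justified by dominated convergence since $1<p<\infty$):
$$ \varphi(t)=\int |h_0-f_1+tf|^p\,d\mu, \qquad \varphi'(0)=p\int |h_0-f_1|^{p-2}(h_0-f_1)\,f\,d\mu, $$
and $\varphi'(0)=0$ because $h_0+tf\in\ker\Lambda$ for all $t$, so $t=0$ is a minimum of $\varphi$.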

A normed space is called uniformly convex if for any two unit vectors $x,y$ satisfying $\|x-y\|\geq \varepsilon$ for some $\varepsilon \in (0, 1)$, there exists some $\theta \in (0, 1)$, depending on $\varepsilon$ only, such that $\left\|\frac{x+y}{2}\right\| \leq 1 - \theta$.
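As a sanity check, the recipe in the answer can be run numerically in a toy case: counting measure on two points, $p=3$, and the functional $\Lambda f=\sum_i f_i a_i$. The choices of `a`, `v`, and `f1` below are illustrative, not from the original; the minimisation over $\ker\Lambda$ is done by bisection on the derivative of $t\mapsto\|tv-f_1\|_p^p$, which is monotone. A minimal sketch:

```python
# Toy model of the answer's construction on R^2 with counting measure, p = 3.
p = 3.0
a = (1.0, 2.0)      # Lam(f) = f[0]*a[0] + f[1]*a[1]; we hope to recover a as g
v = (2.0, -1.0)     # basis vector of ker(Lam): v . a = 0
f1 = (1.0, 0.0)     # f1 outside ker(Lam), since Lam(f1) = 1 != 0

def phi_prime(t):
    """Derivative of t -> ||t*v - f1||_p^p; increasing, so bisection applies."""
    u = (t * v[0] - f1[0], t * v[1] - f1[1])
    return sum(p * abs(ui) ** (p - 2) * ui * vi for ui, vi in zip(u, v))

# Bisection for the unique root of phi', giving the minimiser h0 = t* v.
lo, hi = -1.0, 1.0          # phi_prime(-1) < 0 < phi_prime(1)
for _ in range(200):
    mid = (lo + hi) / 2
    if phi_prime(mid) < 0:
        lo = mid
    else:
        hi = mid
t_star = (lo + hi) / 2

u = (t_star * v[0] - f1[0], t_star * v[1] - f1[1])    # u = h0 - f1
g1 = tuple(abs(ui) ** (p - 2) * ui for ui in u)       # |h0-f1|^{p-2}(h0-f1)
Lam_u = sum(ui * ai for ui, ai in zip(u, a))          # Lam(h0 - f1)
norm_p = sum(abs(ui) ** p for ui in u)                # ||h0 - f1||_p^p
g = tuple(Lam_u / norm_p * g1i for g1i in g1)         # representing function

print(g)  # approximately (1.0, 2.0), i.e. g recovers a
```

In this real, two-point setting the representing function $g$ produced by the recipe coincides (up to numerical error) with the vector $a$ that defines $\Lambda$, which is exactly what the duality $(L^p)^*\cong L^q$ predicts.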
