Trying to understand an application of Hahn Banach extension theorem


In my lectures we were proving some features of metric characterisations of non-reflexivity of a Banach space.

Now one of the statements that we proved was that if $X$ is non-reflexive, then $\forall \theta \in (0,1), \ \exists$ a sequence $(x_i)_{i=1}^\infty \subset B_X$ and $(f_i)_{i=1}^\infty \subset B_{X^*}$ such that $f_i(x_j) = \theta$ if $i \leq j$ and is $0$ otherwise.
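As a concrete illustration of this pattern (my own example, not from the lectures): in the non-reflexive space $X = c_0$, taking $x_j = e_1 + \cdots + e_j$ (sup-norm $1$) and $f_i = \theta e_i^* \in \ell_1 = c_0^*$ (norm $\theta < 1$) gives $f_i(x_j) = \theta (x_j)_i$, which is $\theta$ for $i \leq j$ and $0$ otherwise. A quick numerical sanity check of the truncated pattern:

```python
import numpy as np

# Illustration (my own example): in c_0, take x_j = e_1 + ... + e_j and
# f_i = theta * e_i^* in l_1 = c_0^*. Then f_i(x_j) = theta * (x_j)_i,
# which is theta for i <= j and 0 otherwise.
theta = 0.5
N = 6  # truncation: check the pattern for indices 1..N

# x[j] represents x_{j+1} = e_1 + ... + e_{j+1}, stored as a length-N vector
x = [np.concatenate([np.ones(j), np.zeros(N - j)]) for j in range(1, N + 1)]
# f[i] represents f_{i+1} = theta * e_{i+1}^*
f = [theta * np.eye(N)[i] for i in range(N)]

# Gram matrix G[i, j] = f_{i+1}(x_{j+1})
G = np.array([[f[i] @ x[j] for j in range(N)] for i in range(N)])

# The claimed pattern: theta on and above the diagonal (i <= j), 0 below
assert np.allclose(G, theta * np.triu(np.ones((N, N))))
```

Of course this only illustrates the statement for one non-reflexive space; the theorem says it happens in every non-reflexive space.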

Now the start of the proof apparently uses Hahn Banach, but I am not exactly sure how:

Since $X$ is a proper closed subset of $X^{**}$, then $\exists T\in X^{***}$ such that $T|_X = 0$ and $||T|| = 1$.

I apologise if I have missed a subtlety in one of the many versions of the Hahn Banach extension theorem I have come across, but as far as I can tell the one that has been applied is:

For a real vector space $X$ and a subspace $Y \subset X$, given $g\in Y^*, \ \exists \ f \in X^*$ such that $f|_Y = g$ and $||f||=||g||$.

It is the last part that is confusing me. It seems that the linear map $T\in X^{***}$ in the proof has been chosen to restrict to the zero map on $X$, but then the norm should be $0$ also by the Hahn Banach. I have not seen a version of the Hahn Banach that would allow us to extend a zero linear map to some non-zero normed map on the whole space.

EDIT:

As it was requested, I am giving the whole theorem and proof given in lectures below. This was taught as part of a course on metric embeddings, and was in the section on the Ribe program. The ultimate goal in this section was to obtain a metric characterisation of superreflexivity.

Note that I am in fact still puzzled by the original question. A comment pointed to a text which I do not have access to. The fundamental issue is that the application of Hahn Banach seems to be extending the zero map defined on the proper subspace $X\subset X^{**}$ (for non-reflexive X). But the zero map has vanishing norm. And as far as I can see, every Hahn Banach extension preserves the norm. Here goes with the theorem anyway...

Theorem 1: Let $X$ be a Banach space. The following are equivalent:

i)$X$ is non-reflexive

ii) $\forall \theta \in (0,1), \exists (x_i)_{i=1}^{\infty} \subset B_X$ and $(f_i)_{i=1}^{\infty} \subset B_{X^*}$ such that $f_i(x_j) = \theta$ if $i\leq j$ and is $0$ otherwise.

iii) $\exists \theta \in (0,1)$ such that the above holds.

iv) $\forall \theta \in (0,1), \ \exists \ (x_i)_{i=1}^\infty \subset B_X$ such that $\forall n\in \mathbb{N}, \ d(conv\{x_1,...,x_n\}, conv\{x_{n+1},...\})\geq \theta$, where 'conv' denotes the convex hull.

v) $\exists \theta \in (0,1)$ such that the above holds.

Proof:

The proof makes use of the Hahn Banach extension theorem and heavy use of Lemma 2 below. We also use Lemmas 3 and 4 at the end. The proof of Lemma 2 uses only Hahn Banach; that of Lemma 3 is easy; and that of Lemma 4 uses the Goldstine and Banach Alaoglu theorems.

Lemma 2:

Let $\phi \in X^{**}$ with $||\phi ||<M$, and let $E\subset X^*$ with $\dim E < \infty$; then $\exists x \in X$ such that $\hat{x}(f)=\phi(f) \ \forall f \in E$ and $||x||<M$.

Lemma 3:

For $C$ a convex subset of a Banach space $X$, $C$ is $||.||$-closed (norm closed) iff it is $w$-closed (closed in the weak topology).

Lemma 4:

$X$ is reflexive iff $(B_X, w)$ is compact.

First we show i)$\implies$ ii). The argument is by induction.

First (and this is the claim I do not understand), by Hahn Banach we have a linear map $T\in X^{***}$ such that $T|_X = 0 , ||T|| = 1$. Then fix $\theta \in (0,1)$. Since $||T|| = \sup_{\phi\in B_{X^{**}}}|T(\phi)| = 1 > \theta$, we have a $\phi \in B_{X^{**}}$, and specifically with $||\phi||<1$ (in general our balls are closed here), such that $\theta < \lambda := T(\phi) \leq ||T||\,||\phi|| = ||\phi|| <1$.

Now since $\theta <||\phi||<1$, we have an $f_1\in B_{X^*}$ such that $\phi(f_1) = \theta$, and moreover, using Lemma 2, we have an $x_1\in B_X$ with $||x_1||<1$ which agrees with $\phi$ on $f_1$, that is, $f_1(x_1)=\theta$.

Now, assume we have $(x_i)_{i=1}^n$ and $(f_i)_{i=1}^n$ which satisfy the condition in ii), and also that $\phi(f_i)=\theta \ \forall i$. Recall $T(\hat{x_i})=0 \ \forall i$, $T(\phi)=\lambda$, and $||T||=1 < \frac{\lambda}{\theta}$. So now we apply Lemma 2 to $T$ (one dual level up, with $X^*$ in place of $X$ and $E = span\{\phi, \hat{x_1},...,\hat{x_n}\}$): we get a $g\in X^*$ such that $||g||<\frac{\lambda}{\theta}$, $\phi(g) = \lambda$, and $g(x_i) = 0$. So now letting $f_{n+1} = \frac{\theta}{\lambda}g$ gives $||f_{n+1}||<1$, $\phi(f_{n+1})=\theta$ and $f_{n+1}(x_i)=0$ for $i\in \{1,2,...,n\}$. We obtain the required $x_{n+1}$ by applying Lemma 2 to $\phi$ with $E = span\{f_1,...,f_{n+1}\}$: we get an $x_{n+1}\in B_X$ with $||x_{n+1}||<1$ and $f_i(x_{n+1}) = \phi(f_i) = \theta$ for all $i\leq n+1$. Continue inductively.

Now ii) $\implies$ iii) is obvious, and so is iv) $\implies$ v). We show ii) $\implies$ iv) and iii) $\implies$ v), and then v) $\implies$ i). Then we are done.

Fix $\theta \in (0,1)$, whether it is the arbitrary one in ii) or the one claimed to exist in iii). We have $(x_i)\subset B_X$ and $(f_i)\subset B_{X^*}$ such that the conditions stated in ii) hold. Take some $n\in \mathbb{N}$, and take arbitrary elements of the two convex hulls, i.e. $\Sigma_{i=1}^n t_ix_i$ and $\Sigma_{i=n+1}^\infty t_ix_i$ with $t_i\geq 0$ and $\Sigma t_i = 1$ in each sum, and only finitely many of the $t_i$ non-zero. Then since $||f_{n+1}||\leq 1$ we have

$||\Sigma_{i=1}^nt_ix_i - \Sigma_{i=n+1}^\infty t_ix_i|| \ \geq \ |f_{n+1}(\Sigma_{i=1}^nt_ix_i - \Sigma_{i=n+1}^\infty t_ix_i)| = |0 - \Sigma_{i=n+1}^\infty \theta t_i| = \theta$,

using $f_{n+1}(x_i) = 0$ for $i\leq n$ and $f_{n+1}(x_i) = \theta$ for $i\geq n+1$.

In fact, I would use a strict $>$ in the above for a stronger statement, but we used $\geq$. Perhaps it has to do with whether we allow infinite sums...

Now for v) $\implies$ i). We assume v) and assume X is reflexive to obtain a contradiction. Let $C_n$ be the convex hull $conv\{x_{n+1},x_{n+2},...\}$ which is a convex subset of $B_X$. Let $\overline{C_n}$ denote the norm closure of $C_n$ (so it is norm-closed and convex). By lemma 3, we thus have that $\overline{C_n}$ is weak-closed $\forall n$.

Now note that $\overline{C_1} \supset \overline{C_2} \supset ...$, so the intersection of finitely many of these is non-empty. Thus from the assumption that $X$ is reflexive, and from Lemma 4, which gives us that $(B_X, w)$ is compact, the finite intersection property of the weakly closed sets $\overline{C_n}$ gives $\cap \overline{C_n} \neq \emptyset$, and hence we have some $x\in \cap \overline{C_n} \subset B_X$.

Now take the $\theta \in (0,1)$ which we obtain from v). Since $x\in \overline{C_1}$, we have some $y\in C_1$ such that $||x-y||<\frac{\theta}{3}$ *(this is another point that I'm not seeing, though I suspect it is 'obvious')*.

Choose $n$ such that $y\in conv\{x_1,x_2,...,x_n\}$ (possible since $y$ is a convex combination of finitely many of the $x_i$). Since $x \in \overline{C_n}$, then $\exists z\in C_n$ such that $||x-z||<\frac{\theta}{3}$. But! Then using the assumption from v) again,

$\theta \leq d(conv\{x_1,...,x_n\}, conv\{x_{n+1},...\}) \leq ||y-z|| \leq ||y-x|| + ||x-z|| < \frac{2\theta}{3}$,

a contradiction.

And we are done.

Hope this is interesting to you -- I really like the proof; but my problem still remains with the application of Hahn Banach, and the tiny point towards the end with the convex subsets that I put in italics.

EDIT#2

Requested proof for Lemma 2:

Let $f_1,f_2,...,f_n \in X^*$ be a basis for $E$. Let $T:X\rightarrow \mathbb{R}^n$ be the linear map $T(x)=(f_i(x))_{i=1}^n$. Also, let $C=\{ Tx : ||x||<M\}$.

Clearly, $T$ is continuous. First, we show that $T$ is onto. Suppose the contrary. Then $T(X)$ is a proper subspace of $\mathbb{R}^n$, so $\exists \mathbf{a} = (a_1,a_2,...,a_n) \in \mathbb{R}^n\setminus\{0\}$ orthogonal to $T(X)$. But then $\Sigma_{i=1}^n a_if_i(x) = \langle \mathbf{a}, Tx\rangle = 0 \ \forall x\in X$, i.e. $\Sigma_{i=1}^n a_if_i = 0$, which contradicts $\{f_i\}$ being a basis.

Since $T$ is a continuous, linear, surjective map between Banach spaces, we can use the Open Mapping Theorem to say that $T$ is open, and hence $C$ is open in $\mathbb{R}^n$. Moreover, $C$ is clearly convex. Now we want to show that $(\phi(f_i))_{i=1}^n \in C$. Assume the contrary. We use the (disjoint point and open convex set) version of the Hahn Banach separation theorem to say that $\exists g\in S_{(\mathbb{R}^n)^*}$ such that $g\bigl((\phi(f_i))_{i=1}^n\bigr) > g(\mathbf{y}) \ \forall \mathbf{y}\in C$. That is, letting $g=(b_1,...,b_n) \neq \mathbf{0}$, we have $\Sigma b_if_i(x) < \Sigma b_i\phi(f_i) = \phi(\Sigma b_if_i) \ \forall \ ||x||<M$.

Now we take norms. Regarding $\Sigma b_if_i$ as a bounded linear functional $X\rightarrow \mathbb{R}$ with $||\Sigma b_if_i|| := \sup\{|\Sigma b_if_i(x)| : x\in S_X\}$, the supremum of the left-hand side over $||x||<M$ is $M||\Sigma b_if_i||$, while the right-hand side satisfies $\phi(\Sigma b_if_i) \leq ||\phi||\,||\Sigma b_if_i||$. So $M||\Sigma b_if_i|| \leq ||\phi||\,||\Sigma b_if_i||$, and since $\Sigma b_if_i\neq \mathbf{0}$ we have $M\leq ||\phi||$, a contradiction.

1 Answer (best answer):

We are assuming that $X$ is not reflexive. Then $J(B_X)$ is not dense in $B_{X^{**}}$ and hence $J(X)$ is not dense in $X^{**}$ (see Remark 15, chapter 3 of Brezis). Then, by Corollary 1.8 of the same book, there exists such a $T$.

Regarding your second question: Recall that if $x \in \overline{C_1}$ then $d(x, C_1) = 0$, so there are points $y \in C_1$ with $||x-y||$ as small as you like.
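To spell out why there is no conflict with norm preservation (this is my reconstruction of the standard argument, essentially what Brezis's Corollary 1.8 packages): the functional being extended is not the zero map on $J(X)$, but a nonzero map on a strictly larger subspace, and it is *that* functional's norm that Hahn Banach preserves.

```latex
% Sketch: existence of T in X^{***} with T|_{J(X)} = 0 and ||T|| = 1,
% assuming X is non-reflexive, so J(X) is a proper closed subspace of X^{**}
% (closed because J is an isometry and X is complete).
\begin{proof}[Sketch]
Pick $\psi \in X^{**} \setminus J(X)$ and set
$d = \operatorname{dist}(\psi, J(X)) > 0$ (positive since $J(X)$ is closed).
On the subspace $Y = J(X) \oplus \mathbb{R}\psi$ define
\[
  T_0(\hat{x} + t\psi) = t\,d .
\]
Then $T_0|_{J(X)} = 0$, and for $t \neq 0$,
\[
  |T_0(\hat{x} + t\psi)| = |t|\,d
  \le |t|\,\bigl\|\psi + \tfrac{1}{t}\hat{x}\bigr\|
  = \|\hat{x} + t\psi\|,
\]
so $\|T_0\| \le 1$; choosing $\hat{x}$ with $\|\psi - \hat{x}\|$ arbitrarily
close to $d$ shows $\|T_0\| = 1$. Hahn--Banach now extends $T_0$ to
$T \in X^{***}$ with the same norm. The extension theorem is applied to
$T_0$ on $Y$, which is \emph{not} the zero functional, so there is no
contradiction with the norm being preserved.
\end{proof}
```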