Proof of Lemma 3.2 in Brezis, why is it correct?


Lemma 3.2, pg 64, Functional Analysis, Brezis:

Let $X$ be a vector space and let $\varphi, \varphi_1, \ldots, \varphi_k$ be $k+1$ linear functionals on $X$ such that, for all $u \in X$, $$ [\varphi_i(u)=0, \ \forall i=1, \ldots, k] \Rightarrow \varphi(u) =0. $$ Then there exist constants $\lambda_1, \ldots, \lambda_k$ such that $\varphi = \sum_{i=1}^k \lambda_i \varphi_i$.

The proof:

Consider the map $F:X \rightarrow \mathbb{R}^{k+1}$, $F(u)=(\varphi(u), \varphi_1(u), \ldots, \varphi_k(u))$. It follows from the hypothesis that $a=(1,0,\ldots, 0) \notin R(F)$: if $F(u)=a$ then $\varphi_i(u)=0$ for all $i$ while $\varphi(u)=1$. Thus one can strictly separate $\{a\}$ and $R(F)$ by some hyperplane.

The version of strict separation (p. 7) in the book requires two nonempty convex sets $A$, $B$ with $A$ compact and $B$ closed. It is unclear to me how this is satisfied.


Accepted answer:

We know that $a \notin R(F)$. Note that $R(F)$ is a linear subspace of $\mathbb{R}^{k+1}$, hence convex and (being finite-dimensional) closed, while $\{a\}$ is a compact convex set. Hence there is some linear functional $l$ that strictly separates $\{a\}$ and $R(F)$.

We can assume that $l(a) >0$ and $l(x) \le 0 $ for all $x \in R(F)$. Since $R(F)$ is a subspace, we in fact have $l(x) = 0$ for all $x \in R(F)$: if $l(x) \ne 0$ for some $x \in R(F)$, then $l(tx) = t\,l(x)$ would take arbitrarily large positive values as $t$ ranges over $\mathbb{R}$, contradicting $l \le 0$ on $R(F)$.

Writing $l=(l_1,\ldots,l_{k+1})$, the condition $l(a) = l(e_1) > 0$ gives $l_1 > 0$.

Since $l(F(u)) = 0$ for all $u \in X$, we get $l_1 \varphi(u) + \sum_{i=2}^{k+1} l_i \varphi_{i-1}(u) = 0$, i.e. $\varphi = -{1 \over l_1}\sum_{i=2}^{k+1} l_i \varphi_{i-1}$, which is the desired result (letting $\lambda_i = -{l_{i+1} \over l_1}$).
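As a quick numerical sanity check of the lemma's conclusion (not part of the answer above; the concrete functionals below are made up for illustration, with $X = \mathbb{R}^3$ and $k = 2$):

```python
import numpy as np

# Hypothetical example: linear functionals on R^3 given by inner products.
phi1 = np.array([1.0, 0.0, 1.0])   # varphi_1(u) = <phi1, u>
phi2 = np.array([0.0, 1.0, -1.0])  # varphi_2(u) = <phi2, u>
phi  = 2.0 * phi1 - 3.0 * phi2     # varphi lies in span{varphi_1, varphi_2}

# The hypothesis holds: the common kernel of varphi_1, varphi_2 is
# spanned by (-1, 1, 1), and varphi vanishes there too.
u = np.array([-1.0, 1.0, 1.0])
assert abs(phi1 @ u) < 1e-12 and abs(phi2 @ u) < 1e-12
assert abs(phi @ u) < 1e-12

# Recover the coefficients lambda_i predicted by the lemma via least squares.
A = np.column_stack([phi1, phi2])
lam, *_ = np.linalg.lstsq(A, phi, rcond=None)
print(lam)  # approximately [2, -3]
```

Here the least-squares solve recovers exactly the coefficients used to build $\varphi$, as the lemma guarantees they exist.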

Second answer:

Assume that $\{\varphi_1, \ldots, \varphi_k\}$ is linearly independent. After noticing that $a \notin R(F)$, you can proceed like this:

Clearly then $R(F) \ne \mathbb{R}^{k+1}$ so there exists a nontrivial linear functional on $\mathbb{R}^{k+1}$ which vanishes on $R(F)$ (this follows from Hahn-Banach theorem, for example).

Hence, there exists $(\lambda, \lambda_1, \ldots, \lambda_k) \in \mathbb{R}^{k+1} \setminus \{0\}$ such that

$$\lambda v + \sum_{i=1}^k \lambda_i v_i = 0, \quad\forall (v, v_1, \ldots, v_k) \in R(F)$$

which implies

$$\lambda \varphi(x) + \sum_{i=1}^k\lambda_i \varphi_i(x) = 0, \ \forall x \in X \implies \lambda \varphi = -\sum_{i=1}^k \lambda_i\varphi_i$$

Since $\{\varphi_1, \ldots, \varphi_k\}$ is linearly independent, we cannot have $\lambda = 0$ (otherwise $\sum_{i=1}^k \lambda_i \varphi_i = 0$ with not all $\lambda_i$ zero), so $\varphi = -\sum_{i=1}^k \frac{\lambda_i}{\lambda}\varphi_i$.

If $\{\varphi_1, \ldots, \varphi_k\}$ is not linearly independent, assume (after relabeling) that $\{\varphi_1, \ldots, \varphi_r\}$ is linearly independent and $\varphi_{r+1}, \ldots, \varphi_k$ are linear combinations of them. Then $ \bigcap_{i=1}^r \ker \varphi_i \subseteq \ker \varphi_{j}$ for all $j=r+1, \ldots, k$, so in fact $\bigcap_{i=1}^r \ker \varphi_i = \bigcap_{i=1}^k \ker \varphi_i$. Therefore, by the first part of the proof applied to $\varphi_1, \ldots, \varphi_r$, $\varphi$ is a linear combination of $\varphi_1, \ldots, \varphi_r$.

Edit:

Clearly $\bigcap_{i=1}^k \ker\varphi_{i} \subseteq \bigcap_{i=1}^r \ker\varphi_{i}$. Conversely, since $\bigcap_{i=1}^r \ker \varphi_i \subseteq \ker \varphi_j$ for $j = r+1, \ldots, k$, we have

$$\bigcap_{i=1}^r \ker \varphi_i \subseteq \bigcap_{i=1}^r \ker \varphi_i \cap \ker \varphi_{r+1} \cap \cdots \cap \ker\varphi_{k} = \bigcap_{i=1}^k \ker\varphi_{i}$$

Thus $\bigcap_{i=1}^k \ker\varphi_{i} =\bigcap_{i=1}^r \ker\varphi_{i}$.

Now the assumption is $\bigcap_{i=1}^k \ker\varphi_{i} = \bigcap_{i=1}^r \ker\varphi_{i} \subseteq \ker \varphi$, and $\{\varphi_1, \ldots, \varphi_r\}$ is linearly independent, so the first part of the proof implies that $\varphi$ is a linear combination of $\varphi_1, \ldots, \varphi_r$; in particular $\varphi$ is a linear combination of $\varphi_1, \ldots, \varphi_k$.
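The reduction in the dependent case can also be illustrated numerically (again not part of the original answer; the functionals below are invented, with $\varphi_3 = \varphi_1 + \varphi_2$, so $r = 2$):

```python
import numpy as np

# Hypothetical functionals on R^4; varphi_3 is a linear combination of
# the first two, so {varphi_1, varphi_2, varphi_3} is linearly dependent.
phi1 = np.array([1.0, 0.0, 0.0, 2.0])
phi2 = np.array([0.0, 1.0, 1.0, 0.0])
phi3 = phi1 + phi2
phi  = 4.0 * phi1 - 1.0 * phi2     # varphi satisfies the kernel hypothesis

# Same row rank means the common kernel of all three functionals equals
# the common kernel of the independent pair, as argued in the edit above.
assert np.linalg.matrix_rank(np.vstack([phi1, phi2, phi3])) == \
       np.linalg.matrix_rank(np.vstack([phi1, phi2]))

# varphi is recoverable as a combination of the independent pair alone.
lam, *_ = np.linalg.lstsq(np.column_stack([phi1, phi2]), phi, rcond=None)
print(np.round(lam, 6))  # approximately [4, -1]
```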