Claim: Let $f,g \in \mathcal{C}^1[a,b]$ be such that:
- $f(a) = f(b) = g(a) = g(b) = 0$
- $g(x) > 0$ on $(a,b)$
- $g'(a) > 0$
- $g'(b) < 0 \newcommand{\o}{\Omega} \newcommand{\oo}{\overline{\Omega}} \newcommand{\pp}{\partial_{\mathbf{n}}} \newcommand{\p}{\partial} \newcommand{\CC}{\mathcal{C}} $
Then $\exists k > 0$ such that $f(x) \le k \cdot g(x)$ for all $x \in [a,b]$.
Context: This claim arises as Lemma $3.7$ in Mingxin Wang's Nonlinear Second Order Parabolic Equations (ISBN-10: 0367711982). The original claim and proof are made for $n$-dimensional space; I'm just trying to intuit my way around the one-dimensional case for now.
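As a quick numerical sanity check of the one-dimensional claim, here is a hypothetical example (my own choices, not from the text): $f(x)=\sin(\pi x)$ and $g(x)=x(1-x)$ on $[0,1]$ satisfy all the hypotheses, and the smallest workable $k$ can be estimated as $\sup f/g$ over the interior:

```python
import math

# Hypothetical example on [a, b] = [0, 1] (not from the text):
# f(x) = sin(pi x):  f(0) = f(1) = 0
# g(x) = x(1 - x):   g(0) = g(1) = 0, g > 0 on (0, 1),
#                    g'(0) = 1 > 0, g'(1) = -1 < 0
f = lambda x: math.sin(math.pi * x)
g = lambda x: x * (1 - x)

# Estimate the smallest workable k as sup f/g over an interior grid.
xs = [i / 1000 for i in range(1, 1000)]
k = max(f(x) / g(x) for x in xs)

print(k)  # -> 4.0 (the ratio peaks at x = 1/2)
assert all(f(x) <= k * g(x) + 1e-12 for x in xs)
```

Note that the ratio $f/g$ tends to $f'(a)/g'(a) = \pi$ at $a$ and $f'(b)/g'(b) = \pi$ at $b$, so the derivative conditions are exactly what keeps the ratio bounded near the boundary.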
Below, define $\partial_{\mathbf{n}} f$ as follows: let $\mathbf{n}$ denote the outward unit normal vector along $\partial \Omega$ (where $\Omega \subseteq \mathbb{R}^n$ is a bounded domain); then $\partial_{\mathbf{n}} f$ is the directional derivative of $f$ in the direction of $\mathbf{n}$.
The claim and proof from the text read (essentially) as follows:
Lemma $3.7$: Let $\o$ be of class $\CC^1$, let $u,v \in \CC^1(\oo)$ be such that
- $u,v \equiv 0$ on $\p \o$
- $v(x) > 0$ on $\o$
- $\pp v < 0$ on $\p \o$
Then $\exists k > 0$ where $u(x) \le k \cdot v(x)$ in $\oo$.
Proof (quoted verbatim): Owing to $u,v \in \CC^1(\oo)$ and $\pp v < 0$ on $\p \o$, there exists $k_1 > 0$ for which $$ \pp u - k_1 \pp v > 0 $$ on $\p \o$. Noting that $u - k_1 v = 0$ on $\p\o$, we can find a $\o$-neighborhood $V$ of $\p\o$ such that $u - k_1 v \le 0$ in $V$. Because $u,v \in \CC^1(\o\setminus V)$ and $v > 0$ in $\o\setminus V$, there is a positive constant $k_2$ such that $u(x) \le k_2 v(x)$ in $\o\setminus V$.
Take $k := k_1 + k_2$. Then the desired conclusion holds.
A translation to the one-dimensional case would essentially be this:
Lemma $3.7$ (in $\mathbb{R}$): Let $\o = (a,b)$, so that $\oo = [a,b]$, and let $f,g \in \CC^1[a,b]$ be such that
- $f(x) = g(x) = 0$ for $x \in \{a,b\}$
- $g(x) > 0$ on $(a,b)$
- $g'(a) > 0$ and $g'(b) < 0$
Then $\exists k > 0$ where $f(x) \le k \cdot g(x)$ in $[a,b]$.
Proof: Owing to $f,g \in \CC^1[a,b]$ and the conditions on $g'$, there exists $k_1 > 0$ for which:
- $f'(a) - k_1 g'(a) > 0$
- $f'(b) - k_1 g'(b) < 0$
Interjection: Why does such a $k_1$ exist? And are these the correct derivative conditions in the bullets? (I'm not totally comfortable with the outer normal derivative thing.)
Proof (cont.): Noting that $f(x) - k_1 g(x) = 0$ for $x = a,b$, we can find an $\o$-neighborhood $V$ of $\p \o = \{a,b\}$ where $f(x) - k_1 g(x) \le 0$ in $V$.
Interjection: So essentially we may take a ball of radius $\varepsilon$ at each of $x=a,x=b$, and the text essentially claims that $f(x) - k_1 g(x) \le 0$ in these balls for appropriately small $\varepsilon > 0$. How is this ensured? Why can we never have a positive value instead?
Proof (cont.): Because $f,g \in \CC^1([a,b] \setminus V)$ and $g(x) > 0$ in $(a,b) \setminus V$, $\exists k_2 >0$ such that $f(x) \le k_2 g(x)$ for all $x \in (a,b) \setminus V$.
Interjection: How is this $k_2$'s existence justified? I'm guessing just take $\inf \{ g(x) \mid x \in (a,b) \setminus V \}$ and scale accordingly, since $g$ is positive? But that seems too easy...
Proof (cont.): Take $k := k_1 + k_2$; then the desired result holds.
Can anyone enlighten me on these details?
Intuitively speaking (I have absolutely no experience with the "outward unit normal", so I may very well be wrong), $\partial_{\mathbf n}f$ is the rate at which $f$ changes as we go from inside the set to outside (exiting orthogonally to the boundary). So in the $\mathbb R$ case, applied to $g$, at $b$ this is exactly represented by $g'(b)$, but at $a$ it is represented by $-g'(a)$ (because $g'(a)$ measures how $g$ changes as we go from outside, i.e. $x<a$, to inside, i.e. $x>a$; and we want the opposite). In other words, I think you messed up your inequality "$f'(a)-k_1g'(a)>0$" (indeed this is false in general, since $f'(a)$ could be $0$, or worse, negative); it should instead be $-f'(a)+k_1g'(a)>0$, i.e. $f'(a)<k_1g'(a)$. So then the proof is as follows:
Proof: because $g'(a)>0$ and $g'(b)<0$, and $f'(a),f'(b)$ are just fixed quantities at this point, certainly for some large enough $k_1$ we have that $f'(a) < k_1 g'(a)$ and $f'(b) > k_1 g'(b)$. So defining $h:= f-k_1 \cdot g$, we have that $h(a)=h(b)=0$, with $h'(a)<0$ and $h'(b)>0$. Let's just look at $a$; the $b$ case follows similarly. Fix $\epsilon>0$ small enough that $h'(a)+\epsilon<0$. By definition of the derivative, for all $x$ close enough (say within $\delta$) to $a$ we have $\frac{h(x)-h(a)}{x-a} = \frac{h(x)}{x-a} < h'(a)+\epsilon < 0$, and since $x-a>0$, we get that for all $x$ within $\delta$ of $a$ (except perhaps $x=a$), $h(x)<0$.
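To make this step concrete, here is a small numerical check with hypothetical choices of mine (not from the question): $f(x)=\sin(2\pi x)$, $g(x)=x(1-x)$ on $[0,1]$, and $k_1=7>2\pi$. Then $h=f-k_1 g$ vanishes at both endpoints, has $h'(0)<0$ and $h'(1)>0$, and is indeed negative on small punctured neighborhoods of the endpoints:

```python
import math

# Hypothetical example on [0, 1] (my own choices):
# f(x) = sin(2*pi*x), g(x) = x(1 - x).
# f'(0) = 2*pi, f'(1) = 2*pi, g'(0) = 1, g'(1) = -1, so any
# k1 > max(f'(0)/g'(0), f'(1)/g'(1)) = max(2*pi, -2*pi) works; take k1 = 7.
f = lambda x: math.sin(2 * math.pi * x)
g = lambda x: x * (1 - x)
k1 = 7
h = lambda x: f(x) - k1 * g(x)

# h(0) = h(1) = 0, h'(0) = 2*pi - 7 < 0, h'(1) = 2*pi + 7 > 0, so the
# difference-quotient argument predicts h < 0 on small punctured
# neighborhoods of the endpoints; verify on grids of width delta = 0.05.
delta = 0.05
near_a = [i * delta / 100 for i in range(1, 101)]      # points in (0, delta]
near_b = [1 - i * delta / 100 for i in range(1, 101)]  # points in [1 - delta, 1)
assert all(h(x) < 0 for x in near_a + near_b)
```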
Ok, so now $h(x)\leq 0 \iff f(x)\leq k_1 \cdot g(x)$ on the $\delta$-neighborhoods of the boundary. Now on $[a+\delta,b-\delta]$, a compact set, the continuous functions $f$ and $g$ both attain their extrema; say $g$ attains its minimum at $c$. Because $g>0$ on $(a,b)$, we have $g(c)>0$, so $g(x)\geq g(c)>0$ on $[a+\delta,b-\delta]$. As $f$ also attains its maximum, say $M$, on this set, we can choose $k_2$ big enough that $k_2\, g(x)\geq k_2 \cdot g(c) \geq M \geq f(x)$. Thus, $f \leq \max\{k_1,k_2\} \cdot g$ on all of $[a,b]$, and we are done.
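The compact-set step can also be checked numerically with hypothetical choices of mine (not from the question): $f(x)=\sin(2\pi x)$, $g(x)=x(1-x)$ on $[0,1]$, $k_1=7$, and $\delta=0.05$. Take $k_2 = M/\min g$ on the middle interval and verify that $k=\max\{k_1,k_2\}$ works on all of $[0,1]$:

```python
import math

# Hypothetical example (my own choices): f(x) = sin(2*pi*x),
# g(x) = x(1 - x) on [0, 1], with k1 = 7 and delta = 0.05.
f = lambda x: math.sin(2 * math.pi * x)
g = lambda x: x * (1 - x)
k1, delta = 7, 0.05

# On the compact middle [delta, 1 - delta], g is bounded away from 0,
# so k2 = (max of f) / (min of g) gives f <= k2 * g there.
mid = [delta + i * (1 - 2 * delta) / 1000 for i in range(1001)]
M = max(f(x) for x in mid)      # max of f on the middle (about 1)
g_min = min(g(x) for x in mid)  # min of g on the middle, attained at the edges
k2 = M / g_min

# Combined constant: k = max(k1, k2) should work on the whole interval.
k = max(k1, k2)
grid = [i / 2000 for i in range(2001)]
assert all(f(x) <= k * g(x) + 1e-12 for x in grid)
```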