Inequality on Functions of Bounded Variation


The following discussion is based on Schwabik's book "Generalized Ordinary Differential Equations", pp. 26-29.

Let $\delta$ be a positive function defined on $[a,b]$. We say that $D=\{([\alpha_{j-1},\alpha_j],t_j)\}_{j=1}^k$ is a $\delta$-fine partition of $[a,b]$ if $D$ is a partition of $[a,b]$ and for each $j=1,\dots,k$, $$t_j\in [\alpha_{j-1},\alpha_j]\subset (t_j-\delta(t_j),t_j+\delta(t_j)).$$

Let $f,g:[a,b]\to \mathbb{R}$ be functions of bounded variation on $[a,b]$. Then $f,g$ are regulated. For each $\epsilon>0$, we put $$N=\{t\in(a,b):|f(t^+)-f(t)|\geq \epsilon \mbox{ or } |f(t)-f(t^-)|\geq \epsilon\}$$ where $f(t^+)$ and $f(t^-)$ are the one-sided limits of $f$. Then $N$ is finite. For each $t\in [a,b]$, there is $\delta_1(t)>0$ such that if $x\in(t,t+\delta_1(t))$ then $$|f(x)-f(t^+)|<\epsilon \mbox{ and }|g(x)-g(t^+)|<\epsilon$$ and if $x\in(t-\delta_1(t),t)$ then $$|f(x)-f(t^-)|<\epsilon \mbox{ and }|g(x)-g(t^-)|<\epsilon.$$
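One way to see that $N$ is finite (a standard argument, not spelled out in the excerpt): each $t\in N$ carries a one-sided jump of $f$ of size at least $\epsilon$, and the jumps of a function of bounded variation sum to at most its total variation, so $$\epsilon\cdot \#N \leq \sum_{t\in N}\bigl(|f(t^+)-f(t)|+|f(t)-f(t^-)|\bigr)\leq TV(f,[a,b]),$$ hence $\#N\leq TV(f,[a,b])/\epsilon$.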

For each $t\in [a,b]$, we set $$\delta_2(t)=\mbox{dist}(t,N) \mbox{ for $t\notin N$}$$ and $$\delta_2(t)=\delta_1(t)\mbox{ for $t\in N$}.$$

The symbol $\mbox{dist}(t,N)$ stands for the distance of the point $t$ from the set $N$. We define a positive function $\delta$ on $[a,b]$ as follows: for each $t\in [a,b]$, set $\delta(t)=\min\{\delta_1(t),\delta_2(t)\}$. Now, let us suppose that $D=\{([\alpha_{j-1},\alpha_j],t_j)\}_{j=1}^k$ is a $\delta$-fine partition of $[a,b]$.
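A minimal computational sketch of this gauge construction (the names `make_delta` and `delta1` are mine, not Schwabik's; $N$ is assumed finite, as above):

```python
# delta = min(delta_1, delta_2), where delta_2(t) = dist(t, N) for t not in N
# and delta_2(t) = delta_1(t) for t in N.
def make_delta(delta1, N):
    """delta1: positive function on [a, b]; N: finite set of jump points."""
    def delta(t):
        if t in N:
            return delta1(t)                                     # delta_2 = delta_1 on N
        d2 = min((abs(t - s) for s in N), default=float('inf'))  # dist(t, N)
        return min(delta1(t), d2)
    return delta

delta = make_delta(lambda t: 1.0, {0.5})
print(delta(0.25), delta(0.5))   # 0.25 1.0
```

The point of $\delta_2$ is that in a $\delta$-fine partition, an interval meeting a point of $N$ must be tagged at that point: for $t\notin N$ we have $\delta(t)\leq\mbox{dist}(t,N)$, so an interval tagged at $t$ cannot reach any point of $N$.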

Question: If we assume that $\alpha_{j-1}<t_j<\alpha_j$ whenever $t_j\in N$, then why is it that

$$\sum_{j=1}^k |f(t_{j}^{+})-f(t_j)|\cdot |g(t_{j}^{+})-g(\alpha_j)|+ \sum_{j=1}^k |g(t_{j}^{+})-g(t_j)|\cdot |f(t_{j}^{+})-f(\alpha_j)|$$ $$+\sum_{j=1}^k |f(t_{j}^{+})-f(\alpha_j)|\cdot |g(t_{j}^{+})-g(\alpha_j)|+ \sum_{a\leq t<b, t\notin N} |f(t^{+})-f(t)|\cdot |g(t^{+})-g(t)|$$ $$<\epsilon\, TV(f,[a,b])+ 3\epsilon\, TV(g,[a,b])\,?$$

Any tips on how to prove it are very much appreciated. Thanks.

Best answer:

The inequality is false as stated. Let $[a,b]=[0,1]$ and $$ f(x) = \begin{cases}0 & x\leq 1/2 \\ (1-\eta)\epsilon & x>1/2 \end{cases} \qquad g(x) = \begin{cases}0 & x\leq 1/2 \\ 1 & x>1/2 \end{cases} $$ where $0 < \eta \ll 1$. Then $N = \varnothing$, and $\{([\alpha_{j-1},\alpha_j],t_j)\}_{j=1}^k$ given by $k=2$, $\alpha_j = j/2$, $t_1 = t_2 = 1/2$ is a $\delta$-fine partition (note that $\delta(1/2)$ can be arbitrarily large), and we find that \begin{align} &\mathrel{\phantom{=}}\sum_{j=1}^k \lvert f(t_{j}+)-f(t_j) \rvert \cdot \lvert g(t_{j}+)-g(\alpha_j) \rvert + \sum_{j=1}^k \lvert g(t_{j}+)-g(t_j) \rvert \cdot \lvert f(t_{j}+)-f(\alpha_j) \rvert \\ &\qquad\quad + \sum_{j=1}^k \lvert f(t_{j}+)-f(\alpha_j) \rvert \cdot \lvert g(t_{j}+)-g(\alpha_j) \rvert + \sum_{a\leq t<b, t\notin N} \lvert f(t+)-f(t) \rvert \cdot \lvert g(t+)-g(t) \rvert \\ &= 4 \lvert f(\tfrac12+) - f(\tfrac12) \rvert \cdot \lvert g(\tfrac12+) - g(\tfrac12) \rvert \\ &= 4 (1-\eta) \epsilon \\ &> (1-\eta) \epsilon^2 + 3\epsilon \\ &= \epsilon TV(f) + 3\epsilon TV(g) \end{align} when $\epsilon$ and $\eta$ are sufficiently small.
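The arithmetic in this counterexample can be checked numerically. Here is a small sketch (the helper names are mine; the one-sided limits are evaluated just past the jump point, which is exact for these step functions):

```python
# Numerical check of the counterexample.
eps, eta = 0.1, 0.01

def f(x):          # step function with jump (1 - eta) * eps at 1/2
    return 0.0 if x <= 0.5 else (1 - eta) * eps

def g(x):          # step function with jump 1 at 1/2
    return 0.0 if x <= 0.5 else 1.0

def right_lim(h, t, dx=1e-9):
    # one-sided limit h(t+); evaluating just past t is exact for these step functions
    return h(t + dx)

alphas = [0.0, 0.5, 1.0]   # alpha_0, alpha_1, alpha_2
tags = [0.5, 0.5]          # t_1 = t_2 = 1/2

S1 = sum(abs(right_lim(f, t) - f(t)) * abs(right_lim(g, t) - g(a))
         for t, a in zip(tags, alphas[1:]))
S2 = sum(abs(right_lim(g, t) - g(t)) * abs(right_lim(f, t) - f(a))
         for t, a in zip(tags, alphas[1:]))
S3 = sum(abs(right_lim(f, t) - f(a)) * abs(right_lim(g, t) - g(a))
         for t, a in zip(tags, alphas[1:]))
# fourth sum: the only jump point in [0, 1) is 1/2, and N is empty
S4 = abs(right_lim(f, 0.5) - f(0.5)) * abs(right_lim(g, 0.5) - g(0.5))

lhs = S1 + S2 + S3 + S4                         # equals 4 * (1 - eta) * eps
rhs = eps * ((1 - eta) * eps) + 3 * eps * 1.0   # eps*TV(f) + 3*eps*TV(g)
print(lhs > rhs)   # True: the claimed bound fails
```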


I presume that you're trying to follow the proof of Corollary 1.23 in which your inequality is a step. There are two ways to salvage this. One would be to merge the two neighboring intervals $[\alpha_{j-1},\alpha_j]$ and $[\alpha_j, \alpha_{j+1}]$ whenever $t_j = t_{j+1}$, as demonstrated on page 29 during the justification of the assumption that $\alpha_{j-1} < t_j < \alpha_j$ whenever $t_j \in N$. Doing so will allow you to assume that $t_1 < t_2 < \dotsb < t_k$. With this additional assumption I believe $\epsilon TV(f) + 3\epsilon TV(g)$ is a valid bound. The proof is a bit tricky.

The other, which I prefer, would be to prove a different bound. As is evident from the last paragraph of page 29, what's required is not the specific bound $\epsilon TV(f) + 3\epsilon TV(g)$, but rather a bound of the form $C\epsilon$ where $C>0$ is a constant independent of $\epsilon$ and the partition. I found it significantly easier to prove the weaker bound $\epsilon TV(f) + 4\epsilon TV(g)$.

The first step for either of these two approaches would be to write out what it means for $\{([\alpha_{j-1},\alpha_j], t_j)\}_{j=1}^k$ to be a $\delta$-fine partition for the particular $\delta$ given in the question; this should give you bounds on some of the factors appearing in the four sums as long as a certain condition is met. Then think about what can be done when that condition isn't satisfied.


In more detail, for reference (don't read ahead if you want to prove things yourself):

For each $j$, either $t_j < \alpha_j$ or $t_j = \alpha_j$. Consider the two cases separately:

  • $t_j < \alpha_j$. Since $D$ is $\delta$-fine, $\alpha_j \in (t_j, t_j + \delta_1(t_j))$. It follows from the definition of $\delta_1$ that $\lvert f(\alpha_j) - f(t_j^+)\rvert < \epsilon$ and $\lvert g(\alpha_j) - g(t_j^+)\rvert < \epsilon$.
  • $t_j = \alpha_j$. Then by assumption $t_j \notin N$ so $ \lvert f(\alpha_j) - f(t_j^+)\rvert = \lvert f(t_j) - f(t_j^+)\rvert < \epsilon. $

Since $\lvert f(t_j^+)-f(\alpha_j)\rvert<\epsilon$ in either case, it follows immediately that the second and the third sums are each bounded by $\epsilon TV(g)$ (the remaining $g$-factors sum to at most $TV(g)$).

The fourth sum is also bounded by $\epsilon TV(g)$: every $t\notin N$ satisfies $\lvert f(t^+)-f(t)\rvert<\epsilon$, so the sum is at most $\epsilon \sum_{a\leq t<b} \lvert g(t^+)-g(t)\rvert \leq \epsilon TV(g)$, since the jumps of $g$ sum to at most its total variation.

It remains to consider the first sum. We split the sum into the two cases $t_j < \alpha_j$ and $t_j = \alpha_j$ from before: \begin{multline} \sum_{j=1}^k \lvert f(t_{j}+)-f(t_j) \rvert \cdot \lvert g(t_{j}+)-g(\alpha_j) \rvert \\ \begin{split} &= \sum_{\substack{j \in \{1, \dotsc, k\} \\ t_j < \alpha_j}} \lvert f(t_{j}+)-f(t_j) \rvert \cdot \lvert g(t_{j}+)-g(\alpha_j) \rvert + \sum_{\substack{j \in \{1, \dotsc, k-1\} \\ t_j = \alpha_j}} \lvert f(t_{j}+)-f(t_j) \rvert \cdot \lvert g(t_{j}+)-g(\alpha_j) \rvert \\ &< \epsilon \sum_{\substack{j \in \{1, \dotsc, k\} \\ t_j < \alpha_j}} \lvert f(t_{j}+)-f(t_j) \rvert + \sum_{\substack{j \in \{1, \dotsc, k-1\} \\ t_j = \alpha_j}} \lvert f(t_{j}+)-f(t_j) \rvert \cdot \lvert g(\alpha_{j}+)-g(\alpha_j) \rvert. \end{split} \end{multline} To complete the proof of the weaker bound, note that in the second sum $t_j = \alpha_j \notin N$, so $\lvert f(t_j+)-f(t_j)\rvert < \epsilon$ there as well; the above is therefore bounded by $\epsilon TV(f) + \epsilon TV(g)$.
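Collecting the four estimates then gives the weaker bound: the first sum is less than $\epsilon TV(f)+\epsilon TV(g)$, the second and third sums are each less than $\epsilon TV(g)$, and the fourth is at most $\epsilon TV(g)$, so $$\text{(first)}+\text{(second)}+\text{(third)}+\text{(fourth)} < \epsilon\, TV(f,[a,b]) + 4\epsilon\, TV(g,[a,b]),$$ which is a bound of the form $C\epsilon$, as required for the argument on page 29.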

To obtain the original bound, on the other hand: Observe that my additional assumption $t_j < t_{j+1}$, together with the $\delta$-fineness of $D$, implies that if $\alpha_j = t_j$, then $\alpha_j \in (t_{j+1} - \delta_1(t_{j+1}), t_{j+1})$, and in particular $\lvert g(\alpha_j) - g(t_{j+1}-) \rvert < \epsilon$. Hence \begin{multline} \sum_{\substack{j \in \{1, \dotsc, k-1\} \\ t_j = \alpha_j}} \lvert f(t_{j}+)-f(t_j) \rvert \cdot \lvert g(\alpha_j+)-g(\alpha_j) \rvert \\ \begin{split} &\leq \sum_{\substack{j \in \{1, \dotsc, k-1\} \\ t_j = \alpha_j}} \lvert f(t_{j}+)-f(t_j) \rvert \cdot \bigl(\lvert g(\alpha_j+)-g(t_{j+1}-) \rvert + \lvert g(t_{j+1}-)-g(\alpha_j) \rvert \bigr) \\ &< \epsilon \sum_{j=1}^k \lvert g(\alpha_{j-1}+)-g(t_j-) \rvert + \epsilon \sum_{\substack{j \in \{1, \dotsc, k-1\} \\ t_j = \alpha_j}} \lvert f(t_{j}+)-f(t_j) \rvert. \end{split} \end{multline} The sum of the first and the third sums from the question is therefore bounded as follows: \begin{multline} \sum_{j=1}^k \lvert f(t_j+)-f(t_j) \rvert \cdot \lvert g(t_j+)-g(\alpha_j) \rvert + \sum_{j=1}^k \lvert f(t_j+)-f(\alpha_j) \rvert \cdot \lvert g(t_j+)-g(\alpha_j) \rvert \\ \begin{split} &< \epsilon \sum_{j=1}^k \lvert f(t_{j}+)-f(t_j) \rvert + \epsilon \sum_{j=1}^k \bigl( \lvert g(\alpha_{j-1}+)-g(t_j-) \rvert + \lvert g(t_j+)-g(\alpha_j) \rvert \bigr) \\ &\leq \epsilon TV(f,[a,b]) + \epsilon \sum_{j=1}^k TV(g,[\alpha_{j-1},\alpha_j]) \\ &= \epsilon TV(f,[a,b]) + \epsilon TV(g,[a,b]). \end{split} \end{multline}