Improper integral - equivalent definition?


Intuitively, it is rather obvious that

$$\lim_{l\to\infty}\sum_{n=-\infty}^{\infty}f(n\Delta x)\Delta x = \int_{-\infty}^{\infty}f(x)dx \tag{1}$$

where $\Delta x = \frac{1}{l}$, assuming $f$ is integrable and the limit exists.
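Before any proof, a quick numerical sanity check may be reassuring. The following Python sketch (my own illustration; the Gaussian integrand and the truncation tolerance are arbitrary choices, not part of the question) evaluates the left-hand side of (1) for a few values of $l$ and compares it with the known value $\int_{-\infty}^{\infty} e^{-x^2}dx = \sqrt{\pi}$:

```python
import math

def lhs_sum(f, l, n_max=2_000_000):
    """sum_{n=-inf}^{inf} f(n*dx)*dx with dx = 1/l, truncated once the
    terms become numerically negligible (fine for rapidly decaying f)."""
    dx = 1.0 / l
    total = f(0.0) * dx
    for n in range(1, n_max + 1):
        term = (f(n * dx) + f(-n * dx)) * dx
        total += term
        if abs(term) < 1e-15:   # remaining tail is numerically zero
            break
    return total

f = lambda x: math.exp(-x * x)   # integral over R is sqrt(pi)
for l in (1, 2, 4, 8):
    print(l, lhs_sum(f, l))
print(math.sqrt(math.pi))
```

For this particular $f$ the agreement is extremely good even for small $l$ (by Poisson summation the error is exponentially small), but as the answers below show, such agreement cannot be taken for granted for general integrable $f$.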

The fact that this equality holds is the core step in deriving the Fourier transform from the Fourier series; see page 4, eq. 4.7 in this document. Perhaps that derivation was never intended to be formal, but I thought that in mathematics there is no place for informal thinking.

My question is: how can we prove it from the definitions and properties of the improper integral, the definite integral, and limits?

I've listed the important definitions below in case you would like to refer to some of these in your answers.



Definite integral definition

If a function $f$ is integrable on $[a,b]$, then: $$\int_{a}^{b}f(x)dx=\lim_{n\to\infty}\sum_{i=1}^{n}f(x_i)\Delta x \tag{2}$$ where $\Delta x = \frac{b-a}{n}$ and $x_i = a+i\Delta x$.
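For concreteness, here is definition (2) in executable form (a Python sketch of mine; the test integrand $x^2$ is an arbitrary choice):

```python
def riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum as in (2): sum f(x_i)*dx, x_i = a + i*dx."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

# integral_0^1 x^2 dx = 1/3
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000)
print(approx)   # close to 1/3
```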

Improper integral definitions

$$\int_{a}^{\infty}f(x)dx=\lim_{t\to\infty}\int_{a}^{t}f(x)dx \tag{3}$$

$$\int_{-\infty}^{b}f(x)dx=\lim_{t\to-\infty}\int_{t}^{b}f(x)dx \tag{4}$$

$$\int_{-\infty}^{\infty}f(x)dx=\int_{a}^{\infty}f(x)dx + \int_{-\infty}^{a}f(x)dx \tag{5}$$
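Definition (3) can likewise be watched numerically (a Python sketch; the integrand $1/x^2$ and the midpoint rule are my choices, made because $\int_1^\infty x^{-2}dx = 1$ is known):

```python
def midpoint_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the definite integral over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i - 0.5) * dx) for i in range(1, n + 1)) * dx

# definition (3): integral_1^inf dx/x^2 = lim_{t->inf} integral_1^t dx/x^2 = 1
f = lambda x: 1.0 / (x * x)
for t in (10.0, 100.0, 1000.0):
    print(t, midpoint_integral(f, 1.0, t))   # approaches 1 as t grows
```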


On BEST ANSWER

This is a possibly overcomplicated proof of the conjectured identity: \begin{equation} \lim_{\delta \to 0+} \sum_{n = -\infty}^\infty f(n\delta)\delta = \int_{-\infty}^\infty f \tag{1}\label{eq:A} \end{equation} for the improper Riemann integral of a function $f: \mathbb{R} \to \mathbb{R}$, based on these hypotheses about $f$:

  1. $f$ is integrable on all finite intervals of $\mathbb{R}$;
  2. $f(x) = O\!\left(\frac{1}{x^2}\right)$, for large $\left\lvert{x}\right\rvert$. $\newcommand{\abs}[1]{\left\lvert#1\right\rvert}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\renewcommand{\phi}{\varphi}$ $\newcommand{\floor}[1]{\left\lfloor#1\right\rfloor}$

By hypothesis, there exist $M, A > 0$ such that $\abs{f(x)} \leqslant M/x^2$ for $\abs{x} \geqslant A$.

$\int_0^\infty f$ exists, by the Cauchy convergence criterion, because, if $A \leqslant a \leqslant b$, $$ \bigg\lvert\int_0^b f - \int_0^a f\bigg\rvert = \bigg\lvert\int_a^b f\bigg\rvert \leqslant \int_a^b \abs{f} \leqslant \int_a^b \frac{M}{x^2}\,dx = M\left(\frac{1}{a} - \frac{1}{b}\right) < \frac{M}{a} $$ and this tends to $0$ as $a$ tends to $\infty$. Similarly for $\int_{-\infty}^0 f$; therefore $\int_{-\infty}^\infty f$ exists.
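This tail estimate is easy to check numerically. In the Python sketch below I take $f(x) = 1/(1+x^2)$, which satisfies the hypothesis with $M = A = 1$ (an illustrative choice, made because $\int_a^b f = \arctan b - \arctan a$ is available in closed form):

```python
import math

# f(x) = 1/(1+x^2) satisfies |f(x)| <= M/x^2 with M = 1 for |x| >= 1,
# so the Cauchy tail |integral_a^b f| should stay below M/a = 1/a.
for a in (10.0, 100.0, 1000.0):
    b = 10.0 * a
    tail = math.atan(b) - math.atan(a)   # exact value of integral_a^b f
    print(a, tail, 1.0 / a)              # tail < 1/a, and both shrink with a
```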

Also, it is clear that all the sums on the left hand side of $\eqref{eq:A}$ converge, by comparison with $\sum 1/n^2$. But we need precise information on this convergence.

For $\delta > 0$, let $S(\delta) = \sum_{n = -\infty}^\infty f(n\delta)\delta$. There exists a positive real number $N(\delta)$ (which we may take as large as we please) such that: \begin{equation} \bigg\lvert S(\delta) - \!\!\!\!\sum_{\abs{n} \leqslant N} f(n\delta)\delta \bigg\rvert < -\frac{1}{\log\delta} \ \text{ for all } N \geqslant N(\delta). \tag{2}\label{eq:B} \end{equation} The expression $-1/(\log\delta)$ is chosen so that it tends only slowly to $0$ as $\delta \to 0$; any other similarly slowly shrinking function would have done instead.

The size of $N(\delta)$ turns out to be critical, so we estimate it carefully. Certainly we require $N(\delta)\delta > A$ for all $\delta$, in order to use our hypothesis on $f$. Also: $$ \lim_{\delta \to 0+} N(\delta)\delta = +\infty. $$ Both these properties are guaranteed by the choice of $N(\delta)$ that follows, so long as $\delta$ is small enough. (Certainly $\delta < 1$, otherwise $\eqref{eq:B}$ is ill-defined.) If $N\delta \geqslant A$: \begin{gather*} \bigg\lvert \sum_{\abs{n} > N} f(n\delta)\delta \bigg\rvert \leqslant \sum_{\abs{n} > N} \abs{f(n\delta)}\delta \leqslant \frac{2M}{\delta}\!\!\!\sum_{n=N+1}^\infty \frac{1}{n^2} < \frac{2M}{\delta}\!\!\!\sum_{n=N+1}^\infty \frac{1}{n(n - 1)} = \frac{2M}{N\delta}. \end{gather*}

Accordingly, we define $N(\delta)$ by the equation $N(\delta)\delta = -2M\log\delta$.

Consider the change of variables $\phi: (-1, 1) \to \R$, where: $$ \phi(y) = \frac{1}{1 - y} - \frac{1}{1 + y} = \frac{2y}{1 - y^2}, \ \ \phi'(y) = \frac{2(1 + y^2)}{(1 - y^2)^2} \ \ \ (-1 < y < 1). $$
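A quick finite-difference spot check of the formula for $\phi'$, and of the fact that $\phi$ blows up at $\pm 1$ (a Python sketch; the sample points and step size are arbitrary):

```python
def phi(y):
    """phi(y) = 2y/(1 - y^2), a smooth increasing bijection (-1, 1) -> R."""
    return 2.0 * y / (1.0 - y * y)

def phi_prime(y):
    return 2.0 * (1.0 + y * y) / (1.0 - y * y) ** 2

# central finite differences should reproduce phi_prime
h = 1e-6
for y in (-0.9, -0.5, 0.0, 0.5, 0.9):
    fd = (phi(y + h) - phi(y - h)) / (2.0 * h)
    print(y, fd, phi_prime(y))

print(phi(-0.99), phi(0.99))   # large negative / large positive
```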

If we write $y_n = \phi^{-1}(n\delta)$ for all $n \in \Z$, then by Taylor's Theorem: \begin{gather*} \sum_{\abs{n} \leqslant N(\delta)} f(n\delta)\delta = \!\!\!\!\sum_{\abs{n} \leqslant N(\delta)} f(\phi(y_n))(\phi(y_{n+1}) - \phi(y_n)) = \\ \sum_{\abs{n} \leqslant N(\delta)} f(\phi(y_n))\phi'(y_n)(y_{n+1} - y_n) + \!\!\!\!\sum_{\abs{n} \leqslant N(\delta)} f(\phi(y_n))\frac{\phi''(y_n^*)}{2}(y_{n+1} - y_n)^2, \end{gather*} for some $y_n^*$ such that $y_n < y_n^* < y_{n+1}$ ($\abs{n} \leqslant N(\delta)$).

The first of these two subexpressions is 'almost' a Riemann sum for the integral $\int_{-1}^{1} f(\phi(y))\phi'(y)\,dy$. Note that the integrand remains bounded at the endpoints, because $\phi'(y) \sim \phi(y)^2$ as $y \to \pm 1$, and therefore, for $y$ close to $\pm 1$, $$ \abs{f(\phi(y))\phi'(y)} \leqslant \frac{M\phi'(y)}{\phi(y)^2} \sim M. $$

Define a function $F: (-1, 1) \to \R$, and numbers $c(\delta), d(\delta) \in (-1, 1)$, by: \begin{align*} F(y) & = f(\phi(y))\phi'(y) \ \ (-1 < y < 1), \\ c(\delta) & = \phi^{-1}(-N(\delta)\delta), \\ d(\delta) & = \phi^{-1}((N(\delta) + 1)\delta). \end{align*} Then $\lim_{\delta \to 0+} c(\delta) = -1$, $\lim_{\delta \to 0+} d(\delta) = 1$, and, because $F$ is bounded at $\pm 1$, \begin{equation} \int_{-\infty}^\infty f = \lim_{\delta \to 0+} \int_{-N(\delta)\delta}^{(N(\delta) + 1)\delta} f = \lim_{\delta \to 0+} \int_{c(\delta)}^{d(\delta)} F = \int_{-1}^1 F. \tag{3}\label{eq:D} \end{equation}
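The identity $\int_{-1}^1 F = \int_{-\infty}^\infty f$ can be sanity-checked numerically. In this Python sketch I take $f(x) = 1/(1+x^2)$ (my choice, with $\int_{-\infty}^\infty f = \pi$); for this particular $f$ the substitution even simplifies to $F(y) = 2/(1+y^2)$, which makes the boundedness at the endpoints visible:

```python
import math

def phi(y):       return 2.0 * y / (1.0 - y * y)
def phi_prime(y): return 2.0 * (1.0 + y * y) / (1.0 - y * y) ** 2

f = lambda x: 1.0 / (1.0 + x * x)        # O(1/x^2); integral over R is pi
F = lambda y: f(phi(y)) * phi_prime(y)   # here F simplifies to 2/(1+y^2)

# midpoint rule over (-1, 1); F stays bounded near the endpoints
n = 200_000
dy = 2.0 / n
total = sum(F(-1.0 + (i - 0.5) * dy) for i in range(1, n + 1)) * dy
print(total, math.pi)
```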

The desired conclusion $\eqref{eq:A}$ follows from $\eqref{eq:B}$, $\eqref{eq:D}$, and: \begin{equation} \lim_{\delta \to 0+} \sum_{\abs{n} \leqslant N(\delta)} f(n\delta)\delta = \int_{-1}^1 F, \tag{4}\label{eq:F} \end{equation} which we now prove.

We obtained, above, a lengthy expression of the form: $$ \sum_{\abs{n} \leqslant N(\delta)} f(n\delta)\delta = I(\delta) + J(\delta), $$ remarking at the time that $I(\delta)$ is 'almost' a Riemann sum. In fact (here we temporarily denote the integer $\floor{N(\delta)}$ by '$N$' for readability), the expression $$ F(y_{-N})(1 + y_{-N}) + I(\delta) + F(y_{N+1})(1 - y_{N+1}) $$ is a Riemann sum for the partition $(-1, y_{-N}, y_{-N+1}, \ldots, 0, \ldots, y_N, y_{N+1}, 1)$, tagged with values $(y_{-N}, y_{-N}, y_{-N+1}, \ldots, 0, \ldots, y_N, y_{N+1})$. By the continuity of $\phi$, the maximum of the interval lengths $y_{n+1} - y_n$ tends to $0$ with $\delta$; and we have already remarked that $y_{N+1}$ tends to $1$ and $y_{-N}$ to $-1$; and $F$ is bounded.

From the facts just mentioned, it follows that: $$ \lim_{\delta \to 0+} I(\delta) = \int_{-1}^1 F, $$ and so the proof of $\eqref{eq:F}$, and therefore of $\eqref{eq:A}$, reduces to: $$ \lim_{\delta \to 0+} J(\delta) = 0, $$ or in full: $$ \lim_{\delta \to 0+} \sum_{\abs{n} \leqslant N(\delta)} f(\phi(y_n))\frac{\phi''(y_n^*)}{2}(y_{n+1} - y_n)^2 = 0. $$

By our hypotheses, $f$ is integrable, and therefore bounded, on the interval $[-A, A + \delta]$, therefore the factor $f(\phi(y_n))\phi''(y_n^*)$ is bounded for $n \in \Z$ such that: $$ -\phi^{-1}(A) \leqslant y_n < y_n^* < y_{n+1} \leqslant \phi^{-1}(A + \delta), $$ or equivalently, $$ -A \leqslant n\delta < \phi(y_n^*) < (n + 1)\delta \leqslant A + \delta. $$ Such terms therefore contribute at most a fixed multiple of $\sum_n (y_{n+1} - y_n)^2$ to the absolute value of the summation; and because $\lim_{\delta \to 0+} \max_n (y_{n+1} - y_n) = 0$, and $\sum_n (y_{n+1} - y_n) < 2$, this part of the sum tends to $0$ in the limit as $\delta \to 0$.

What now remains to be proved is: \begin{equation} \lim_{\delta \to 0+} \sum_{A/\delta \leqslant \abs{n} \leqslant N(\delta)} f(\phi(y_n))\frac{\phi''(y_n^*)}{2}(y_{n+1} - y_n)^2 = 0. \tag{5}\label{eq:H} \end{equation} For such $n$, we have: $$ \abs{f(\phi(y_n))\frac{\phi''(y_n^*)}{2}} \leqslant \frac{M\abs{\phi''(y_n^*)}}{2\phi(y_n)^2} = \frac{M\abs{\phi''(y_n^*)}}{2n^2\delta^2}. $$ Note that: $$ \phi''(y) = \frac{4(3y + y^3)}{(1 - y^2)^3} \ \ \ (-1 < y < 1). $$ By taking $A$ large enough, we can assume that all values of the argument $y$ under consideration satisfy $1/\sqrt{2} \leqslant \abs{y} < 1$, so that $\abs{y} \leqslant 2\abs{y^3}$, and therefore: $$ \abs{\phi''(y)} \leqslant \frac{28\abs{y}^3}{(1 - y^2)^3} = \frac{7\abs{\phi(y)}^3}{2}. $$ We also have the inequality: $$ \abs{\phi(y_n^*)} \leqslant (\abs{n} + 1)\delta. $$ Putting it all together: $$ \abs{f(\phi(y_n))\frac{\phi''(y_n^*)}{2}} \leqslant \frac{7M\abs{\phi(y_n^*)}^3}{4n^2\delta^2} \leqslant \frac{7M\delta(\abs{n} + 1)^3}{4n^2} = \frac{7M\abs{n}\delta}{4}\left(1 + \frac{1}{\abs{n}}\right)^3. $$ Taking $A$ large enough, we can assume $\abs{n} \geqslant 22$, therefore $\left(1 + \frac{1}{\abs{n}}\right)^3 < \frac{8}{7}$, and: $$ \abs{f(\phi(y_n))\frac{\phi''(y_n^*)}{2}} \leqslant 2M\abs{n}\delta \leqslant 2MN(\delta)\delta = -4M^2\log\delta. $$ Having been careful with our estimates so far, we can afford to be sloppy now! We have $\phi'(y) \geqslant 2$, for all $y$, therefore $y_{n+1} - y_n \leqslant \delta/2$, for all $n$. This and the fact that $\sum_n (y_{n+1} - y_n) < 2$ together imply that the sum in $\eqref{eq:H}$ is bounded above by $-4M^2\delta\log\delta$, which does tend to $0$ with $\delta$. This completes the proof.

On

I think this is a counterexample.

Let $g$ be any non-negative function on $\mathbb{R}$ such that the improper Riemann integral $\int_{-\infty}^\infty g$ exists. Let $S = \{ q + (p/q) : q, p = 1, 2, \ldots \}$, and let $f(x)$ be equal to $g(x)$ except on $S$, where $f(x) = 1$.

Since $S$ has only finitely many points in any finite interval, $f$ differs from $g$ at only finitely many points of any bounded interval; therefore $f$ is integrable on finite intervals, $\int_{-\infty}^\infty f$ exists, and it is equal to $\int_{-\infty}^\infty g$.

But for $q = 1, 2, \ldots$, the sum $\sum_{n = -\infty}^\infty f(n/q)$ diverges to $+\infty$, therefore the limit on the left hand side of (1) does not exist.
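The divergence is easy to observe numerically. The Python sketch below takes $g = 0$ for simplicity (so $f$ is $1$ on $S$ and $0$ elsewhere, and $\int_{-\infty}^\infty f = 0$ exists) and uses exact rational arithmetic to test membership in $S$; the partial sums of $\sum_n f(n/q)\cdot(1/q)$ over positive $n$ already grow without bound:

```python
from fractions import Fraction

def in_S(x: Fraction) -> bool:
    """Is x in S = { q + p/q : p, q = 1, 2, ... } ?  (exact arithmetic)"""
    qq = 1
    while qq < x:
        p = (x - qq) * qq        # if x = qq + p/qq then p = (x - qq)*qq
        if p.denominator == 1 and p > 0:
            return True
        qq += 1
    return False

# take g = 0 for simplicity, so f is 1 on S and 0 elsewhere; with
# dx = 1/q, the partial sums of sum_n f(n/q) * (1/q) keep growing,
# because n/q = q + (n - q^2)/q lies in S for every n > q^2
q = 3
partial = 0.0
for N in (100, 1_000, 10_000):
    partial = sum(1 for n in range(1, N + 1) if in_S(Fraction(n, q))) / q
    print(N, partial)   # grows roughly like N/q^2
```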

Update: it gets worse, I'm afraid.

One might still reasonably hope that, if $f$ is continuous (which rules out this counterexample as it stands), and $f$ is non-negative, and $\int_{-\infty}^\infty f$ exists, then all the infinite sums on the left hand side of (1) exist, so there's a fair chance that (1) holds under these (arguably not too restrictive) conditions.

Define the countable set $S \subset \mathbb{R}$, in the same way as before. Because $S$ has only finitely many points in any finite interval, it can be arranged as a strictly increasing sequence, $s_1 < s_2 < \ldots$.

Choose a convergent series $\sum_{k=1}^\infty t_k$ such that $t_k > 0$ and $s_k + t_k \leqslant s_{k+1} - t_{k+1}$ ($k = 1, 2, \ldots$).

Let $h: (-1, 1) \to \mathbb{R}$ be a "bump function", such as: $$ h(y) = e^{y^2/(y^2 - 1)} \qquad (-1 < y < 1). $$

Define: $$ f(s_k + yt_k) = \frac{h(y)}{s_k} \qquad (k = 1, 2, \ldots; \ -1 < y < 1), $$ and let $f$ have the value $0$ everywhere outside the pairwise disjoint open intervals $(s_k - t_k, s_k + t_k)$.

Observe that for $l = 1, 2, \ldots$, we have $n/l \in S$ and $f(n/l) = l/n$ for all $n > l^2$, and therefore: $$ \sum_{n = -\infty}^\infty f\left(\frac{n}{l}\right) \geqslant \sum_{n = l^2 + 1}^\infty f\left(\frac{n}{l}\right) = l\sum_{n = l^2 + 1}^\infty \frac{1}{n} = +\infty. $$
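The harmonic-tail divergence in the last display can be watched numerically (a Python sketch; $l = 5$ and the truncation points are arbitrary choices of mine):

```python
import math

# l * sum_{n=l^2+1}^{N} 1/n grows like l * log(N / l^2), so the inner
# series in (1) diverges for every positive integer l
l = 5
for N in (10**3, 10**4, 10**5, 10**6):
    s = l * sum(1.0 / n for n in range(l * l + 1, N + 1))
    print(N, s, l * math.log(N / l**2))   # both keep growing together
```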

Thus: $f$ is smooth everywhere on $\mathbb{R}$; $f(x) \geqslant 0$ for all $x \in \mathbb{R}$; $f(x) \to 0$ as $x \to \pm \infty$; the improper Riemann integral $\int_{-\infty}^\infty f$ exists (it is bounded above by $2\sum_{k=1}^\infty t_k/s_k$, and therefore by $2\sum_{k=1}^\infty t_k$); yet, the inner series on the left hand side of (1) diverges to $+\infty$ for all positive integral values of $l$.

So, even though the parameter $\Delta x$ on the left hand side of (1) may assume any strictly positive value, the series expression under the outer limit sign becomes undefined for arbitrarily small values of $\Delta x$, so the limit itself is not well defined, even for this quite "reasonable" function $f$.

Further update: essentially the same construction, and same argument, with these minor changes: $$ f(s_k + yt_k) = \frac{h(y)}{s_k\log s_k}, \\ \sum_{n = -\infty}^\infty f\left(\frac{n}{l}\right) \geqslant l\sum_{n = l^2 + 1}^\infty \frac{1}{n\log n} = +\infty, $$ shows that (1) fails even for smooth $f$ such that $\int_{-\infty}^\infty f$ exists and $f(x) = O\left(\frac{1}{\left\lvert{x}\right\rvert\log\left\lvert{x}\right\rvert}\right)$ for large $\left\lvert{x}\right\rvert$.

On

$\newcommand{\abs}[1]{\left\lvert#1\right\rvert}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\renewcommand{\phi}{\varphi}$ $\newcommand{\floor}[1]{\left\lfloor#1\right\rfloor}$


The counterexample in my first answer (as recently modified) shows that conjecture $\eqref{eq:A}$ fails even for some relatively well-behaved functions $f$ such that $f(x) = O(1/(\abs{x}\log\abs{x}))$ for large $\abs{x}$; whereas my second answer shows that $\eqref{eq:A}$ is true for all functions $f$, integrable on finite intervals, such that $f(x) = O(1/x^2)$.

We now close most, although not all, of the remaining gap between these estimates, by showing that conjecture $\eqref{eq:A}$ is true for all functions $f$, integrable on finite intervals, such that $f(x) = O(1/\abs{x}^{1 + \epsilon})$ for large $\abs{x}$, for some $\epsilon > 0$ (i.e. $f$ is a "function of moderate decrease").

To save space, I'll refer frequently to my second answer, and elaborate the new argument only in the places where it differs significantly from the old one, which I'll no longer bother to update (not even in places where it is untidy!) - except, of course, to correct any remaining errors.

Talking about closing gaps, and correcting errors, the following lemma (copied, with only minor notational alterations, from the answer I posted yesterday to the question "Improper Riemann integral of bounded function is proper integral") is needed to plug the subtle logical gap in my first answer:

Suppose that (i) the function $g: [a, b] \to \R$ is bounded, and (ii) the improper Riemann integral $\int_{a+}^b g = \lim_{\epsilon \to 0+} \int_{a + \epsilon}^b g$ exists. Then the proper Riemann integral $\int_a^b g$ also exists, and it is equal to the improper integral $\int_{a+}^b g$.

Proof:

Let $M$ be any upper bound of $\{\abs{g(x)}: a \leqslant x \leqslant b\}$ such that $2M(b - a) > 1$. For $n = 1, 2, \ldots$, let $\epsilon_n = \frac{1}{2nM}$, and let $P_n$ be a partition of $[a + \epsilon_n, b]$ on which the upper and lower Darboux sums of $g$ both differ from $\int_{a + \epsilon_n}^b g$ by less than $\frac{1}{2n}$. The upper and lower Darboux sums of $g$ on $\{a\} \cup P_n$ both differ from $\int_{a + \epsilon_n}^b g$ by less than $\frac{1}{n}$, so $g$ has a sequence of upper Darboux sums over $[a, b]$ that converges to $\int_{a+}^b g$, and also a sequence of lower Darboux sums over $[a, b]$ that converges to $\int_{a+}^b g$. Hence, $g$ is Riemann integrable on $[a, b]$, and $\int_a^b g = \int_{a+}^b g$. Q.E.D.

The change of variables $x \mapsto a + b - x$ yields the corollary that the existence of $\int_a^{b-} g = \lim_{\epsilon \to 0+} \int_a^{b - \epsilon} g$ implies the existence of $\int_a^b g$, with the same value.
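The lemma can be illustrated with the classic bounded example $g(x) = \sin(1/x)$ on $(0, 1]$ (my choice of test function, with $g(0) = 0$, say). Its improper integral over $(0, 1]$ exists, and, as the lemma predicts, ordinary Riemann sums over $[0, 1]$ converge to the same value. The sketch compares a midpoint Riemann sum with the improper integral computed via the substitution $u = 1/x$:

```python
import math

def g(x):
    """Bounded on [0, 1], wildly oscillating near 0."""
    return math.sin(1.0 / x) if x > 0.0 else 0.0

def midpoint_sum(n):
    """Ordinary (proper) midpoint Riemann sum of g over [0, 1]."""
    dx = 1.0 / n
    return sum(g((i - 0.5) * dx) for i in range(1, n + 1)) * dx

def improper(eps, m=1_000_000):
    """integral_eps^1 sin(1/x) dx = integral_1^{1/eps} sin(u)/u^2 du."""
    a, b = 1.0, 1.0 / eps
    du = (b - a) / m
    return sum(math.sin(a + (i - 0.5) * du) / (a + (i - 0.5) * du) ** 2
               for i in range(1, m + 1)) * du

s_riemann = midpoint_sum(1_000_000)
s_improper = improper(1e-4)
print(s_riemann, s_improper)   # both near the same value
```

(The common value is $\sin 1 + \int_1^\infty \frac{\cos u}{u}\,du \approx 0.504$.)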

Now for the proof of the main result.

By hypothesis, there exist $M, A > 0$ such that $\abs{f(x)} \leqslant M/\abs{x}^{1 + \epsilon}$ for $\abs{x} \geqslant A$.

The simple proof that $\int_{-\infty}^\infty f$ exists goes through, almost exactly as before.

Define a smooth monotone bijection $\phi: (-1, 1) \to \R$, where, for $y \in (-1, 1)$, \begin{gather*} \phi(y) = \frac{1}{(1 - y)^{\frac{1}{\epsilon}}} - \frac{1}{(1 + y)^{\frac{1}{\epsilon}}}, \\ \phi'(y) = \frac{1}{\epsilon}\left[ \frac{1}{(1 - y)^{\frac{1}{\epsilon} + 1}} + \frac{1}{(1 + y)^{\frac{1}{\epsilon} + 1}} \right], \\ \phi''(y) = \frac{1}{\epsilon}\left(\frac{1}{\epsilon} + 1\right)\left[ \frac{1}{(1 - y)^{\frac{1}{\epsilon} + 2}} - \frac{1}{(1 + y)^{\frac{1}{\epsilon} + 2}} \right]. \end{gather*}

Define $F: [-1, 1] \to \R$ much as before, by putting $F(y) = f(\phi(y))\phi'(y)$ ($-1 < y < 1$), and assigning arbitrary values to $F(-1)$ and $F(1)$. As $y \to \pm 1$, \begin{gather*} \phi(y) \sim \pm \frac{1}{(1 \mp y)^{\frac{1}{\epsilon}}}, \ \ \phi'(y) \sim \frac{1}{\epsilon}\frac{1}{(1 \mp y)^{\frac{1}{\epsilon} + 1}}, \end{gather*} therefore $$ \abs{F(y)} = \abs{f(\phi(y))\phi'(y)} \leqslant \frac{M\phi'(y)}{\abs{\phi(y)}^{1 + \epsilon}} \sim \frac{M}{\epsilon}. $$ So $F$ is bounded on $[-1, 1]$.

By the theorem on change of variable in a Riemann integral (see e.g. Rudin, Principles of Mathematical Analysis, Theorem 6.19), $F$ is Riemann-integrable on any closed subinterval $[c, d]$ of $(-1, 1)$, and $\int_c^d F = \int_{\phi(c)}^{\phi(d)} f$. Therefore, the improper Riemann integral $\int_{(-1)+}^{1-} F$ exists and equals the improper Riemann integral $\int_{-\infty}^\infty f$. But $F$ is bounded on $[-1, 1]$, so the lemma and corollary above imply that the proper Riemann integral $\int_{-1}^1 F$ exists and equals $\int_{-\infty}^\infty f$.

In order to use this result to prove the conjecture $\eqref{eq:A}$, we now have to prove: \begin{equation} \lim_{\delta \to 0+} S(\delta) = \int_{-1}^1 F, \tag{6}\label{eq:J} \end{equation} where, as before: $$ S(\delta) = \sum_{n = -\infty}^\infty f(n\delta)\delta. $$

The positive real number $N(\delta)$ is again defined so as to satisfy the inequality: \begin{equation} \bigg\lvert S(\delta) - \!\!\!\!\sum_{\abs{n} \leqslant N} f(n\delta)\delta \bigg\rvert < -\frac{1}{\log\delta} \ \text{ for all } N \geqslant N(\delta). \tag{2}\label{eq:M} \end{equation} We cannot be quite so precise about the value of $N(\delta)$ this time: it depends on some new constants, whose values we do not attempt to estimate. As before, we require $N(\delta)\delta > A$, and $\lim_{\delta \to 0+} N(\delta)\delta = +\infty$. It is "well known" (for instance from Apostol, Introduction to Analytic Number Theory, Theorem 3.2(c)) that: $$ \sum_{n > N} \frac{1}{n^{1 + \epsilon}} = O\left(\frac{1}{N^\epsilon}\right). $$ That is to say, there exist real $K, B > 0$ such that: $$ \sum_{n > N} \frac{1}{n^{1 + \epsilon}} < \frac{K}{N^\epsilon} \text{ for all } N \geqslant B. $$ (In our first proof, we had $K = B = \epsilon = 1$.) If $N \geqslant B$, and $N\delta \geqslant A$, then: \begin{gather*} \bigg\lvert \sum_{\abs{n} > N} f(n\delta)\delta \bigg\rvert \leqslant \sum_{\abs{n} > N} \abs{f(n\delta)}\delta \leqslant \frac{2M}{\delta^\epsilon}\!\!\!\sum_{n=N+1}^\infty \frac{1}{n^{1 + \epsilon}} < \frac{2MK}{(N\delta)^\epsilon}. \end{gather*} Accordingly, we define $N(\delta)$ by the equation: $$ (N(\delta)\delta)^\epsilon = -2MK\log\delta. $$
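The quoted tail estimate is easy to check numerically (a Python sketch; $\epsilon = 1/2$ and the summation cutoff are arbitrary choices of mine; the integral comparison $\sum_{n > N} n^{-(1+\epsilon)} < \int_N^\infty x^{-(1+\epsilon)}dx = N^{-\epsilon}/\epsilon$ shows that $K = 1/\epsilon$, $B = 1$ works):

```python
# check sum_{n>N} n^-(1+eps) = O(N^-eps) numerically for eps = 1/2;
# the integral bound gives K = 1/eps = 2 (series truncated at a large cutoff)
eps = 0.5
cutoff = 1_000_000
for N in (100, 400, 1600):
    tail = sum(n ** -(1.0 + eps) for n in range(N + 1, cutoff))
    print(N, tail * N ** eps)   # stays bounded, just below 1/eps = 2
```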

From $\eqref{eq:M}$ and $\eqref{eq:J}$, what we now have to prove is the same as before: \begin{equation} \lim_{\delta \to 0+} \sum_{\abs{n} \leqslant N(\delta)} f(n\delta)\delta = \int_{-1}^1 F. \tag{4}\label{eq:K} \end{equation}

Exactly as before, we use Taylor's Theorem to get an expression of the form: $$ \sum_{\abs{n} \leqslant N(\delta)} f(n\delta)\delta = I(\delta) + J(\delta). $$

The proof that $$ \lim_{\delta \to 0+} I(\delta) = \int_{-1}^1 F $$ is also exactly the same as before. We are reduced, as before, to proving that: $$ \lim_{\delta \to 0+} J(\delta) = 0, $$ or in full: $$ \lim_{\delta \to 0+} \sum_{\abs{n} \leqslant N(\delta)} f(\phi(y_n))\frac{\phi''(y_n^*)}{2}(y_{n+1} - y_n)^2 = 0. $$

The next part of the argument has changed slightly from the earlier version, which is why it has to be repeated in detail:

By our hypotheses, $f$ is integrable, and therefore bounded, on the interval $[-A - \delta, A + 2\delta]$, therefore the factor $f(\phi(y_n))\phi''(y_n^*)$ is bounded for $n$ such that: $$ -\phi^{-1}(A + \delta) \leqslant y_n < y_n^* < y_{n+1} \leqslant \phi^{-1}(A + 2\delta), $$ or equivalently, $$ -A - \delta \leqslant n\delta < \phi(y_n^*) < (n + 1)\delta \leqslant A + 2\delta. $$ Such terms therefore contribute at most a fixed multiple of $\sum_n (y_{n+1} - y_n)^2$ to the absolute value of the summation; and because $\lim_{\delta \to 0+} \max_n (y_{n+1} - y_n) = 0$, and $\sum_n (y_{n+1} - y_n) < 2$, this part of the sum tends to $0$ in the limit as $\delta \to 0$.

What now remains to be proved is: \begin{equation} \lim_{\delta \to 0+} \sum_{(A + \delta)/\delta \leqslant \abs{n} \leqslant N(\delta)} f(\phi(y_n))\frac{\phi''(y_n^*)}{2}(y_{n+1} - y_n)^2 = 0. \tag{$5'$}\label{eq:L} \end{equation} For such $n$, we have $\abs{\phi(y_n)} = \abs{n\delta} \geqslant A$, therefore: $$ \abs{f(\phi(y_n))\frac{\phi''(y_n^*)}{2}} \leqslant \frac{M\abs{\phi''(y_n^*)}}{2\abs{\phi(y_n)}^{1 + \epsilon}}. $$ We also have $\abs{\phi(y_n^*)} \geqslant A$. (That was the reason for the finicky change in the argument: we replaced $A$ with $A + \delta$, in order to get this inequality.) We can suppose that $A \geqslant 1$, therefore $\abs{\phi(y_n^*)} \geqslant 1$.

We now estimate $\abs{\phi''(y_n^*)}$ in terms of $\abs{\phi(y_n^*)}$.

I claim that if $\abs{\phi(y)} \geqslant 1$, then: $$ \abs{\phi''(y)} \leqslant \frac{(1 + \epsilon)2^{1 + 2\epsilon}}{\epsilon^2} \abs{\phi(y)}^{1 + 2\epsilon}. $$ (Obviously this is a much poorer bound than we got before for the case $\epsilon = 1$, but the multiplying constant doesn't matter.)

Proof: $\phi$ and $\phi''$ are odd functions, and $\phi'$ is an even function, so our previous expressions for these functions can be rewritten as: \begin{gather*} \abs{\phi(y)} = \frac{1}{(1 - \abs{y})^{\frac{1}{\epsilon}}} - \frac{1}{(1 + \abs{y})^{\frac{1}{\epsilon}}}, \\ \phi'(y) = \frac{1}{\epsilon}\left[ \frac{1}{(1 - \abs{y})^{\frac{1}{\epsilon} + 1}} + \frac{1}{(1 + \abs{y})^{\frac{1}{\epsilon} + 1}} \right] \geqslant \frac{2}{\epsilon}, \\ \abs{\phi''(y)} = \frac{1}{\epsilon}\left(\frac{1}{\epsilon} + 1\right)\left[ \frac{1}{(1 - \abs{y})^{\frac{1}{\epsilon} + 2}} - \frac{1}{(1 + \abs{y})^{\frac{1}{\epsilon} + 2}} \right]. \end{gather*} (The separate inequality for $\phi'(y)$ will be used shortly.) Therefore: \begin{gather*} \frac{(1 + \abs{y})^{\frac{1}{\epsilon}}} {(1 - \abs{y})^{\frac{1}{\epsilon}}} - 1 = (1 + \abs{y})^{\frac{1}{\epsilon}}\abs{\phi(y)} \geqslant 1, \ \ \therefore\ \frac{(1 + \abs{y})^{\frac{1}{\epsilon}}} {(1 - \abs{y})^{\frac{1}{\epsilon}}} \geqslant 2, \\ \therefore\ \frac{1}{(1 + \abs{y})^{\frac{1}{\epsilon}}} \leqslant \frac{1}{2(1 - \abs{y})^{\frac{1}{\epsilon}}}, \ \ \therefore\ \abs{\phi(y)} \geqslant \frac{1}{2(1 - \abs{y})^{\frac{1}{\epsilon}}}, \\ \therefore\ \abs{\phi''(y)} \leqslant \frac{1}{\epsilon}\left(\frac{1}{\epsilon} + 1\right) \frac{1}{(1 - \abs{y})^{\frac{1}{\epsilon} + 2}} \leqslant \frac{1}{\epsilon}\left(\frac{1}{\epsilon} + 1\right) (2\abs{\phi(y)})^{1 + 2\epsilon}, \end{gather*} as required.
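The claimed bound on $\abs{\phi''}$ can be spot-checked numerically (a Python sketch, with $\epsilon = 1/2$ as an arbitrary choice; by oddness it is enough to sample $y > 0$):

```python
# spot check of the bound |phi''(y)| <= C * |phi(y)|^(1 + 2*eps)
# whenever |phi(y)| >= 1, with C = (1 + eps) * 2^(1 + 2*eps) / eps^2
eps = 0.5
C = (1.0 + eps) * 2.0 ** (1.0 + 2.0 * eps) / eps ** 2

def phi(y):
    k = 1.0 / eps
    return (1.0 - y) ** -k - (1.0 + y) ** -k

def phi2(y):
    """Second derivative of phi, from the closed form above."""
    k = 1.0 / eps
    return k * (k + 1.0) * ((1.0 - y) ** (-k - 2.0) - (1.0 + y) ** (-k - 2.0))

checked = 0
for i in range(1, 999):
    y = i / 1000.0               # phi is odd, so y > 0 suffices
    if abs(phi(y)) >= 1.0:       # the bound is only claimed there
        assert abs(phi2(y)) <= C * abs(phi(y)) ** (1.0 + 2.0 * eps)
        checked += 1
print(checked, "points checked")
```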

Now, $\abs{n} \geqslant 1 + A/\delta \geqslant 2$, therefore $1 + 1/\abs{n} \leqslant 3/2$, and: \begin{gather*} \abs{f(\phi(y_n))\frac{\phi''(y_n^*)}{2}} \leqslant \frac{(1 + \epsilon)2^{1 + 2\epsilon}M\abs{\phi(y_n^*)}^{1 + 2\epsilon}} {2\epsilon^2\abs{\phi(y_n)}^{1 + \epsilon}} \\ \leqslant \frac{(1 + \epsilon)2^{1 + 2\epsilon}M((\abs{n} + 1)\delta)^{1 + 2\epsilon}} {2\epsilon^2(\abs{n}\delta)^{1 + \epsilon}} = \frac{(1 + \epsilon)2^{1 + 2\epsilon}M(\abs{n}\delta)^\epsilon} {2\epsilon^2} \left(1 + \frac{1}{\abs{n}}\right)^{1 + 2\epsilon} \\ \leqslant \frac{(1 + \epsilon)3^{1 + 2\epsilon}M(N(\delta)\delta)^\epsilon} {2\epsilon^2} = -\frac{(1 + \epsilon)3^{1 + 2\epsilon}M^2K\log\delta}{\epsilon^2}. \end{gather*} But, as was noted a moment ago: $$ \phi'(y) \geqslant \frac{2}{\epsilon} \ \ (\abs{y} < 1), $$ therefore: $$ y_{n + 1} - y_n = \phi^{-1}((n + 1)\delta) - \phi^{-1}(n\delta) \leqslant \frac{\epsilon\delta}{2} \ \ (n \in \Z). $$ This, together with $\sum_n (y_{n+1} - y_n) < 2$, implies that the sum in $\eqref{eq:L}$ is bounded above by $$ -\frac{(1 + \epsilon)3^{1 + 2\epsilon}M^2K\delta\log\delta}{\epsilon} $$ which tends to $0$ with $\delta$, as required. This completes the proof of $\eqref{eq:A}$.