Apparent paradox in the use of certain integration formulas


background

Here is an identity we all know: $$ \int_{a}^{b}f(x)\,dx=\int_{a}^{b}f(a+b-x)\,dx $$ and this one, too: $$ \int_{a}^{b}f(x)\,dx=\int_{a}^{\frac{a+b}{2}}\bigl[f(x)+f(a+b-x)\bigr]\,dx. $$ Graphically, these formulas work because $f(x)$ and $f(a+b-x)$ are mirror images of each other on $[a,b]$, with axis of symmetry $x=\frac{a+b}{2}$.

So we can 'fold' the graph onto half the range and integrate the two pieces together there.
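As a quick numerical sanity check of the two identities (a Python sketch; the midpoint rule and the test function are my own illustrative choices, not part of the question):

```python
# Check the reflection identity and the 'fold' identity numerically,
# using a simple midpoint rule and an arbitrary test function.
import math

def midpoint(g, a, b, n=20_000):
    """Approximate the integral of g over [a, b] by the midpoint rule."""
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

a, b = 0.0, 2.0
f = lambda x: math.exp(-x) * math.sin(3 * x) + 1.0   # arbitrary test function

lhs  = midpoint(f, a, b)                              # integral of f(x)
rhs1 = midpoint(lambda x: f(a + b - x), a, b)         # integral of f(a+b-x)
rhs2 = midpoint(lambda x: f(x) + f(a + b - x), a, (a + b) / 2)

print(abs(lhs - rhs1) < 1e-6, abs(lhs - rhs2) < 1e-6)  # True True
```

Both differences vanish to within the quadrature error, as the symmetry argument predicts.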


the question

Now, what if we 'fold up' the range again?

Let $$b_{1}=\frac{a+b}{2},\qquad f_{1}(x)=f(x)+f(a+b-x).$$ Applying the same two identities on $[a,b_{1}]$ gives $$\int_{a}^{b_{1}}f_{1}(x)\,dx=\int_{a}^{b_{1}}f_{1}(a+b_{1}-x)\,dx,$$ $$\int_{a}^{b_{1}}f_{1}(x)\,dx=\int_{a}^{\frac{a+b_{1}}{2}}\bigl[f_{1}(x)+f_{1}(a+b_{1}-x)\bigr]\,dx.$$ These hold as well, and $\int_{a}^{b_{1}}f_{1}(x)\,dx=\int_{a}^{b}f(x)\,dx$, right?

Suppose the equations above make sense; if we keep repeating this procedure, what will happen?

First, we define some variables: $$b_{n}=\frac{a+b_{n-1}}{2},\qquad b_{0}=b,$$ $$f_{n}(x)=f_{n-1}(x)+f_{n-1}(a+b_{n-1}-x),$$ $$I_{n}=\int_{a}^{b_{n}}f_{n}(x)\,dx.$$

As a result, we get a sequence $\{I_{n}\}$, and as noted before, every term equals the original integral: $$I_{n}=\int_{a}^{b}f(x)\,dx\quad\text{for all }n.$$
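This invariance can be checked directly (a Python sketch; the Simpson rule and the test function $f(x)=x^2$ are illustrative choices of mine):

```python
# Iterate the fold b_n = (a + b_{n-1})/2 with
# f_n(x) = f_{n-1}(x) + f_{n-1}(a + b_{n-1} - x)
# (reflection about the midpoint of [a, b_{n-1}]) and verify that
# I_n = integral of f_n over [a, b_n] never changes.
def simpson(g, a, b, n=2048):
    """Composite Simpson rule on [a, b] (n must be even)."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(g(a + k * h) for k in range(2, n, 2))
    return s * h / 3

a, b = 0.0, 1.0
f = lambda x: x * x                     # test function: integral on [0,1] is 1/3

fn, bn = f, b
for n in range(6):
    I_n = simpson(fn, a, bn)
    print(n, round(I_n, 12))            # stays at about 1/3 for every n
    # fold: add the reflection of f_n about the midpoint of [a, b_n]
    fn = (lambda g, c: lambda x: g(x) + g(c - x))(fn, a + bn)
    bn = (a + bn) / 2
```

Each fold halves the interval but the printed value of $I_n$ never moves.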


here comes the paradox!

If we let $n$ tend to $\infty$, we get $$ \lim_{n\rightarrow\infty}I_{n}=\int_{a}^{b}f(x)\,dx. $$ Obvious, right?

But we also get $$ \lim_{n\rightarrow\infty}b_{n}=a, $$ which seems to mean $$ \lim_{n\rightarrow\infty}I_{n}=\lim_{n\rightarrow\infty}\int_{a}^{b_{n}}f_{n}(x)\,dx=\int_{a}^{a}f_{\infty}(x)\,dx=0. $$

Why? How could this happen?

Graphically it seems obvious: after infinitely many folds, the region between $f(x)$ and $y=0$ is squeezed onto a single point, so its area should tend to zero. But algebraically this should not happen.

How can this paradox be explained?

answer

Actually, in the graphical view it is not obvious that "after infinite folding" the area under the curve is zero.

Let's take a very simple example: a constant function, $f(x) = 1.$ Then $$f_1(x) = f(x) + f(a+b−x) = 1 + 1 = 2.$$ So in going from $\int_a^b f(x)\,dx$ to $\int_a^{b_1} f_1(x)\,dx$ we have cut the horizontal distance in half ($b_1 - a = \frac12(b - a)$) but we have doubled the height of the graph. Hence the area remains the same. Fold again and you will have $\frac14$ as much horizontal distance, but $4$ times as much height.

For non-constant functions you will usually find that $f_1 \neq 2f,$ that is, the function is not exactly doubled everywhere, but the average height of $f_1$ over $[a,b_1]$ will be twice the average height of $f$ over $[a,b].$ After $n$ folds you have a function $f_n$ whose average height over $[a,b_n]$ is $2^n$ times the average height of $f$ over $[a,b].$ In other words, no area is ever lost under the graph; it just gets piled up higher and closer to the $y$-axis with every fold.
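The doubling of the average height can be watched numerically (a Python sketch; the Simpson rule and the non-constant test function are my own illustrative choices):

```python
# After n folds, the average height of f_n over [a, b_n] should be
# 2**n times the average height of f over [a, b], since the area is
# conserved while the interval length is halved each time.
import math

def simpson(g, a, b, n=2048):
    """Composite Simpson rule on [a, b] (n must be even)."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(g(a + k * h) for k in range(2, n, 2))
    return s * h / 3

a, b = 0.0, 1.0
f = lambda x: math.sqrt(x + 1.0)            # arbitrary non-constant function
avg_f = simpson(f, a, b) / (b - a)          # average height of f on [a, b]

fn, bn = f, b
for n in range(1, 6):
    fn = (lambda g, c: lambda x: g(x) + g(c - x))(fn, a + bn)  # one more fold
    bn = (a + bn) / 2
    avg_fn = simpson(fn, a, bn) / (bn - a)
    print(n, avg_fn / avg_f)                # ratio is about 2**n
```

The ratio marches through $2, 4, 8, 16, 32$: the area piles up exactly as fast as the interval shrinks.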

In order to sketch a graph of the integral at the limit, $b_\infty=a,$ you would have to somehow plot a function $f_\infty$ whose value on $[a,a]$ is exactly $2^\infty$ times the average value of $f$ over $[a,b].$ There is no such function in real analysis, but if you assume (incorrectly) that there is such a function that you can integrate on $[a,a],$ you will conclude that the integral is zero.

So one mistake is assuming that there is any meaning at all to the integral $\int_a^{b_\infty} f_\infty(x)\,dx,$ either arithmetically or graphically. There is no graphical interpretation of what the integral would look like "after infinite folding."

But there is actually another mistake: you assume that you can evaluate a limit by jumping to the limiting case. Consider this false "proof": \begin{align} 1 &= \frac12 + \frac12 \\ &= \frac14 + \frac14 + \frac14 + \frac14 \\ &= \frac18 + \frac18 + \frac18 + \frac18 + \frac18 + \frac18 + \frac18 + \frac18 \\ & \qquad\vdots \\ &= \frac{1}{2^n} + \frac{1}{2^n} + \cdots + \frac{1}{2^n} \quad \text{($2^n$ terms)} \\ & \qquad\vdots \\ &= 0 + 0 + 0 + \cdots \\ &= 0. \end{align} The fallacy here is in the jump from a finite sum of non-zero terms to the sum $0+0+0+\cdots$. There's simply no justification for that step.
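The same point can be made concretely: for every finite $n$ the row sums to exactly $1$, even though the individual terms tend to $0$; the limit of the sums is not the sum of the limits. A quick check (illustrative Python):

```python
# For each finite n, 2**n copies of 2**(-n) sum to exactly 1 (powers of
# two are exact in binary floating point), yet the term 2**(-n) -> 0.
for n in range(1, 17):
    term = 2.0 ** -n
    assert sum([term] * 2 ** n) == 1.0
print("every finite row sums to 1, while the individual terms shrink to 0")
```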