Why does the proof of linearity of integration for simple functions require non-negative coefficients?


In Folland's *Real Analysis* there is the following proposition, in which parts (a) and (b) together give linearity of integration for non-negative combinations:

> **Proposition.** Let $\phi$ and $\psi$ be simple functions in $L^+$.
> (a) If $c \geq 0$, then $\int c\phi = c \int \phi$.
> (b) $\int (\phi + \psi) = \int \phi + \int \psi$.

Why do we require that $c \geq 0$? I tried to think of a counterexample showing that the identity can fail for $c < 0$, but I'm not seeing one.

1 Answer

By definition of $L^+$, every function $f \in L^+$ takes only non-negative values, and the integral we have at this point has domain $L^+$. In particular, $\int$ is undefined on any function taking negative values. So the term $\int cf$ only makes sense if $cf \geq 0$. If $c < 0$ and $f$ is positive somewhere, then $cf$ takes negative values and lies outside $L^+$, so there is no counterexample to look for: the expression $\int cf$ is simply not yet defined.
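
Concretely, writing a simple function in $L^+$ in standard representation as $\phi = \sum_{j=1}^{n} a_j \chi_{E_j}$ with $a_j \geq 0$ (this is Folland's setting; the computation below is just a sketch of part (a)), the definition $\int \phi \, d\mu = \sum_j a_j \mu(E_j)$ gives, for $c \geq 0$,

$$\int c\phi \, d\mu = \sum_{j=1}^{n} (c\,a_j)\,\mu(E_j) = c \sum_{j=1}^{n} a_j\,\mu(E_j) = c \int \phi \, d\mu,$$

where the first equality is a legitimate application of the definition precisely because $c\,a_j \geq 0$ for every $j$, i.e. because $c\phi$ is again a simple function in $L^+$.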
I'm sure the author will soon define an integral that handles more functions, i.e. one whose domain contains $L^+$. That "advanced" integral (the Lebesgue integral of real- or complex-valued functions) will be a genuinely linear map, not just linear for non-negative combinations.
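
For what it's worth, here is a sketch of how negative scalars are handled once the integral is extended via positive and negative parts, $\int f = \int f^+ - \int f^-$ (the standard construction; I'm assuming homogeneity on $L^+$ for the non-negative scalar $|c|$, which is part (a) extended to all of $L^+$). For $c < 0$ one has $(cf)^+ = |c|\,f^-$ and $(cf)^- = |c|\,f^+$, so

$$\int cf = \int (cf)^+ - \int (cf)^- = |c|\int f^- - |c|\int f^+ = -|c|\left(\int f^+ - \int f^-\right) = c \int f.$$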