I'm reading Lieb and Loss's *Analysis*, and in Section 2.2 they present a proof of Jensen's inequality; there's a step I don't quite understand. To set this up, suppose $J:\mathbb R\to\mathbb R$ is convex, and suppose that $f:\Omega\to\mathbb R$ belongs to $L^1(\Omega)$. We assume that $\mu(\Omega)<\infty$, where $\mu$ is a measure on $\Omega$.
Now, in the proof, they first show that the negative part $$[J\circ f]_-(x)=\begin{cases}0&\text{if }[J\circ f](x)\ge0,\\-J(f(x))&\text{otherwise,}\end{cases}$$ is bounded above by $c_1+c_2|f(x)|$ for some constants $c_1,c_2$, which proves that $[J\circ f]_-$ is integrable. But then they simply say that $\mu(\Omega)<\infty$ implies that $$\int_\Omega(J\circ f)(x)\,d\mu$$ is well-defined, although it might be $+\infty$.
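For context, my understanding of the convention being used (I'm stating this from standard measure theory, not from the book itself) is that for a measurable function $g$ one sets
$$\int_\Omega g\,d\mu \;:=\; \int_\Omega g_+\,d\mu \;-\; \int_\Omega g_-\,d\mu, \qquad g_\pm := \max\{\pm g,\,0\},$$
and this is declared well-defined (with values in $[-\infty,+\infty]$) whenever at least one of the two terms on the right is finite, since that rules out the indeterminate form $\infty-\infty$.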
However, I don't quite understand this final step: where exactly does $\mu(\Omega)<\infty$ enter the argument?