I am currently reading the book "Counting Processes and Survival Analysis" by Fleming and Harrington, and I am stuck on the proof of Proposition 1.4.2 (p. 34):
Proposition 1.4.2:
Let $T$ and $U$ be a failure time and a censoring time, respectively, and let $X=\min(T,U)$. If
$$\Lambda(t)=\int_{0}^{t} (1-F(u-))^{-1} \,dF(u),$$ then the right-continuous process
$$A(t)=\int_{0}^{t} I_{(X\geq u)} \,d\Lambda(u)$$ is predictable with respect to the filtration
$$\mathcal{F}_{t}=\sigma\{ I_{(X\leq u,\,\delta=0)},\; I_{(X\leq u,\,\delta=1)} : 0\leq u\leq t\}.$$
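(As a side remark of my own, not from the book: the indicator $I_{(X\geq u)}$ simply truncates the domain of integration at $X$, so the process $A$ can be written explicitly as the cumulative hazard stopped at the observed time,
$$A(t)=\int_{0}^{t} I_{(X\geq u)}\,d\Lambda(u)=\Lambda(t\wedge X).)$$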
The predictable σ-algebra is defined as the σ-algebra generated by all rectangles of the form
$$\{0\}\times A,\; A\in\mathcal{F}_{0} \quad\text{and}\quad (a,b]\times A,\; 0\le a\le b\le\infty,\; A\in\mathcal{F}_{a}.$$
The proof starts by defining:
$$A_{mn}(t)=\Bigl(\Lambda\bigl(\tfrac{n+1}{2^{m}}\bigr)-\Lambda\bigl(\tfrac{n}{2^{m}}\bigr)\Bigr)\,I_{(\frac{n}{2^{m}},\infty)}(t)\,I_{(X\geq \frac{n}{2^{m}})}$$
and
$$A_{m}(t)=\sum_{n=0}^{\infty}A_{mn}(t).$$
If each $A_{mn}$ is predictable and $A_{m}\to A$ pointwise as $m\rightarrow\infty$, then $A$ must be predictable as a limit of predictable processes. I have understood why $A_{mn}$ is predictable, but the second part, the convergence $A_{m}\rightarrow A$ as $m\rightarrow\infty$, seems nontrivial and is not explained thoroughly.
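As a sanity check of my own (not from the book), here is a quick numerical experiment for the continuous case $F=\mathrm{Exp}(\lambda)$, where $\Lambda(t)=\lambda t$ and the indicator $I_{(X\geq u)}$ truncates the integral at $X$, so the limit is $A(t)=\Lambda(t\wedge X)$. All concrete numbers ($\lambda$, $X$, $t$) are illustrative choices:

```python
# Numerical check that A_m(t) -> A(t) as m -> infinity, in the
# continuous case F = Exp(lam), where Lambda(t) = lam * t.

lam = 0.7   # hazard rate of the exponential failure time (assumed)
X = 1.3     # a fixed realisation of min(T, U) (assumed)
t = 2.0     # time point at which A is evaluated (assumed)

def Lam(s):
    """Cumulative hazard of Exp(lam): Lambda(s) = lam * s."""
    return lam * s

# Limit process: the indicator I(X >= u) truncates the integral at X,
# so A(t) = Lambda(min(t, X)).
A_exact = Lam(min(t, X))

def A_m(m, t, X):
    """Dyadic approximation
    A_m(t) = sum_n (Lam((n+1)/2^m) - Lam(n/2^m)) I(t > n/2^m) I(X >= n/2^m)."""
    total, n = 0.0, 0
    while True:
        u = n / 2**m
        if u >= t or u > X:   # both indicators must equal 1 for the term to count
            break
        total += Lam((n + 1) / 2**m) - Lam(u)
        n += 1
    return total

for m in (2, 5, 10, 15):
    print(m, abs(A_m(m, t, X) - A_exact))   # error is at most lam / 2^m
```

In this example the sum telescopes to $\Lambda\bigl(\frac{N+1}{2^m}\bigr)$, where $N$ is the largest $n$ with $\frac{n}{2^m}\le X$, so the error is bounded by $\lambda/2^m$ and vanishes as $m\to\infty$; this is the dyadic refinement that I suspect drives the convergence in general, but I do not see how to make it rigorous.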
How does this convergence occur?