Clarification in stochastic integration


In the book *Stochastic Processes* by R. F. Bass, when constructing the stochastic integral, he defines, for a predictable process $Y$,
$$\|Y\|_2 = \left(\mathbb E \int_0^{\infty} Y_t^2 \,\text{d}\langle M\rangle_t \right)^{1/2},$$
and he claims that if we have a predictable $H$ with
$$\mathbb E\left( \int_0^{\infty} H_s^2 \,\text{d}\langle M \rangle_s\right)<\infty,$$
we can approximate it by a sequence $H^n_s$ of processes of the form (10.3) in the book, namely
$$X_s(\omega) = \sum_{j=1}^{J} K_{j}(\omega)\, 1_{(a_j,b_j]}(s)$$
with each $K_j$ bounded (that is, finite sums of basic processes of the form $Y_s(\omega)=K(\omega)\,1_{(a,b]}(s)$).

His justification is that he previously proved (Lemma 10.1 in the book) that the predictable $\sigma$-field is generated by processes of this form. If I understand correctly, what he means is that there exists a sequence $H^n$ converging to $H$ pointwise, and that this implies $\|H-H^n\|_2 \to 0$. But why is this true? Is he using some sort of dominated convergence? Thanks for the help!
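To make the question concrete, here is what I imagine the dominated convergence step would look like. This is my own sketch, not from the book; I am assuming $M$ is a square-integrable martingale (so $\mathbb E\,\langle M\rangle_\infty < \infty$), that $H$ is bounded by some constant $C$, and that the approximating sequence can be chosen with $|H^n| \le C$ and $H^n \to H$ pointwise:

```latex
% Sketch under my assumptions: |H|, |H^n| \le C everywhere,
% H^n_s(\omega) \to H_s(\omega) for all (s,\omega), and
% \mathbb{E}\,\langle M \rangle_\infty < \infty.
\[
  \|H - H^n\|_2^2
  = \mathbb{E} \int_0^{\infty} (H_s - H^n_s)^2 \,\mathrm{d}\langle M \rangle_s
  \;\longrightarrow\; 0,
\]
% since (H_s - H^n_s)^2 \le 4C^2 pointwise, the constant dominating
% function 4C^2 is integrable with respect to the finite measure
% \mathrm{d}\langle M \rangle_s \,\mathrm{d}\mathbb{P} (total mass
% \mathbb{E}\,\langle M \rangle_\infty < \infty), and the integrand
% tends to 0 pointwise, so dominated convergence applies.
```

What I don't see is how to get such a pointwise convergent, uniformly bounded sequence directly from the generation statement of Lemma 10.1.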