I am having a difficult time understanding this paragraph from Example II.2.1 of *Statistical Models Based on Counting Processes*:
> Let $T$ denote the time of some random event. The indicator process $(I(T\leq t))$ is a càdlàg process, equal to zero until time $T$, then jumping to the value 1 at time $T$ (if the event ever occurs), and then staying at that value. One easily checks that the indicator process $(I(T\leq t))$ is adapted if and only if $T$ is a stopping time. **If $X$ is a stochastic process and $T$ is a stopping time, it is not self-evident that $X(T)$ is indeed a random variable, i.e., that $X(T(\omega),\omega)$ is measurable as a function of $\omega\in\Omega$.**
The part in bold is not clear to me at all! Why is this the case?
If your stochastic process $X$ (real-valued, say) is defined on a probability space $(\Omega,\mathcal F, P)$ and if $\psi:(\omega,t)\mapsto X_t(\omega)$ is $\mathcal F\otimes\mathcal B_{[0,\infty)}/\mathcal B_{\Bbb R}$-measurable, then $\omega\mapsto X_{T(\omega)}(\omega)$ is a random variable, being the composition $\psi\circ\phi$, where $\phi(\omega):=(\omega,T(\omega))$ is evidently $\mathcal F/(\mathcal F\otimes\mathcal B_{[0,\infty)})$-measurable.
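To make that "evidently" explicit, here is the routine verification of the measurability of $\phi$, checked on measurable rectangles (which generate the product $\sigma$-algebra):

$$\phi^{-1}(A\times B)=A\cap T^{-1}(B)\in\mathcal F\qquad\text{for all }A\in\mathcal F,\ B\in\mathcal B_{[0,\infty)},$$

since a stopping time $T$ is in particular an $\mathcal F$-measurable random variable, so $T^{-1}(B)\in\mathcal F$. Because the rectangles $A\times B$ generate $\mathcal F\otimes\mathcal B_{[0,\infty)}$, this shows $\phi$ is $\mathcal F/(\mathcal F\otimes\mathcal B_{[0,\infty)})$-measurable, and hence the composition $X_T=\psi\circ\phi$ is $\mathcal F$-measurable.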
If you know more — namely, that $X$ is progressively measurable — then the same sort of reasoning shows that $X_T$ is $\mathcal F_T$-measurable.
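A sketch of that "same sort of reasoning", under the standing assumption that $X$ is progressively measurable (i.e., for each $t\geq 0$ the restriction of $\psi$ to $\Omega\times[0,t]$ is $\mathcal F_t\otimes\mathcal B_{[0,t]}$-measurable): fix $t\geq 0$. Since $T$ is a stopping time, $T\wedge t$ is $\mathcal F_t$-measurable, so the map $\omega\mapsto(\omega,T(\omega)\wedge t)$ is $\mathcal F_t/(\mathcal F_t\otimes\mathcal B_{[0,t]})$-measurable; composing it with the restricted process shows that $X_{T\wedge t}$ is $\mathcal F_t$-measurable. Consequently, for every $B\in\mathcal B_{\Bbb R}$,

$$\{X_T\in B\}\cap\{T\leq t\}=\{X_{T\wedge t}\in B\}\cap\{T\leq t\}\in\mathcal F_t,$$

which is exactly what it means for $X_T$ to be $\mathcal F_T$-measurable (on $\{T<\infty\}$).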