I am reading a proof concerning a conditional density $f(t\mid\mathcal{G}_n)$, where $\mathcal{G}_n=\sigma(T_1,T_2,\dots,T_n)$ is the sub-$\sigma$-algebra generated by random variables $T_1,\dots,T_n$. In the proof, this conditional density is expressed in terms of the probability of $T_{n+1}$ lying in an infinitesimal interval $ds$ around $t$: $$f(t\mid\mathcal{G}_n)=\frac{\mathbb{P}(T_{n+1}\in[t,t+ds]\mid\mathcal{G}_n)}{ds}$$ Intuitively this makes sense to me, but I am not sure how to understand it rigorously. I know that a conditional probability given a $\sigma$-algebra is the same as the conditional expectation of an indicator function, but how do we make sense of this equality and of the left-hand side?
Edit: $f(t\mid\mathcal{G}_n)$ is the conditional density of event times in a point process given the first $n$ points.
You can define $f(t\mid\mathcal G_n)$ as $$ \lim_{k\to \infty}\frac{\mathbb P\left(t<T_{n+1}\leqslant t+\delta_k\mid\mathcal G_n\right) }{\delta_k}, $$ where $(\delta_k)$ is a deterministic sequence converging to $0$, provided that the limit exists and is independent of the choice of the sequence $(\delta_k)$.
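To see this definition in action, here is a small numerical sketch (my own illustration, not from the answer): for a homogeneous Poisson process with rate $\lambda$, conditionally on $\mathcal G_n$ the next arrival satisfies $T_{n+1}=T_n+\mathrm{Exp}(\lambda)$, so the conditional density is $f(t\mid\mathcal G_n)=\lambda e^{-\lambda(t-T_n)}$ for $t>T_n$. The difference quotient from the answer then converges to this density as $\delta_k\to 0$. The rate `lam` and the observed value of `T_n` below are arbitrary choices for the demonstration.

```python
import math

lam = 2.0   # assumed rate of the Poisson process
T_n = 1.0   # assumed observed value of T_n (fixes the conditioning)
t = 1.5     # point at which we evaluate the conditional density

def cond_cdf(s):
    """P(T_{n+1} <= s | G_n) when T_{n+1} - T_n ~ Exp(lam)."""
    return 1.0 - math.exp(-lam * (s - T_n)) if s > T_n else 0.0

# Exact conditional density f(t | G_n) for this process.
exact = lam * math.exp(-lam * (t - T_n))

# The difference quotient P(t < T_{n+1} <= t + delta | G_n) / delta
# approaches `exact` along any sequence delta_k -> 0.
for delta in [1e-1, 1e-2, 1e-3, 1e-4]:
    quotient = (cond_cdf(t + delta) - cond_cdf(t)) / delta
    print(f"delta = {delta:.0e}:  quotient = {quotient:.6f}  (exact = {exact:.6f})")
```

Since the increment here is independent of $\mathcal G_n$, the conditional probability reduces to an ordinary one, which is what makes the difference quotient computable in closed form; in general the quotient is a $\mathcal G_n$-measurable random variable for each $\delta_k$, and the limit is taken almost surely.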