Suppose I have a nonhomogeneous Poisson process with a known rate function $r(t)$ over a time window $[0,T]$. Now suppose I use this process to generate events and perfectly measure the arrival time of each event, i.e. I observe a finite set of event times $0 < t_1 < t_2 < \ldots < t_N < T$.
How would I calculate the probability of this outcome occurring?
Here's what I have so far:
The way I tried to go about this was to discretize the domain into $K+1$ evenly spaced time points $0, dt, 2\,dt, \ldots, K\,dt$, spaced at intervals of $dt=T/K$. Note that for any $t \in (0,T)$, the probability of an event occurring in the window $(t, t+dt]$ is approximately $r(t)\, dt$.
Now, assume we estimate each $t_i$ by the closest grid point $f(i)\,dt$, where $f: \{1,\ldots,N\} \to \{0,\ldots,K\}$. Also assume that $K$ is large enough that $f$ is injective. Then the probability of the continuous outcome can be estimated as the probability of a sequence of $K+1$ weighted coin flips:
$$P_K \left( \{t_i\}_{i=1}^N\right) = \prod_{j=0}^K q_K(j) \implies \log \left( P_K \left( \{t_i\}_{i=1}^N \right) \right) = \sum_{j=0}^K \log q_K(j),$$ where $q_K(j)=r(j\,dt)\,dt$ if $\exists i$ s.t. $f(i)=j$, and $q_K(j)=1-r(j\,dt)\,dt$ otherwise.
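For concreteness, here is a minimal numerical sketch of this discretization (the rate $r(t)=1+t$, the window $T=2$, and the event times are my own illustrative choices, not anything from the problem):

```python
import math

def log_P_K(rate, T, events, K):
    """Log of the discretized likelihood: one Bernoulli factor per grid point."""
    dt = T / K
    hit = {round(t / dt) for t in events}   # grid indices f(i) nearest each t_i
    total = 0.0
    for j in range(K + 1):
        p = rate(j * dt) * dt               # P(event in the j-th slot)
        total += math.log(p) if j in hit else math.log1p(-p)
    return total

# Example: r(t) = 1 + t on [0, 2] with two observed events.
rate = lambda t: 1.0 + t
lpk = log_P_K(rate, T=2.0, events=[0.3, 1.2], K=20_000)
```

Note that $\log P_K$ itself drifts to $-\infty$ as $K$ grows, since each event factor contributes a $\log dt$ term; this is a hint that the limiting object is a density rather than a probability.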
My issue is that I want to turn this sum into an integral as $K\to\infty$. However, since the set of event times is finite and therefore has measure $0$, it seems that the non-event terms $\log(1-r(j\,dt)\,dt) \approx -r(j\,dt)\,dt$ would dominate, so the sum would always converge to $$-\int_0^T r(t)\, dt$$ regardless of when or how many of these events occurred. But clearly some outcomes of this experiment must be more likely than others.
How should I go about solving this problem?
The likelihood (a density over the ordered times, not a probability) of observing the timestamps $0<t_1<\ldots<t_N<T$ under the inhomogeneous Poisson process with rate function $r$ is
$$p(\{t_1,\ldots,t_N\}) = \left(\prod_{i=1}^N r(t_i)\right) \exp\bigg(-\int_{0}^T r(s)ds\bigg)$$
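As a quick sanity check (my own example, with an assumed rate $r(t)=1+t$ on $[0,2]$), the formula is straightforward to evaluate numerically, approximating the integral with the trapezoid rule:

```python
import math

def log_likelihood(rate, T, events, n_grid=10_000):
    """log p = sum_i log r(t_i) - integral_0^T r(s) ds (trapezoid rule)."""
    dt = T / n_grid
    integral = sum(0.5 * (rate(k * dt) + rate((k + 1) * dt)) * dt
                   for k in range(n_grid))
    return sum(math.log(rate(t)) for t in events) - integral

rate = lambda t: 1.0 + t
ll = log_likelihood(rate, T=2.0, events=[0.3, 1.2])
```

For this linear rate the integral is $\int_0^2 (1+s)\,ds = 4$, so the log-likelihood is $\log 1.3 + \log 2.2 - 4$.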
You can obtain it as follows:
For each $n$ and $t$, let $p(n,t)$ denote the probability that there are $n$ events in $[0,t]$.
We have the relationship
$$ \begin{aligned} p(n,t+dt) &= p(n,t)p(\text{"no event in $[t,t+dt]$"}) + p(n-1,t) p (\text{"one event in $[t,t+dt]$"})\\ &= p(n,t)(1-r(t)dt) + p(n-1,t) r(t)dt\\ \end{aligned} $$
So
$$ \frac{p(n,t+dt)-p(n,t)}{dt} = r(t) \big[p(n-1,t) - p(n,t) \big] $$
This uses the fact that, if the process is simple (no simultaneous events), then for small $dt$ the probability of an event happening in a small interval of size $dt$ starting at $t$ is $r(t)\,dt$, up to $o(dt)$ terms.
For $n=0$, set $p(-1,t)=0$ (there cannot be $-1$ events), so the right-hand side reduces to $-r(t)\,p(0,t)$. Taking $dt\rightarrow 0$ we have $\frac{dp(0,t)}{dt} = -r(t)p(0,t)$,
which solves as $$p(0,t) = \exp\left(-\int_{0}^t r(s)ds\right) .$$
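This "no event" probability is easy to check by simulation. Below is a sketch using the standard thinning (Lewis–Shedler) method to sample the process, with an assumed rate $r(t)=1+t$ on $[0,1]$, bounded above by $r_{\max}=2$:

```python
import math
import random

def sample_inhom_poisson(rate, rate_max, T, rng):
    """Thinning: propose homogeneous Poisson(rate_max) arrivals on [0, T],
    accept each proposal at time t with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)
        if t > T:
            return times
        if rng.random() < rate(t) / rate_max:
            times.append(t)

rng = random.Random(0)
rate = lambda t: 1.0 + t
n_runs = 100_000
empty = sum(not sample_inhom_poisson(rate, 2.0, 1.0, rng) for _ in range(n_runs))
p0_hat = empty / n_runs   # Monte Carlo estimate of p(0, 1)
```

Here $\int_0^1 (1+s)\,ds = 3/2$, so the empirical fraction of empty runs should be close to $e^{-3/2} \approx 0.223$.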
For $n$ greater than $0$, you can use induction and similar reasoning. Concretely, the probability density that your events happened exactly at the times they did, and that nothing happened between them, is the product of the "no event" factors over the gaps, which multiplies out to $\exp(-\int_0^T r(s)\,ds)$, times a factor $r(t_i)$ for each event; this yields the formula above.
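For reference, the induction gives $p(n,t) = \frac{\Lambda(t)^n}{n!} e^{-\Lambda(t)}$ with $\Lambda(t) = \int_0^t r(s)\,ds$, i.e. the count on $[0,t]$ is Poisson with mean $\Lambda(t)$. A small sketch (my own, with an assumed rate $r(t)=1+t$) that Euler-integrates the recursion above and compares it against that Poisson pmf:

```python
import math

def counts_pmf_ode(rate, T, n_max, steps=100_000):
    """Euler-integrate dp(n,t)/dt = r(t) * (p(n-1,t) - p(n,t)),
    starting from p(n,0) = 1 if n == 0 else 0."""
    dt = T / steps
    p = [1.0] + [0.0] * n_max
    for k in range(steps):
        r = rate(k * dt)
        prev = p[:]
        p[0] = prev[0] * (1.0 - r * dt)
        for n in range(1, n_max + 1):
            p[n] = prev[n] + r * dt * (prev[n - 1] - prev[n])
    return p

rate = lambda t: 1.0 + t
p = counts_pmf_ode(rate, T=1.0, n_max=4)   # here Lambda(1) = 3/2
```

The entries of `p` should match $e^{-3/2}\,(3/2)^n / n!$ up to the $O(dt)$ discretization error of the Euler scheme.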