Let $\{t_0,t_1,\ldots,t_{N}\}\subset [0,1]$ be the points of a Poisson process on $[0,1]$ with intensity $\lambda$, where for convenience we set $t_0=0$. What is the convergence rate of the maximum gap between consecutive points,
\begin{align*} \max_{i:t_i\in(0,1]} (t_i-t_{i-1}) = O_p(?) \end{align*}
as $\lambda\rightarrow \infty$?
Here is what I have done.
Let $\max \Delta t_i = \max_{i:t_i\in(0,1]} (t_i-t_{i-1})$.
We have $N\sim \operatorname{Pois}(\lambda)$, and, ignoring the restriction $t_N \le 1$, the intervals are independent with \begin{align*} t_i - t_{i-1} \sim \operatorname{Exp}(\lambda). \end{align*}
So, for a fixed $n$, independence gives \begin{align*} \def\bbP{{\mathbb{P}}} \def\eps{{\varepsilon}} \bbP\{ \max_{i\le n} \Delta t_i \le \eps \} &= \bbP \left\{ \Delta t_1 \le \eps, \ldots, \Delta t_n \le \eps\right\} \\ &= \left(1 - e^{-\lambda \eps}\right)^n, \end{align*} so $\bbP\{ \max_{i\le n} \Delta t_i > \eps \} = 1 - (1-e^{-\lambda \eps})^n$. (Note that $\{\max_{i\le n} \Delta t_i > \eps\}$ is the union of the events $\{\Delta t_i > \eps\}$, not their intersection, so we must pass through the complement.) Since $N$ is random rather than constant, we average over $\bbP\{N=n\}$ (treating $N$ as independent of the spacings, which is part of the heuristic): \begin{align*} \bbP\{ \max_{i\le N} \Delta t_i > \eps \} &= \sum_{n=0}^{\infty} \left[1 - \left(1-e^{-\lambda \eps}\right)^n\right] \bbP\{N=n\} \\ &= 1 - e^{-\lambda} \sum_{n=0}^{\infty} \frac{\left(\lambda\left(1-e^{-\lambda \eps}\right)\right)^n}{n!} \\ &= 1 - e^{\lambda \left(1-e^{-\lambda \eps}\right) - \lambda} \\ &= 1 - e^{-\lambda e^{-\lambda \eps}}. \end{align*}
For fixed $\eps>0$ we have $\lambda e^{-\lambda \eps} \rightarrow 0$ as $\lambda \rightarrow \infty$, so \begin{align*} \bbP\{ \max_{i\le N} \Delta t_i > \eps \} = 1 - e^{-\lambda e^{-\lambda \eps}} \rightarrow 0, \end{align*} and hence $\max_{i\le N} \Delta t_i \xrightarrow{\bbP} 0$ by the definition of convergence in probability. To extract a rate, fix $c \in \mathbb{R}$ and choose $\eps$ so that \begin{align*} \lambda e^{-\lambda \eps} &= e^{-c}, \\ \lambda \eps &= \log \lambda + c, \\ \eps &= \frac{\log \lambda + c}{\lambda}. \end{align*} Then $\bbP\{ \max_{i\le N} \Delta t_i > \eps \} = 1 - e^{-e^{-c}}$, which can be made arbitrarily small by taking $c$ large, suggesting the convergence rate \begin{align*} \max_{i:t_i\in(0,1]} (t_i-t_{i-1}) = O_p\left(\frac{\log \lambda}{\lambda}\right). \end{align*}
However, I have no idea how to handle the restriction $t_N \le 1$.
The convergence rate with the restriction is probably similar, but I don't know how to prove it rigorously.
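A small Monte Carlo check in Python (illustrative only, not a proof; the intensities and replication count are arbitrary choices) at least shows what scale to expect. Conditional on $N=n$, the points are the order statistics of $n$ i.i.d. $\operatorname{Unif}(0,1)$ draws, which makes the simulation easy:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_max_spacing(lam, reps=2000):
    """Monte Carlo estimate of E[max_i (t_i - t_{i-1})] for a rate-lam
    Poisson process on [0, 1], with t_0 = 0 and the last gap (t_N, 1] excluded."""
    out = np.empty(reps)
    for r in range(reps):
        n = rng.poisson(lam)                 # N ~ Pois(lam)
        if n == 0:                           # no points in (0, 1]: no gaps
            out[r] = 0.0
            continue
        t = np.sort(rng.uniform(size=n))     # given N = n: Unif(0,1) order statistics
        out[r] = np.diff(np.concatenate(([0.0], t))).max()
    return out.mean()

for lam in [100, 1000, 10000]:
    est = mean_max_spacing(lam)
    print(lam, est, est * lam / np.log(lam))  # last ratio slowly approaches 1
```

The ratio in the last column drifts toward $1$ (with a $\gamma/\log\lambda$-sized excess), consistent with a $\frac{\log\lambda}{\lambda}$ scale.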
Thanks,
I don't have a solution, but I have a couple of ideas:
1. Conditional on $N$, the distribution of $(t_1,\ldots,t_N)$ is the same as that of the order statistics of a random sample of size $N$ from the $\operatorname{Unif}(0,1)$ distribution.
Now, let $M_N$ be the maximum spacing from a random sample from the Unif(0,1) distribution of size $N$.
$$P[N \times M_N -\log{N}<x|N]\rightarrow e^{-e^{-x}}$$ $$E[N \times M_N -\log{N}|N]\rightarrow \gamma =0.5772157\ldots$$ $$Var[N \times M_N|N]\rightarrow \frac{\pi^2}{6}$$
So, writing $M = M_N$ with random $N$, the unconditional mean is $$E[M]\approx E\left[\frac{\gamma+\log{N}}{N}\right]\approx \frac{\gamma+\log{\lambda}}{\lambda}$$
and the unconditional variance is
$$Var[M] \approx \frac{\pi^2}{6}E\left[\frac{1}{N^2}\right]+Var\left[\frac{\gamma+\log{N}}{N}\right] \approx \frac{\pi^2}{6}\frac{1}{\lambda^2}+\frac{\gamma+\log{\lambda}-1}{\lambda^3}$$

Reference: Devroye, Luc. "Laws of the iterated logarithm for order statistics of uniform spacings." The Annals of Probability (1981): 860-867.
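The conditional Gumbel limit above is easy to probe numerically. Here is a minimal numpy sketch (the sample size and replication count are arbitrary choices) that simulates $n M_n - \log n$ and compares its mean and variance with $\gamma$ and $\pi^2/6$:

```python
import numpy as np

rng = np.random.default_rng(1)

def centered_max_spacing(n, reps=5000):
    """Simulate n*M_n - log(n), where M_n is the maximum of the n+1 spacings
    induced by n iid Unif(0,1) points."""
    vals = np.empty(reps)
    for r in range(reps):
        u = np.sort(rng.uniform(size=n))
        gaps = np.diff(np.concatenate(([0.0], u, [1.0])))  # all n+1 spacings
        vals[r] = n * gaps.max() - np.log(n)
    return vals

vals = centered_max_spacing(2000)
print(vals.mean(), vals.var())  # compare with gamma ~ 0.5772 and pi^2/6 ~ 1.6449
```

Both statistics should land close to the Gumbel values for moderate $n$, which is what makes the unconditional approximations above plausible.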
2. This seems like a way to eliminate the restriction that $t_N<1$.
Let $N^*=\left\lceil \lambda+\sqrt{\lambda}\log{\lambda}\right\rceil$.
Then,
$$P\left[\max_{i\le N}\Delta t_i>x\right]$$ $$=P\left[\max_{i\le N}\Delta t_i>x,N<N^*\right]+P\left[\max_{i\le N}\Delta t_i>x,N\ge N^*\right]$$ $$\le P\left[\max_{i\le N^*}\Delta t_i>x,N<N^*\right]+P\left[\max_{i\le N}\Delta t_i>x,N\ge N^*\right]$$ $$\le P\left[\max_{i\le N^*}\Delta t_i>x\right]+P\left[N\ge N^*\right]$$
$N$ has mean and variance $\lambda$ and is asymptotically normal, so $P[N \ge N^*]\approx 1-\Phi(\log{\lambda})\rightarrow 0$ where $\Phi$ is the standard normal cdf.
For large $\lambda$, $\lambda-\sqrt{\lambda}\log{\lambda}<N<\lambda+\sqrt{\lambda}\log{\lambda}$ with probability approaching 1.
Therefore, $P\left[\max_{i\le N}\Delta t_i>x\right]$ is sandwiched between the corresponding probabilities with $N$ replaced by these deterministic lower and upper bounds.
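The truncation step relies only on the Poisson tail being light at $\sqrt{\lambda}\log{\lambda}$ from the mean, which can be checked directly. A minimal numpy sketch (intensities and replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def coverage(lam, reps=200000):
    """P[ |N - lam| < sqrt(lam) * log(lam) ] for N ~ Pois(lam), by simulation."""
    n = rng.poisson(lam, size=reps)
    return np.mean(np.abs(n - lam) < np.sqrt(lam) * np.log(lam))

for lam in [10, 100, 1000]:
    print(lam, coverage(lam))  # approaches 1 as lam grows
```

Since $(N^*-\lambda)/\sqrt{\lambda}=\log{\lambda}\rightarrow\infty$, the coverage tends to $1$, so the error terms $P[N\ge N^*]$ (and its lower-tail counterpart) are asymptotically negligible.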