Suppose there is a sequence of $N$ numbers $x_1, x_2, x_3, \dots, x_N$.
There are then gaps $|x_i - x_j|$, and the minimum gap:
$\delta(N) = \min_{i \ne j \le N} \{ | x_i - x_j | \}$.
Let the mean gap be normalized to $1/N$.
If the sequence of numbers $x_1, x_2, x_3, \dots, x_N$ is random, the minimum gap is
$\delta(N) \simeq \frac{1}{N^2};$
this is the birthday problem!
How can one calculate this $1/N^2$ explicitly?
Or, where is the calculation shown?
See the problem described here: https://youtu.be/n8IMb2mW6TM?t=2396
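For what it's worth, a quick Monte Carlo sketch (Python; the helper name `avg_min_gap` is just illustrative) is consistent with the $1/N^2$ scaling:

```python
import random

def avg_min_gap(n, trials=200):
    """Average minimum gap over several samples of n uniform points in [0,1]."""
    total = 0.0
    for _ in range(trials):
        xs = sorted(random.random() for _ in range(n))
        total += min(b - a for a, b in zip(xs, xs[1:]))
    return total / trials

# If the minimum gap scales like 1/N^2, then N^2 * avg_min_gap(N)
# should hover around a constant as N grows.
for n in (100, 200, 400):
    print(n, n * n * avg_min_gap(n))
```

The products $N^2 \cdot \delta(N)$ indeed stay roughly constant as $N$ doubles, but I would like an actual calculation.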

You can do this using a sort of continuous version of the combinatorics technique sometimes called "stars and bars".
Let's just take the $x_k$ to be $N$ independent random numbers chosen uniformly in $[0,1]$; that will (at least to a good enough approximation) ensure that the mean gap is $1/N$.
Now the probability that the minimum gap is at least $\delta$ is just $(1-(N-1)\delta)^N$.

Imagine distributing within the unit interval not $N$ mere points, but the left-hand endpoints of $N$ intervals of length $\delta$. The probability we seek is the probability that no two of these intervals overlap, which we should think of as the $N$-dimensional volume of the region in $[0,1]^N$ where that's true. When this happens we can imagine shrinking all those intervals to zero, moving each $x_k$ to $x_k - m_k\delta$, where $m_k$ is the number of points to the left of $x_k$; this transforms our $N$-dimensional region, in a volume-preserving way, to precisely $[0,1-(N-1)\delta]^N$, which has volume $(1-(N-1)\delta)^N$.
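This closed form is easy to sanity-check by simulation; here is a minimal Python sketch (the helper name `p_min_gap_at_least` is mine, not from anywhere in particular):

```python
import random

def p_min_gap_at_least(n, delta, trials=20000):
    """Empirical probability that n uniform points in [0,1] all sit
    at pairwise distance >= delta."""
    hits = 0
    for _ in range(trials):
        xs = sorted(random.random() for _ in range(n))
        if min(b - a for a, b in zip(xs, xs[1:])) >= delta:
            hits += 1
    return hits / trials

n, delta = 10, 0.02
exact = (1 - (n - 1) * delta) ** n   # the volume computed above
print(exact, p_min_gap_at_least(n, delta))   # the two numbers should be close
```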
Now, when $N$ is large and $\delta$ is small compared with $1/N$, this is approximately $\exp(-N^2\delta)$. (Ugly details below.)
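(As an aside, not needed for the argument: the exact tail formula also yields the expected minimum gap in closed form, using $\mathbb{E}[\delta(N)] = \int_0^\infty \Pr[\delta(N)\ge t]\,dt$ and the substitution $u = 1-(N-1)t$:)

```latex
\mathbb{E}[\delta(N)]
  = \int_0^{1/(N-1)} \bigl(1 - (N-1)t\bigr)^N \, dt
  = \frac{1}{N-1} \int_0^1 u^N \, du
  = \frac{1}{(N-1)(N+1)}
  = \frac{1}{N^2 - 1} \simeq \frac{1}{N^2}.
```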
So, for instance, taking $\delta = c/N^2$: the probability that the minimum gap is at least $c/N^2$ is approximately $e^{-c}$, so the minimum gap is of order $1/N^2$, with high probability neither much larger nor much smaller.
Here are (kinda) the ugly details of the $\exp(-N^2\delta)$ thing, for those who care:
Let's first of all work with $f(N,\delta):=(1-N\delta)^N$ instead of $g(N,\delta):=(1-(N-1)\delta)^N$. The latter lies between $(1-N\delta)^N$ and $(1-(N-1)\delta)^{N-1}$, so if we have good bounds for $f$ then we can use them to bound $g$.
Now, rather than actually doing the calculations, I refer the reader to this other math.stackexchange.com answer and the comment below it, which use the Taylor series for the logarithm to show that $(1-x/n)^n$ differs from $\exp(-x)$ by at most $(1-a)^{-2} \frac{x^2}{2n}\exp(-x)$ provided $x/n<a$. That is, taking $x=N^2\delta$ and $n=N$: $(1-N\delta)^N$ differs from $\exp(-N^2\delta)$ by at most $\frac{(1-a)^{-2}}2 N^3\delta^2 \exp(-N^2\delta)$ when $N\delta<a$. (And when $N\delta>a$, both are nice and small.) This bound is plenty good enough to justify the claims above.
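A quick numerical look at the approximation (my own Python check, not part of the referenced answer), in the regime that actually matters, $\delta = c/N^2$:

```python
import math

# With delta = c/N^2, (1 - N*delta)^N = (1 - c/N)^N should approach
# exp(-N^2*delta) = exp(-c) as N grows, with the gap shrinking roughly like 1/N.
c = 1.0
for N in (10, 100, 1000):
    delta = c / N**2
    f = (1 - N * delta) ** N
    approx = math.exp(-(N**2) * delta)
    print(N, f, approx, abs(f - approx))
```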