Examples where mathematical objects "come down from infinity" or blow up instantaneously?


I recently learned about a surprising fact about the coalescent, which is a model used in population genetics to describe the genealogical relationships among individuals drawn from large populations. Mathematically, the coalescent is a continuous-time Markov process $(\Pi_t)_{t \geqslant 0}$ taking values in the space of all partitions of $\{1,2,3,\dots\}$. The process is such that $\Pi_0 = \{\{1\},\{2\},\{3\},\dots\}$ and $\{\{1,2,3,\dots\}\}$ is absorbing. The only transition that occurs is as follows: $\pi \to \pi'$ at rate 1 if and only if $\pi'$ is obtained by merging exactly two blocks/pieces of $\pi$. Individuals $1,2,3,\dots$ are in the same piece of the partition at some time $t$ if and only if they've found a shared genetic ancestor (say, a parent or a grandparent) $t$ units of time in the past. The coalescent is the limiting process for the Wright-Fisher, Moran, and other forward-in-time models of evolutionary dynamics when the population size is taken to infinity (and time is rescaled to suit).
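For intuition, here is a minimal Python sketch (names are my own) of the finite-$n$ version of the process described above: while $k$ blocks remain, each of the $\binom{k}{2}$ pairs merges at rate 1, so the next merger occurs after an $\mathrm{Exponential}\big(\binom{k}{2}\big)$ waiting time and the merging pair is uniform.

```python
import random

# Sketch of the finite-n Kingman coalescent (function names are illustrative).
# Start from n singleton blocks; while k blocks remain, each of the C(k,2)
# pairs merges at rate 1, so the next merger occurs after an
# Exponential(C(k,2)) waiting time, and the merging pair is uniform.
def simulate_coalescent(n, seed=0):
    rng = random.Random(seed)
    blocks = [{i} for i in range(1, n + 1)]  # Pi_0: all singletons
    t = 0.0
    history = [(t, len(blocks))]             # (time, number of blocks)
    while len(blocks) > 1:
        k = len(blocks)
        t += rng.expovariate(k * (k - 1) / 2)
        i, j = sorted(rng.sample(range(k), 2))
        blocks[i] |= blocks.pop(j)           # merge two uniformly chosen blocks
        history.append((t, len(blocks)))
    return history

hist = simulate_coalescent(1000)
print(hist[-1])  # (time to the most recent common ancestor, 1)
```

Running this for moderate $n$ already shows the signature behavior: the block count crashes almost immediately, and nearly all of the total time is spent on the last few mergers.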

The fact is this: $\Pr(\# \Pi_t < \infty) = 1$ for all $t > 0$. (Wow!) In words, the number of lineages that have yet to share ancestry is finite $\textit{at any time in the past}$. Coalescence is so strong that, in the first instant of time, almost all individuals in the sample "find each other" through shared ancestry.
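One way to see why this is plausible: starting from $k$ blocks, the total merger rate is $\binom{k}{2}$, so the expected time to drop from $k$ to $k-1$ blocks is $1/\binom{k}{2}$, and $\sum_{k \geqslant 2} 1/\binom{k}{2} = \sum_{k \geqslant 2} \frac{2}{k(k-1)}$ telescopes to $2$. A quick numeric check of the partial sums (illustrative Python):

```python
from math import comb

# Expected time to go from k blocks to k-1 blocks is 1/C(k,2); the total
# expected time "down from infinity" is sum_{k>=2} 2/(k(k-1)), which
# telescopes to 2*(1 - 1/N) over k = 2..N and hence converges to 2.
N = 10**6
partial = sum(1 / comb(k, 2) for k in range(2, N + 1))
print(partial)  # approaches 2 as N grows
```

So the expected time for infinitely many lineages to collapse to one is finite (indeed, at most 2), which is the heart of "coming down from infinity."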

I find this result, i.e., that the coalescent "comes down from infinity," striking. I am curious if/where in mathematics there are other examples where "infinite things" become finite instantaneously, or vice versa. The other example that comes to mind is in the behavior of the solution to the heat equation $u_t - \Delta u = 0$ for $x \in \mathbf{R}^n, t > 0$ with $u(x,0) = g(x)$. If $\Phi$ is the fundamental solution to the heat equation without initial data, then

$$u(x,t) = \int_{\mathbf{R}^n} \Phi(x-y,t) g(y) \text{d} y$$

solves the initial value problem. Assume $g \in C_b(\mathbf{R}^n)$ to ensure finiteness of the integral. If $g \geqslant 0$ is strictly positive in some region of positive measure, then $u(x,t)> 0$ for all $t > 0$. Although $g$ could be zero outside of this region, the solution will be positive everywhere and will remain positive for all time. The initial data are propagated instantaneously (everywhere in $\mathbf{R}^n$!). The fact $g > 0$ in some region (however small) means that the density of heat is positive out to infinity.

Results like these, where infinite things collapse and finite things become infinite in some sense, are nice. It is surprising just how quickly things unfold in the preceding examples. If anyone can provide examples in the spirit of the ones here, I would appreciate it!


There are 2 best solutions below


A silly little example, but one that I think already illustrates a useful point: the function $f_h(z)=e^z+he^{-z}$ has no complex zeros at all for $h=0$, but for any $h\not=0$ it has infinitely many.

(True, by Rouché's theorem, or by direct computation, any given compact subset of $\mathbb C$ is zero-free for $h$ in a sufficiently small neighborhood of $0$...)
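To make the example concrete: $e^z + h e^{-z} = 0$ amounts to $e^{2z} = -h$, so for $h > 0$ the zeros are $z_k = \tfrac12\ln h + i\big(\tfrac{\pi}{2} + k\pi\big)$, one for each $k \in \mathbb Z$. A quick numeric check (my own sketch):

```python
import cmath

# f_h(z) = e^z + h e^{-z}; zeros solve e^{2z} = -h, so for h > 0 they are
# z_k = (ln h)/2 + i*(pi/2 + k*pi), one for each integer k.
def f(z, h):
    return cmath.exp(z) + h * cmath.exp(-z)

h = 0.25
for k in range(-2, 3):
    z_k = cmath.log(h) / 2 + 1j * (cmath.pi / 2 + k * cmath.pi)
    print(k, abs(f(z_k, h)))  # ~ 0 at machine precision for every k
```

The vertical line $\Re z = \tfrac12\ln h$ carries the whole infinite family of zeros, and it escapes to $\Re z = -\infty$ as $h \to 0^+$, consistent with any fixed compact set becoming zero-free.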


"infinite things" become finite instantaneously, or vice versa

is covered by any theorem of the form

$f(x)$ is finite iff $x\in S$

with $S$ open, closed, half-open etc., according to topological taste. @NoahSchweber mentioned intervals of convergence, but the idea above generalizes beyond interval-based facts to cases where $S$ need not be $1$-dimensional.

An easy example, concerning $z\in\Bbb C^n$: $\int_{(\Bbb R^+)^n} e^{-z\cdot x}\,d^nx$ converges iff $\Re z_i>0$ for each $1\le i\le n$ (i.e. $z\in(\Bbb R^++i\Bbb R)^n$).
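Since the integrand factorizes over coordinates, the integral equals $\prod_{i} 1/z_i$ on that open domain. A rough numeric check for $n=2$, truncating each coordinate to $[0,R]$ with a midpoint rule ($R$, $N$, and the sample $z$ are illustrative choices of mine):

```python
import numpy as np

# For Re z_i > 0 the integrand factorizes, so
#   integral over (R^+)^n of exp(-z.x) d^n x = product_i 1/z_i.
# Rough check for n = 2: midpoint rule on [0, R] in each coordinate.
z1, z2 = 1.0 + 2.0j, 0.5 - 1.0j
R, N = 40.0, 4000
dx = R / N
x = np.linspace(0.0, R, N, endpoint=False) + dx / 2  # midpoints
I1 = np.sum(np.exp(-z1 * x)) * dx   # ~ 1/z1
I2 = np.sum(np.exp(-z2 * x)) * dx   # ~ 1/z2
approx = I1 * I2
print(approx, 1 / (z1 * z2))        # the two values agree closely
```

Moving any $z_i$ to $\Re z_i \le 0$ makes the corresponding one-dimensional factor diverge, which is the "finite iff $z \in S$" dichotomy with $S$ an open polydomain.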

For an important, more interesting example (only $2$-dimensional over $\Bbb C$, but with a richer use of inequality types), consider the series $\sum_{n\ge1}z^nn^{-s}$ with $z,\,s\in\Bbb C$, which has a famous analytic continuation. Note that:

  • If $|z|<1$, the series converges for all $s\in\Bbb C$;
  • If $|z|=1$ but $z\ne1$, the series converges iff $\Re s>0$;
  • If $z=1$, the series converges iff $\Re s>1$;
  • If $|z|>1$, the series diverges for all $s\in\Bbb C$.

Or equivalently:

  • If $\Re s<0$, the series converges iff $|z|<1$;
  • If $\Re s\in[0,\,1)$, the series converges if $|z|<1$;
  • If $\Re s>1$, the series converges iff $|z|\le1$.

Note that:

  • the information above does not completely specify when $\sum_{n\ge1}z^nn^{-s}$ converges, let alone how the analytic continuation on $\Bbb C^2$ works;
  • iff is not the same as if;
  • some of these conditions use strict inequalities, others slack ones.
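Partial sums illustrate a few of the cases above (an illustrative sketch; the sample $(z,s)$ pairs are my own, kept real for simplicity):

```python
import math

# Partial sums of sum_{n>=1} z^n n^{-s} at sample (z, s) pairs.
def partial_sum(z, s, N):
    return sum(z**n * n**(-s) for n in range(1, N + 1))

print(partial_sum(0.5, -3.0, 300))    # |z|<1, Re s<0: converges (to 26)
print(partial_sum(1.0, 2.0, 100000))  # z=1, Re s>1: tends to pi^2/6
print(partial_sum(1.0, 0.5, 10**4))   # z=1, Re s<=1: grows without bound
```

The first case shows how the geometric factor $z^n$ overwhelms any polynomial growth from $n^{-s}$, while at $z=1$ the line $\Re s = 1$ is exactly where finiteness switches on.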

I leave it to you to ponder the conditions under which the divergence is to an extended real number $\pm\infty$ or the extended complex number $\infty$, as opposed to exhibiting other kinds of divergence, such as oscillation in the real and/or imaginary part.