Take any differential equation of the form
$$\frac{dy}{dx}=y^n$$
where $n > 1$. The solution $y(x)$ will reach infinity at a finite value of $x$.
Assuming $y_0 = 1$ in every case, here are a few examples:
$$\frac{dy}{dx}=y^2$$ has the solution $$y=\frac{-1}{x-1}$$
which reaches its asymptote at $x=1$.
The DE
$$\frac{dy}{dx}=y^{1.01}$$ has the solution $$y=\left(\frac{-100}{x-100}\right)^{100}$$
which reaches its asymptote at $x=100$.
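These asymptote locations are easy to sanity-check numerically. Here is a minimal sketch using plain Euler's method (the function name, step size, and cutoff value are my own choices, not anything canonical):

```python
def euler_blowup(n, dx=1e-4, cap=1e12):
    """Integrate dy/dx = y**n from y(0) = 1 with Euler's method and
    return the x at which y first exceeds `cap`."""
    x, y = 0.0, 1.0
    while y < cap:
        y += dx * y ** n  # Euler step: follow the slope y**n for dx
        x += dx
    return x

print(euler_blowup(2))  # lands just past 1, the asymptote of y = -1/(x-1)
```

Euler's method always produces a finite value at every step, so it can only climb toward the asymptote rather than reach it, but the climb is steep enough that $y$ passes $10^{12}$ very close to $x = 1$.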
If you take any DE of the form
$$\frac{dy}{dx}=y^{1 + \epsilon}$$
where $\epsilon$ is a very small number, the solution is
$$y=\left(\frac{-1}{\epsilon(x-\frac{1}{\epsilon})}\right)^{\epsilon^{-1}}$$
which eventually hits its vertical asymptote at the very large number $x = \frac{1}{\epsilon}$.
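For reference, all of these closed forms follow from the same separation of variables. For general $n > 1$ with $y(0) = 1$:
$$\int y^{-n}\,dy = \int dx \quad\Longrightarrow\quad \frac{y^{1-n}}{1-n} = x - \frac{1}{n-1},$$
so
$$y = \bigl(1-(n-1)x\bigr)^{-\frac{1}{n-1}},$$
which blows up at $x = \frac{1}{n-1}$. Taking $n = 2$ gives the asymptote at $x = 1$, and writing $n = 1 + \epsilon$ recovers the asymptote at $x = \frac{1}{\epsilon}$.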
This has always bugged me. Intuitively, one expects the solutions to these equations to grow rapidly and aggressively, much faster than the exponential function. But it is not at all obvious why they should reach an infinite value after a finite time, instead of, say, growing like the Ackermann function or some other function that grows rapidly but stays strictly finite.
Is there an intuitive argument for why these DEs are able to reach infinity in a finite timespan?
Your intuition that a solution to a DE like this should grow quickly but remain finite makes a lot of sense. One justification for this intuition is the estimate Euler's method would give: entirely finite and defined on the whole real line. To fix this inaccurate intuition, consider the following modification of Euler's method: instead of increasing $x$ by a constant amount each step, increase $x$ only far enough for $y$ to double. Since $y$ doubles with each jump, $y^n$ increases by a factor of $2^n$, so the size of the horizontal jump shrinks by a factor of $\frac{2}{2^n} = 2^{1-n}$ from one jump to the next. Since $n > 1$, this ratio is less than one, so the horizontal jumps form a convergent geometric series. As a result the $x$-position converges: $y$ doubles without bound while $x$ stays bounded.
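The geometric series above can be summed explicitly. Here is a minimal numeric sketch of that doubling scheme (the function name and the 60-doubling cutoff are my own choices). Since the slope is at least $y^n$ throughout each doubling, the horizontal distance needed for $y$ to double is at most $y / y^n = y^{1-n}$, so summing those jumps gives a finite upper bound on where the blow-up occurs:

```python
def blowup_bound(n, y0=1.0, doublings=60):
    """Upper-bound the blow-up point of dy/dx = y**n (n > 1) by summing
    the horizontal distance needed for y to double at each stage.
    Each jump is at most y**(1 - n), so the total is bounded by a
    geometric series with ratio 2**(1 - n) < 1."""
    x, y = 0.0, y0
    for _ in range(doublings):
        x += y ** (1.0 - n)  # jump shrinks by a factor of 2/2**n each time
        y *= 2.0
    return x

print(blowup_bound(2))     # partial sums of 1 + 1/2 + 1/4 + ... approach 2
print(blowup_bound(1.01))  # still finite, just a much slower-converging series
```

For $n = 2$ the jumps sum to $2$, a finite bound on the blow-up point; the true asymptote $x = 1$ comes even sooner, because the slope grows during each doubling, so each actual jump is shorter than the bound.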