Understanding solution to $y' = y$ and exponential distribution


My Understanding:

I would derive the exponential random variable as follows:

I consider an experiment consisting of a continuum of trials on an interval $[0,t)$. The outcome of the experiment takes the form of an ordered $n$-tuple, for some $n \in \mathbb{N}$, of distinct points on the interval. Every outcome is equally likely, and I measure the size of the set of tuples of $n$ distinct points by $I_n$:

$$ I_n = \int_0^{t} \int_0^{x_{n}} \int_0^{x_{n-1}} \cdots \int_0^{ x_2 } dx_1 dx_{2} dx_{3} \dots dx_{n-1} dx_{n} = \frac{ t^n } { n! }$$
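The value $I_n = t^n/n!$ is just the volume of the ordered region $0 < x_1 < \cdots < x_n < t$, which is easy to check by Monte Carlo: sample points uniformly in the cube $[0,t]^n$ and count how often they land in increasing order (a quick Python sketch; the function name is mine):

```python
import math
import random

def simplex_volume_mc(t, n, samples=200_000, seed=0):
    """Monte Carlo estimate of the volume of the ordered region
    0 < x_1 < x_2 < ... < x_n < t inside the cube [0, t]^n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        xs = [rng.uniform(0, t) for _ in range(n)]
        if all(xs[i] < xs[i + 1] for i in range(n - 1)):
            hits += 1
    # fraction of the cube that is ordered, times the cube's volume t^n
    return (hits / samples) * t ** n

t, n = 2.0, 3
print(simplex_volume_mc(t, n))      # close to t^n / n!
print(t ** n / math.factorial(n))
```

The ordered fraction of the cube is $1/n!$ by symmetry (all $n!$ orderings are equally likely), which is exactly where the $n!$ in $I_n$ comes from.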

Since the size of the set where no success occurs is $I_0 = 1$, the ratio $I_0 / \sum_n I_n$ gives the probability that no event occurs, and its complement yields the c.d.f.:

$$ \begin{align} P(\text{no successful trial in } [0,t)) &= \big(\sum_{n = 0}^{\infty} I_n(t)\big)^{-1} = e^{-t} \\ P(X \leq t) &= 1 - e^{-t} \\ \end{align} $$

Where I Lose Intuition:

It's easy to arrive at a power series solution to the ODE:

$$y' = y \text{ with } y(0) = 1$$ $$ \boxed{ y = \sum_{n = 0}^{\infty} \frac{ t^n }{ n ! } = e^{t}} $$
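As a numerical sanity check, the partial sums of this series converge rapidly to $e^t$ (a quick Python sketch):

```python
import math

def exp_partial_sum(t, n_terms):
    """Partial sum sum_{n=0}^{n_terms-1} t^n / n! of the power-series solution."""
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= t / (n + 1)   # t^n/n!  ->  t^{n+1}/(n+1)!
    return total

t = 1.5
for k in (2, 5, 10, 20):
    print(k, exp_partial_sum(t, k))
print(math.exp(t))
```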

My problem is that I do not understand the role of each term in the expansion. Substitution by power series is an attractive idea, but I have no deep intuition as to why we'd do this and hence, I'm having trouble putting it all together to understand the solution.

Question

How do I interpret the power series solution of the ODE? Hopefully this will allow me to reconcile my understanding of both these processes.

I am not willing to accept this as pure coincidence.

There are 3 answers below.

Answer 1

A Poisson process with rate $\lambda$ is a limit of a Bernoulli process which makes an attempt to jump at each time $k/n,k \in \mathbb{N}$ and succeeds with probability $\lambda/n$, as $n \to \infty$. Here $n$ is also in $\mathbb{N}$ but you must have $n \geq \lambda$ for this to make any sense.

So the probability of no successes in $[0,k/n]$ is given by the Binomial($k$,$\lambda/n$) distribution to be $(1-\lambda/n)^k$. You now set $k=t n$ for some $t>0$ and send $n \to \infty$. This constraint on $k$, or something very similar, is necessary so that the limit will be in $(0,1)$. It is also physically meaningful, since it is just setting the total time to wait.

The limit you get is $e^{-\lambda t}$, which is the survival function of the exponential distribution. The intuition comes from going back to $(1-\lambda/n)^{tn}$ and recognizing something like compound depreciation: each attempted jump depreciates your probability of having had no successes by another factor of $(1-\lambda/n)$, which in the continuum limit behaves like an exponential. The power series $e^{-\lambda t}=\sum_{k=0}^\infty (-\lambda t)^k/k!$ has no direct probabilistic significance, but you can compare it to the binomial expansion of $(1-\lambda/n)^{tn}$ (the terms differ, but the terms of the latter converge to the terms of the former).
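You can watch this limit numerically: compute $(1-\lambda/n)^{tn}$ for increasing $n$ and compare with $e^{-\lambda t}$ (a quick Python sketch):

```python
import math

def p_no_success(lam, t, n):
    """P(no success) for the discretized process: t*n Bernoulli trials,
    each succeeding with probability lam/n."""
    k = int(t * n)
    return (1 - lam / n) ** k

lam, t = 2.0, 1.0
for n in (10, 100, 10_000, 1_000_000):
    print(n, p_no_success(lam, t, n))
print(math.exp(-lam * t))   # the limiting survival probability
```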

Answer 2

I assume what you are really asking is why one would arrive at the power series in the first place, beyond simply verifying that it satisfies the differential equation. The fact that $$ y=\sum\limits_{n=0}^{\infty }{\frac{{{t}^{n}}}{n!}} $$ solves the differential equation arises from the Picard iteration process:

$$\begin{align} y(t) &= 1 + \int_0^t y(s)\,ds = 1 + t + \int_0^t (t-s)\,y(s)\,ds = \ldots \\ &= \sum_{j=0}^{n} \frac{t^j}{j!} + \int_0^t \frac{(t-s)^n}{n!}\,y(s)\,ds \end{align}$$

The integral term vanishes as $n \to \infty$, so we are left with the power series as the solution. Note that for a general equation $y' = f(y)$ the Picard iteration reads

$$ \begin{align} y_1(t) &= y_0 + \int_0^t f\big(y_0\big)\,ds \\ y_2(t) &= y_0 + \int_0^t f\big(y_1(s)\big)\,ds \\ &\ \,\vdots \\ y_{n+1}(t) &= y_0 + \int_0^t f\big(y_n(s)\big)\,ds \end{align} $$

So, to answer your question about the terms of the power series: the $n$-th term can be viewed as the contribution of the $n$-th step of the iteration to the solution; in particular, each term arises from an $n$-fold iterated integral of $y_0$ over the interval $[0,t]$.
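A small Python sketch of this iteration, representing each $y_n$ by its power-series coefficients (term-by-term integration raises each power by one), shows the coefficients converging to $1/n!$:

```python
import math

def picard_step(coeffs):
    """One Picard step for y' = y, y(0) = 1:
    y_{k+1}(t) = 1 + integral_0^t y_k(s) ds,
    with y_k stored as power-series coefficients [c_0, c_1, ...]."""
    # term-by-term integration: c_i * t^i  ->  c_i/(i+1) * t^{i+1},
    # then the initial condition contributes the constant 1
    return [1.0] + [c / (i + 1) for i, c in enumerate(coeffs)]

y = [1.0]                      # y_0 = 1
for _ in range(6):
    y = picard_step(y)
print(y)                       # coefficients approach 1/n!
print([1 / math.factorial(n) for n in range(len(y))])
```

Each step of the iteration contributes exactly one new term of the series and leaves the earlier terms untouched, which is the sense in which the $n$-th term is the $n$-th step's contribution.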

Answer 3

We can try the following approach maybe... So, trying to solve $y'(t)=y(t)$ with $y(0)=1$. Suppose we don't know the solution. However, we know that the system acts as if being transported by an impulse or by energy generated by the derivative $d/dt$. Try to look at this as if it was a physics problem. Suppose you know your system is described by $E=1$ classically, with $E$ being energy. This is trivial. But... When going over to quantum theory, energy $E$ becomes an operator, and so $E=1$ becomes $\hat{E}\phi(t)=\phi(t)$ with $\hat{E}$ being an operator now, let's say for simplicity $\hat{E}=d/dt$. And $\phi(t)$ is the probability density amplitude. So what do we have now? We have $y'=y$... More precisely: $\phi'(t)=\phi(t)$.

So what do we do in physics when we can't solve something? And we can't solve anything exactly, actually... What do we do? We seek approximate solutions. And what about the approximate solutions? Well... If I keep infinitely many terms, an approximate solution becomes an exact one. It's approximate only because I keep finitely many terms.

For instance, classically a relativistic particle moves so that $E^2-p^2-m^2=0$. If there's a potential present, then $E^2-p^2-m^2=J$. Here $J$ is the source function. We can't solve this exactly... So what do we do? We solve without the source term and get a zeroth order approximation. A solution for a free particle. We then put that zeroth order approximation back in and solve again. We get the first order approximation. And so on. Put the first order approximation back in and repeat.

So what if we try to solve $y'=y$? Well, $y'$ is the trivial part of the equation, and $y$ is the source part. So, as in physics: first solve the trivial part,

$y'(t)=0$

The solution is obviously

$y_0=1$

Put this back in as a source:

$y'(t)=1$

This solves to

$y_1=1+t$

Put this back in:

$y'(t)=1+t$

This solves to

$y_2=1+t+\frac{t^2}{2}$

Put this back in:

$y'(t)=1+t+\frac{t^2}{2}$

This solves to

$y_3=1+t+\frac{t^2}{2} +\frac{t^3}{2\times 3}$

Put this back in:

$y'(t)=1+t+\frac{t^2}{2} +\frac{t^3}{2\times 3}$

This solves to

$y_4=1+t+\frac{t^2}{2} +\frac{t^3}{2\times 3}+\frac{t^4}{2\times 3\times 4}$

So our final solution is the $y_n$ as $n$ becomes infinitely large, the solution with infinitely many terms. You already know what it is, I can tell XD
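Here is this "put it back in" loop as a short Python sketch (names are mine); evaluating each refinement at $t=1$ shows the values approaching $e$:

```python
import math

def refine(rhs_coeffs):
    """Given the current right-hand side y'(t) = sum c_k t^k,
    integrate term by term and add the constant 1 (from y(0) = 1)."""
    return [1.0] + [c / (k + 1) for k, c in enumerate(rhs_coeffs)]

def eval_poly(coeffs, t):
    return sum(c * t ** k for k, c in enumerate(coeffs))

y = [1.0]                          # y_0 = 1, the "free" solution
for n in range(1, 8):
    y = refine(y)                  # put y back in as the source and solve
    print(n, eval_poly(y, 1.0))    # approaches e = 2.71828...
```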

Do notice that each $y_n$ is built from the $I_n$ you began with. More precisely, $y_n = \sum_{k=0}^{n} I_k$. So you already have the probabilistic explanation. Is the iterative method the physical answer you were looking for? Each term $I_n$ is a refinement generated by all the previous terms...

We started with a free particle: $E=0$, or $y'=0$, for the free-particle solution $y$. Then we said the free particle interacts with a constant source $1$. The particle is no longer free. We want to solve $y'=y$ now, but we solve $y'=y_0$ instead. What does that mean? It means only the free particle $y_0$ couples to the source $1$. The energy is now the free energy of a free particle, plus the coupling energy. The free energy is zero because we started with $E=0$. So the new energy is just the coupling: $\hat{E}y=1\times y_0$. The wave function $y_1$ is now the free-particle function $y_0=1$ plus the function of the "coupling" of the free particle to the external source $J=1$. So what happens next? Now not only the free particle couples to the source; the already-coupled part also couples again to the source! And so on... So the particle itself is its own source, in some sense...

Now, I could go on. But... The procedure described here is exactly what we do with Feynman path integrals when solving approximately in, say, quantum electrodynamics! So, you really have to take a peek at quantum electrodynamics! It gets a bit complicated there, though, because you'll need the partition function and the quantum vacuum and so on, because we fill all the states available inside the quantum vacuum with some probability, creating $n$-tuples for every natural $n$... You need the full-blown quantum field theory here, with creation and annihilation operators that create those $n$-tuples of virtual particles in the quantum vacuum. But that's a good thing! How so? Well, it explains exactly what you asked! In a physical way.

So what are the $I_n=t^n/n!$ terms? I believe these are called "virtual particles". More exactly, the probability density amplitude of interaction of a particle with vacuum in a potential...

So, if this does answer your question, then you need quantum field theory and its specialization quantum electrodynamics. You can google it, there are free books online for both. You can start here I guess https://en.wikipedia.org/wiki/Quantum_field_theory#Principles

There's this renormalization issue though... Ask another question again soon I guess! :D

Does this answer your question? It's both the closest intuitive explanation and the one that is the farthest from any intuition as well! :D

Oh, what we used here is just the Picard iterative approximation method... :D

Best regards! Good luck and chime in! :) o/