PDE with integral


I came across the following integro-differential equation: $$\partial_t \phi(t,x) = a -(x+a)\phi(t,x)+ x \left[ \int_0^\infty \nu(x,y) \phi(t,y)\, dy \right]^2 $$ where $\nu(x,y)$ is the log-normal density in $y$ with median $x$: $$\nu(x,y)=\frac{1}{\sqrt{2\pi} \sigma y} \exp \left(-\frac{(\log y - \log x)^2}{2\sigma^2} \right)$$ Here $a$, $x$, and $t$ are all positive, and the initial condition is $\phi(0,x)=\phi_0$ for all $x$, with $0 < \phi_0 < 1$.

I'm solving this numerically (by discretizing the integral), but I need many values of $x$ and this takes a lot of computing time. I'm wondering whether there are methods to simplify the computation or improve the convergence, or, even better, whether there is a closed form!
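For concreteness, the direct discretization I'm using looks roughly like the sketch below (the grid, quadrature weights, and parameter values here are illustrative, not my actual code): quadrature on a truncated, log-spaced $y$-grid plus explicit Euler in $t$.

```python
import numpy as np

# Illustrative parameters
a, sigma, phi0 = 1.0, 0.3, 0.5

# Truncated integration grid, logarithmically spaced; phi is evaluated on it
y = np.geomspace(1e-3, 1e2, 400)
x = y
w = np.gradient(y)                     # crude trapezoid-style quadrature weights

# Kernel matrix nu[i, j] ~ nu(x_i, y_j): the dense O(N^2) object
nu = np.exp(-(np.log(y)[None, :] - np.log(x)[:, None])**2 / (2 * sigma**2)) \
     / (np.sqrt(2 * np.pi) * sigma * y[None, :])

phi = np.full_like(x, phi0)
dt = 1e-3
for _ in range(100):                   # integrate up to t = 0.1
    integral = nu @ (w * phi)          # discretized inner integral, O(N^2) per step
    phi = phi + dt * (a - (x + a) * phi + x * integral**2)
```

The matrix-vector product at every time step is the bottleneck.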

I also tried to find the limit as $t\rightarrow \infty$, but I cannot solve the resulting integral equation. Is there some way to do it?

Edit:

  • I tried to expand in powers of $x$ as suggested below, but the series is badly divergent because of the $e^{n^2}$-like terms and does not approximate the numerical solution well.

  • I thought of defining an operator $$A[\phi](x)=\int_0^\infty \nu(x,y) \phi(y)\, dy$$ and solving the equation as an ODE in $t$, $$\partial_t \phi = a - B \phi + x[A\phi]^2,$$ where $B\phi = (x+a)\phi$, and then interpreting the solution as an analytic series in the operators $B$ and $A^{-1}$. Does this have any chance of succeeding, and if so, how can we invert $A$?

Just to be clear, my main goal is to improve the speed and reliability of the numerical integration of the above equation (the computation is long and I have to truncate the integral at some point). I'd also be happy if some analytic properties of the solution could be found.


There are 2 best solutions below


While looking at the Wikipedia page for the log-normal distribution, I noticed that it has the property $$E[X^n] = e^{n\mu + n^2 \sigma^2 /2}.$$ I find this a bit suggestive: with your parametrization of the distribution ($\mu = \log x$), it implies $$\int_0^\infty \nu(x,y)y^n \, dy = e^{n\log x + n^2\sigma^2/2}= x^n e^{n^2 \sigma^2 /2},$$ meaning $x^n$ is, up to a scalar factor, an eigenfunction of the integral operator. I'm sure that can be used somehow, but my initial attempt to exploit it was just to introduce a series representation of the function, $$\phi(t,x) = \sum_{n = 0}^\infty c_n(t)x^n,$$ and insert it into the equation. This doesn't quite seem to work out, but it is at least an idea. Firstly,

$$\int_0^\infty \nu(x,y)\phi(t,y)\, dy = \sum_{n = 0}^\infty c_n(t)x^ne^{n^2 \sigma^2 /2},$$ then squaring the series and applying the Cauchy product formula, $$\left (\sum_{n = 0}^\infty c_n(t)x^ne^{n^2 \sigma^2 /2} \right )^2 = \sum_{n = 0}^\infty \left (\sum_{k = 0}^n c_{k}(t)e^{k^2\sigma^2/2}c_{n-k}(t)e^{(n-k)^2\sigma^2/2} \right )x^n,$$ followed by combining the exponentials (using $k^2 + (n-k)^2 = n^2 - 2k(n-k)$), $$\left (\sum_{n = 0}^\infty c_n(t)x^ne^{n^2 \sigma^2 /2} \right )^2 = \sum_{n = 0}^\infty \left (\sum_{k = 0}^n c_{k}(t)c_{n-k}(t)e^{(n^2 - 2k(n-k))\sigma^2/2}\right )x^n$$

Plugging everything into the original equation we get this mess

$$\sum_{n = 0}^\infty c_n'(t)x^n = a - (x + a)\sum_{n = 0}^\infty c_n(t)x^n + x \sum_{n = 0}^\infty \left (\sum_{k = 0}^n c_{k}(t)c_{n-k}(t)e^{(n^2 - 2k(n-k))\sigma^2/2}\right )x^n$$

where after collecting all the sums we finally obtain

$$\sum_{n = 0}^\infty c_n'(t)x^n = a - ac_0(t) + \sum_{n = 1}^\infty \left (\sum_{k = 0}^{n-1} c_{k}(t)c_{n-1-k}(t)e^{((n-1)^2 - 2k(n-1-k))\sigma^2/2} - ac_n(t) - c_{n-1}(t)\right ) x^n$$

Then maybe you could truncate this and set up the system of differential equations

$$c_n'(t) = f_n(c_0(t), c_1(t), \ldots, c_N(t)), \quad (0\leq n \leq N)$$

where at least you no longer have to deal with the integral, but the $e^{n^2\sigma^2/2}$ factors are kind of scary when it comes to convergence, so I'm skeptical as to whether it works.

EDIT: A few hours later, looking at this again, it makes a lot more sense to incorporate the exponential factors in the series expansion instead, i.e. starting from the expansion $$\phi(t,x) = \sum_{n = 0}^\infty c_n(t)e^{-n^2 \sigma^2 / 2}x^n$$

$$\sum_{n = 0}^\infty c_n'(t)e^{-n^2 \sigma^2 / 2}x^n = a - ac_0(t) + \sum_{n = 1}^\infty \left (\sum_{k = 0}^{n-1} c_{k}(t)c_{n-1-k}(t)- ac_n(t)e^{-n^2 \sigma^2 / 2} - c_{n-1}(t)e^{-(n-1)^2 \sigma^2 / 2}\right ) x^n,$$ which yields a somewhat simpler form of the associated differential equation (using $n^2 - (n-1)^2 = 2n-1$): $$c_n'(t) = e^{n^2\sigma^2/2}\sum_{k = 0}^{n-1} c_{k}(t)c_{n-1-k}(t) - ac_n(t) - c_{n-1}(t) e^{(2n - 1)\sigma^2/2}, \quad (n \geq 1)$$
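As a sanity check on this recursion, the truncated system can be integrated with a standard ODE solver. With $\phi(0,x)=\phi_0$ the initial data are $c_0(0)=\phi_0$ and $c_n(0)=0$ for $n\ge 1$, and $c_0$ decouples ($c_0' = a(1-c_0)$, so $c_0(t) = 1 + (\phi_0-1)e^{-at}$), which gives an exact reference. A sketch, with arbitrary small parameters, using the exponent $e^{(n^2-(n-1)^2)\sigma^2/2}=e^{(2n-1)\sigma^2/2}$ that falls out of dividing through by $e^{-n^2\sigma^2/2}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, sigma, phi0, N = 1.0, 0.3, 0.5, 8   # truncation order N is arbitrary

def rhs(t, c):
    dc = np.empty_like(c)
    dc[0] = a * (1.0 - c[0])           # n = 0: c_0' = a - a c_0
    for n in range(1, N + 1):
        conv = sum(c[k] * c[n - 1 - k] for k in range(n))
        dc[n] = (np.exp(n**2 * sigma**2 / 2) * conv
                 - a * c[n]
                 - c[n - 1] * np.exp((2 * n - 1) * sigma**2 / 2))
    return dc

c_init = np.zeros(N + 1)
c_init[0] = phi0
sol = solve_ivp(rhs, (0.0, 2.0), c_init, rtol=1e-8, atol=1e-10)
# sol.y[0] can be compared against the exact c_0(t) = 1 + (phi0 - 1) e^{-a t}
```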


A change of variables helps. Put $u=\log(x)$, $v=\log(y)$, $\phi(t, x) = f(t, u)$. Define $n(w) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(\frac{-w^2}{2\sigma^2}\right)$, so that $\nu(x, y) = n(u-v)/y$ and hence $\nu(x, y)\,\mathrm{d}y = n(u-v)\,\mathrm{d}v$. Then the PDE becomes:

$$\frac{\partial f(t, u)}{\partial t} = a - (e^u + a)f(t, u) + e^u \left[\int_{-\infty}^{\infty} n(u-v) f(t, v) \mathrm{d}v\right]^2$$

The integral is then just a convolution, which blurs $f$ on the length scale $\sigma$. This constrains how many samples to take in the $u$ direction: the spacing (uniform in $u$, not in $x$) should be no more than about $\sigma$. The spacing should also be no more than about 1 in order to represent the function $e^u$ properly. The question does not say so, but I'm guessing $\sigma \ll 1$, i.e. the former constraint is the binding one.
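In this form the per-step cost drops from $O(N^2)$ (a dense kernel matrix) to $O(N)$, since the convolution is just a Gaussian blur on a uniform $u$-grid. A minimal explicit-Euler sketch, assuming `scipy.ndimage.gaussian_filter1d` for the blur (the grid, time step, and `nearest` boundary handling are illustrative choices, not the only reasonable ones):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

a, sigma, phi0 = 1.0, 0.3, 0.5
u = np.linspace(-5.0, 5.0, 256)        # uniform grid in u = log x
du = u[1] - u[0]                       # spacing: keep below min(sigma, 1)
eu = np.exp(u)
f = np.full_like(u, phi0)

dt, T = 2e-3, 10.0                     # dt below ~exp(-u_max) for stability
for _ in range(int(T / dt)):
    # the inner integral is a Gaussian blur of width sigma (in grid units sigma/du)
    blur = gaussian_filter1d(f, sigma / du, mode='nearest')
    f = f + dt * (a - (eu + a) * f + eu * blur**2)
```

An FFT-based convolution would also work ($O(N\log N)$), but for a fixed Gaussian kernel the separable filter is simplest.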

To study the stability, let us make an approximation. Suppose that $f$ is already fairly blurry on the relevant scale, so that blurring it further makes no difference. Then we can approximate the integral by $f(t, u)$ itself. The right-hand side $a - (e^u + a)f + e^u f^2$ then factors, giving a nice simplification:

$$\frac{\partial f}{\partial t} = (e^u f - a)(f-1)$$

There are two steady states: $f=1$ and $f=ae^{-u}$. Differentiating the RHS with respect to $f$ at each of these points shows that the former is stable (attractive) if $e^u< a$ and the latter is stable if $e^u > a$. The question is kind enough to grant us $f<1$ initially, so we are in the basin of attraction for all $u$, and we can expect $f \rightarrow \min(ae^{-u}, 1)$ as $t \rightarrow \infty$.
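This prediction is easy to confirm for the blur-free approximation itself by integrating the scalar ODE at a fixed $u$ (a sketch; the solver tolerances and sample values of $u$ are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, phi0 = 1.0, 0.5

def long_time_limit(u):
    """Integrate f' = (e^u f - a)(f - 1) from f(0) = phi0 out to large t."""
    rhs = lambda t, f: (np.exp(u) * f - a) * (f - 1.0)
    return solve_ivp(rhs, (0.0, 50.0), [phi0], rtol=1e-9, atol=1e-12).y[0, -1]

# expected limit min(a e^{-u}, 1): f -> 1 where e^u < a, f -> a e^{-u} where e^u > a
```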

This approximate analysis also gives us the rate of convergence: the time step needs to be less than about $e^{-u}$. It's a pity that this depends on $u$, but all the interesting behaviour occurs in a fairly small $u$ interval.

Without the approximation, I find it difficult to guess what will happen. I suppose you'll just have to try!