Backwards PDE with Gaussian Kernel


I have to solve a backwards diffusion PDE of the following form:

$$\partial_t u-\frac{1}{2}(\partial_x u)^2+\frac{\sigma^2}{2}\partial_{xx}u=0;\qquad u(x=0,t)=c(t).$$

After applying the Cole-Hopf transformation $$u(x,t)=-\sigma^2\ln v(x,t)$$ I obtain

$$\partial_t v+\frac{\sigma^2}{2}\partial_{xx}v=0;\qquad v(x=0,t)=\exp\!\left(-\frac{c(t)}{\sigma^2}\right).$$
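In case it is useful, here is a quick sketch of why the transformation works (nothing beyond the chain rule; I assume $\sigma$ is constant). With $v=e^{-u/\sigma^2}$,

$$\partial_t u=-\sigma^2\frac{\partial_t v}{v},\qquad \partial_x u=-\sigma^2\frac{\partial_x v}{v},\qquad \partial_{xx}u=-\sigma^2\left(\frac{\partial_{xx}v}{v}-\frac{(\partial_x v)^2}{v^2}\right),$$

so the original PDE becomes

$$-\sigma^2\frac{\partial_t v}{v}-\frac{\sigma^4}{2}\frac{(\partial_x v)^2}{v^2}-\frac{\sigma^4}{2}\frac{\partial_{xx}v}{v}+\frac{\sigma^4}{2}\frac{(\partial_x v)^2}{v^2}=0.$$

The quadratic terms cancel, and multiplying by $-v/\sigma^2$ leaves $\partial_t v+\frac{\sigma^2}{2}\partial_{xx}v=0$; the boundary condition $u(0,t)=c(t)$ becomes $v(0,t)=e^{-c(t)/\sigma^2}$.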

I know how to solve this equation, but the given solution states that, with the usual convolution, we obtain

$$v(x,t)=-x\int_{0}^{\infty}\frac{\exp\!\left(-\frac{c(t+\tau)}{\sigma^2}\right)}{\tau}\,G(x,\tau)\,d\tau,$$

where $G$ is the Gaussian kernel

$$G(x,\tau)=\frac{1}{\sqrt{2\pi\sigma^2\tau}}\exp\!\left(-\frac{x^2}{2\sigma^2\tau}\right).$$

Where does the factor $-\frac{x}{\tau}$ in the final integral come from?
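For what it is worth, differentiating the kernel in $x$ gives

$$\partial_x G(x,\tau)=-\frac{x}{\sigma^2\tau}\,G(x,\tau),$$

so the factor looks like it could come from a spatial derivative of $G$ rather than from a plain convolution of the boundary data, but I do not see how that derivative enters the formula.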