Does anyone have a clever method for finding the solution to this PDE? $$0=\frac{\partial}{\partial x}\left((Ax+C(x-y))p(x,y)\right)+\frac{\partial}{\partial y}\left((By+C(y-x))p(x,y)\right)+D\left[\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+2E\frac{\partial^2}{\partial x\partial y}\right]p(x,y)$$ for $p(x,y)$ given that $A,B,C,D,E$ are all positive constants with $0<E<1$. It is trivial for $E=0$ but I don't know how to deal with the cross derivative.
Stationary solution multidimensional Fokker-Planck equation/PDE
Asked by user3353819 (https://math.techqa.club/user/user3353819/detail)
This is a linear Fokker-Planck equation, so the stationary solution is a Gaussian. Its only parameters, the means (which are zero here) and the covariances, can be found by brute force, but the following method (see, for example, van Kampen, p. 212) is somewhat easier.
We will change the notation a bit for convenience and write the equation as $$ \sum_{i,j=1}^2 \left[ \frac{ \partial }{ \partial x_i } (A_{ij} x_j P) + B_{ij} \frac{\partial^2 P}{\partial x_i \partial x_j} \right] = 0. \qquad (1) $$ Comparing with the original equation, the drift and diffusion matrices are $$ \mathbf A = \begin{pmatrix} A + C & -C \\ -C & B + C \end{pmatrix}, \qquad \mathbf B = D \begin{pmatrix} 1 & E \\ E & 1 \end{pmatrix}. $$
Let us first compute the Fourier transform (characteristic function) $$ G(k_1, k_2) = \int_{-\infty}^\infty \int_{-\infty}^\infty P(x_1, x_2) \, \exp\left(\sum_{i=1}^2 i k_i x_i\right) \, dx_1 \, dx_2. $$ Multiplying (1) by $\exp\left(\sum_{i=1}^2 i k_i x_i\right)$ and integrating by parts, we get $$ -\sum_{i,j=1}^2 \left( k_i A_{ij} \frac{ \partial G } { \partial k_j } + k_i \, k_j \, B_{ij} \, G \right) = 0. $$

Since the solution is Gaussian, the characteristic function must take the form $$ \log G(k_1, k_2) = -\frac{1}{2} \sum_{i,j=1}^2 k_i \Xi_{ij} k_j, $$ where $\Xi_{ij} = \operatorname{cov}(x_i, x_j)$ is the symmetric covariance matrix. (That $\log G$ is a quadratic form can also be shown directly by the method of characteristics.) Substituting this form, $$ \sum_{i,j,l=1}^2 k_i \, A_{ij} \, \Xi_{jl} \, k_l = \sum_{i,l=1}^2 k_i \, B_{il} \, k_l. $$ Since this equation holds for all values of $k_1$ and $k_2$, and $B_{ij}$ is symmetric, we must have $$ \sum_{j = 1}^2 \left( A_{ij} \Xi_{jl} + \Xi_{ij} A_{lj} \right) = 2 \, B_{il}, $$ or in matrix form $$ \mathbf{ A \Xi} + \mathbf{\Xi A}^T = 2 \, \mathbf B. \qquad (2) $$

This is a Lyapunov equation, and its solution in integral form is $$ \mathbf \Xi = 2 \int_0^\infty \exp(-\mathbf A \, t) \, \mathbf B \, \exp(-\mathbf A^T \, t) \, dt. \qquad (3) $$ In our case the integral converges because $\mathbf A$ is positive definite ($\operatorname{tr}\mathbf A = A + B + 2 C > 0$ and $\operatorname{det}\mathbf A = AB+BC+CA > 0$). We can verify (3) directly: $$ \begin{aligned} \mathbf {A \Xi + \Xi A}^T &= -2\int_0^\infty \left[ \frac{d}{dt} \left[\exp(-\mathbf A t) \right]\, \mathbf B \, \exp(-\mathbf A^T t) + \exp(-\mathbf A t) \, \mathbf B \, \frac{d}{dt} \exp(-\mathbf A^T t) \right] \, dt \\ &= -2\exp(-\mathbf A t) \, \mathbf B \, \exp(-\mathbf A^T t) \Bigg|_0^\infty = 2 \, \mathbf B. \end{aligned} $$
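As a numerical sanity check, equation (2) can be solved directly with SciPy's Lyapunov solver (which implements the Bartels-Stewart algorithm). This is only a sketch: the numerical values of the constants below are hypothetical, chosen to satisfy the positivity conditions in the question.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical values for the constants (any A, B, C, D > 0 and 0 < E < 1 work).
A, B, C, D, E = 1.0, 2.0, 0.5, 1.0, 0.3

# Drift and diffusion matrices read off from the original PDE.
A_mat = np.array([[A + C, -C],
                  [-C,    B + C]])
B_mat = D * np.array([[1.0, E],
                      [E,   1.0]])

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# which is exactly equation (2) with q = 2 B.
Xi = solve_continuous_lyapunov(A_mat, 2.0 * B_mat)

residual = A_mat @ Xi + Xi @ A_mat.T - 2.0 * B_mat
print(np.max(np.abs(residual)))  # essentially zero
```

The resulting `Xi` is symmetric and positive definite, as a covariance matrix must be.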
In summary, the stationary distribution is a two-dimensional Gaussian, $$ P(\mathbf x) = \frac{1}{2 \pi \sqrt{\operatorname{det} \mathbf \Xi}} \exp\left( - \frac{1}{2} \mathbf x^T \mathbf \Xi^{-1} \mathbf x \right), $$ with the covariance matrix $\mathbf \Xi$ given by (3).
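One can also verify numerically that this Gaussian annihilates the original Fokker-Planck operator. The sketch below (again with hypothetical values for the constants) evaluates the operator from the question at an arbitrary point using central finite differences; the result is zero up to discretization error.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical constants (assumptions, not from the post).
A, B, C, D, E = 1.0, 2.0, 0.5, 1.0, 0.3
A_mat = np.array([[A + C, -C], [-C, B + C]])
B_mat = D * np.array([[1.0, E], [E, 1.0]])
Xi = solve_continuous_lyapunov(A_mat, 2.0 * B_mat)  # covariance from (2)
Xi_inv = np.linalg.inv(Xi)
norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(Xi)))

def p(x, y):
    """The candidate stationary Gaussian density."""
    v = np.array([x, y])
    return norm * np.exp(-0.5 * v @ Xi_inv @ v)

def fp_residual(x, y, h=1e-4):
    """Central finite-difference evaluation of the stationary FP operator."""
    fx = lambda x_, y_: (A * x_ + C * (x_ - y_)) * p(x_, y_)  # x-flux term
    fy = lambda x_, y_: (B * y_ + C * (y_ - x_)) * p(x_, y_)  # y-flux term
    d_fx = (fx(x + h, y) - fx(x - h, y)) / (2 * h)
    d_fy = (fy(x, y + h) - fy(x, y - h)) / (2 * h)
    pxx = (p(x + h, y) - 2 * p(x, y) + p(x - h, y)) / h**2
    pyy = (p(x, y + h) - 2 * p(x, y) + p(x, y - h)) / h**2
    pxy = (p(x + h, y + h) - p(x + h, y - h)
           - p(x - h, y + h) + p(x - h, y - h)) / (4 * h**2)
    return d_fx + d_fy + D * (pxx + pyy + 2 * E * pxy)

print(abs(fp_residual(0.3, -0.2)))  # tiny: zero up to finite-difference error
```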
Edit. There is a simpler route. Since the solution is a Gaussian centered at zero, we only need to determine the covariance matrix, which can be done as follows.
Multiplying (1) by $x_r x_s$ and integrating over $dx_1 \, dx_2$ (by parts: once for the drift term, twice for the diffusion term), we get $$ \int_{-\infty}^\infty\int_{-\infty}^\infty \left[ -\sum_{j=1}^2 \left( x_s A_{rj}x_j + x_r A_{sj} x_j \right) +(B_{rs} + B_{sr}) \right]\, P \, dx_1 \, dx_2 = 0. $$ Since $\mathbf B$ is symmetric, this means $$ \sum_{j=1}^2 \left[ A_{rj} \operatorname{cov}(x_j, x_s) + \operatorname{cov}(x_r, x_j) A_{sj} \right] = 2 B_{rs}, $$ which is just the component form of (2). The solution (3) then follows.
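The covariance identity above can also be checked against a direct simulation of the underlying Ornstein-Uhlenbeck process, $d\mathbf X = -\mathbf A \mathbf X \, dt + \boldsymbol\sigma \, d\mathbf W$ with $\boldsymbol\sigma \boldsymbol\sigma^T = 2\mathbf B$, whose stationary density solves the Fokker-Planck equation. The sketch below uses Euler-Maruyama with hypothetical constants; the empirical covariance of the particle cloud should approach $\mathbf\Xi$ up to Monte Carlo and time-step error.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky

# Hypothetical constants (assumptions, not from the post).
A, B, C, D, E = 1.0, 2.0, 0.5, 1.0, 0.3
A_mat = np.array([[A + C, -C], [-C, B + C]])
B_mat = D * np.array([[1.0, E], [E, 1.0]])
Xi = solve_continuous_lyapunov(A_mat, 2.0 * B_mat)  # exact covariance from (2)

# Euler-Maruyama for dX = -A_mat X dt + sigma dW, sigma sigma^T = 2 B_mat.
rng = np.random.default_rng(0)
sigma = cholesky(2.0 * B_mat, lower=True)
n, dt, steps = 20_000, 0.005, 2_000          # 20k particles, total time t = 10
X = np.zeros((n, 2))                          # start all particles at the origin
for _ in range(steps):
    noise = rng.standard_normal((n, 2)) @ sigma.T
    X += -X @ A_mat.T * dt + np.sqrt(dt) * noise

emp_cov = np.cov(X.T)
print(emp_cov)   # close to Xi
```

The agreement is limited by the $O(\sqrt{1/n})$ sampling error and the $O(dt)$ bias of the Euler scheme.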