I am reading a paper that seems to solve the problem I am facing, but, being unfamiliar with variational calculus, I get lost in the notation.
I am trying to derive the weak form from the strong form in the following problem.
Solving for $u(t,x)$ for $ (t,x) \in [0,T] \times \mathbb{R}^d $. The set $A \subset \mathbb{R}^d$ is open with boundary $\partial A$.
The strong form is as follows: $$ \frac{\partial u}{\partial t}(t,x) - \frac{1}{2} \sum_i \sum_j a_{ij}(x) \frac{\partial^2 u(t,x)}{\partial x_i \partial x_j} - \sum_i b_i(x) \frac{\partial u(t,x)}{\partial x_i} = 0, \quad (t,x) \in [0,T] \times A, $$ $$ u(0,x) = 1, \quad x \in A, $$ $$ u(t,x) = 0, \quad x \in \partial A,\; t > 0. $$
The paper indicates that the weak form is as follows: $$ \frac{d}{dt}\left(u(t,\cdot),v\right) + g(u(t,\cdot),v) = 0, \quad \forall v \in H_0^1(A), $$ $$ u(0,\cdot) = 1, $$ where $ g(u(t,\cdot),v) = \frac{1}{2} (a \nabla u(t,\cdot), \nabla v) - \left( (b-\text{div } a)\nabla u,v\right) $.
I assume the pairing is $ (f,g) = \int_A f(x)\, g(x)\, dx $, with the product read as a dot product when $f$ and $g$ are vector-valued.
This seems to be a classical result; the issue is that I am familiar with neither the notation nor tensor/variational calculus. I assume that it involves a multivariate integration by parts, which is foreign to me.
- How does one derive $g(u(t,\cdot),v)$?
- What is the divergence of the matrix-valued $a$?
I get the part with $b$: $\int_A \left( \sum_i b_i(x) \frac{\partial u(t,x)}{\partial x_i} \right) v(x) dx = (b\nabla u,v)$.
The problem is the part with $a$.
Thanks a lot for any help!
Source of the problem:
P. Patie, C. Winter, (2008) "First exit time probability for multidimensional diffusions: A PDE-based approach"
I hope this is correct; there is still some uncertainty in a few parts.
Focusing on: $$ \sum_i \sum_j \int a_{ij}(x) \partial_{ij} u(x) v(x) dx \tag{1} \label{1}$$
For a given dimension (say $x_j$) we integrate by parts (IBP).
The boundary term $[a_{ij}(x) \partial_{i} u(x) v(x)]_{\partial A}$ vanishes, since $v(x) = 0$ on the boundary: $$ \int a_{ij}(x) \partial_{ij} u(x) v(x) dx = [a_{ij}(x) \partial_{i} u(x) v(x)]_{\partial A} - \int ( \partial_j a_{ij}(x)v(x)+ a_{ij}(x)\partial_j v(x))\partial_i u(x)dx \\ = - \int \partial_j a_{ij}(x)v(x) \partial_i u(x) dx - \int a_{ij}(x)\partial_j v(x)\partial_i u(x)dx $$
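As a sanity check (my own, not from the paper), the one-dimensional version of this IBP identity can be verified symbolically; all function choices below are arbitrary examples on $A = (0,1)$:

```python
# Verify: int a u'' v dx = - int a' u' v dx - int a u' v' dx
# when the test function v vanishes at the boundary of (0, 1).
import sympy as sp

x = sp.symbols('x')
a = 1 + x**2            # smooth coefficient (arbitrary choice)
u = sp.sin(sp.pi * x)   # arbitrary smooth function
v = x * (1 - x)         # test function with v(0) = v(1) = 0

lhs = sp.integrate(a * sp.diff(u, x, 2) * v, (x, 0, 1))
rhs = (- sp.integrate(sp.diff(a, x) * sp.diff(u, x) * v, (x, 0, 1))
       - sp.integrate(a * sp.diff(u, x) * sp.diff(v, x), (x, 0, 1)))

print(sp.simplify(lhs - rhs))  # 0: the boundary term really drops out
```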
Now we sum the two terms over $i$ and $j$.
For the first term we have: $$ \begin{align} \sum_i \sum_j \int \partial_j a_{ij}(x)v(x) \partial_i u(x) dx & = \int \left( \sum_i \left( \sum_j \partial_j a_{ij}(x)\right) \partial_i u(x)\right) v(x) dx \\ & = \int \left( \sum_i (\text{div } a)_i \partial_i u(x)\right) v(x) dx \\ & = (\text{div } a \nabla u,v) \end{align} $$ where $ (\text{div } a)_i = \sum_j \partial_j a_{ij}(x) $, i.e. the $i$-th component of $\text{div } a$ is the (ordinary) divergence of the $i$-th row of $a$.
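To make the "divergence of a matrix" concrete, here is a small worked example (my own illustration, with an arbitrary $2\times 2$ matrix field): each component of $\text{div } a$ is the divergence of the corresponding row.

```python
# (div a)_i = sum_j d a_ij / d x_j : row-wise divergence of a matrix field.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
a = sp.Matrix([[x1 * x2,    x2**2  ],
               [sp.sin(x1), x1 + x2]])

div_a = sp.Matrix([sp.diff(a[i, 0], x1) + sp.diff(a[i, 1], x2)
                   for i in range(2)])
print(div_a.T)  # Matrix([[3*x2, cos(x1) + 1]])
```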
For the second term we have: $$ \begin{align} \sum_i \sum_j \int a_{ij}(x)\partial_j v(x)\partial_i u(x)dx & = \int \sum_j \left( \sum_i a_{ij}(x) \partial_i u(x) \right) \partial_j v(x) dx \\ & = ( a\nabla u , \nabla v) \end{align} $$
Plugging this in $(\ref{1})$, we get: $$ (1) = - (\text{div } a \nabla u,v) - ( a\nabla u , \nabla v). \tag{2} $$
Coming back to the original term, we wanted: $$ - \int \left( \frac{1}{2} \sum_i \sum_j a_{ij}(x) \partial_{ij} u(x) + \sum_i b_i(x) \partial_i u(x) \right) v(x) dx. \tag{3} $$ Using the fact that $ \int \sum_i b_i(x) \partial_i u(x) v(x) dx = (b\nabla u,v),\; $ and (2), we get: $$ \begin{align} (3) & = \frac{1}{2} (\text{div } a \nabla u,v) + \frac{1}{2} ( a\nabla u , \nabla v) - (b\nabla u,v) \\ & = \frac{1}{2} ( a\nabla u , \nabla v) - ( (b - \frac{1}{2} \text{div } a ) \nabla u, v ) \end{align} $$
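As a quick symbolic test of my derivation of (3), including the extra $\frac{1}{2}$ in front of $\text{div } a$ (my own script; the data below are arbitrary polynomial choices on $A = (0,1)^2$, with a test function vanishing on $\partial A$):

```python
# Check: -(1/2 sum a_ij d_ij u + sum b_i d_i u, v)
#        = 1/2 (a grad u, grad v) - ((b - 1/2 div a) grad u, v)
# on the unit square, with v = 0 on the boundary.
import sympy as sp

x, y = sp.symbols('x y')
X = (x, y)
half = sp.Rational(1, 2)

a = [[2 + x*y, x], [y, 3]]     # matrix coefficient a_ij (arbitrary)
b = [x, y**2]                  # vector coefficient b_i (arbitrary)
u = x**2 * y + y**2            # "solution" (need not vanish on dA)
v = x * (1 - x) * y * (1 - y)  # test function, v = 0 on dA

def II(expr):                  # integral over the unit square
    return sp.integrate(expr, (x, 0, 1), (y, 0, 1))

div_a = [sum(sp.diff(a[i][j], X[j]) for j in range(2)) for i in range(2)]

lhs = -II((half * sum(a[i][j] * sp.diff(u, X[i], X[j])
                      for i in range(2) for j in range(2))
           + sum(b[i] * sp.diff(u, X[i]) for i in range(2))) * v)

rhs = (half * II(sum(a[i][j] * sp.diff(u, X[i]) * sp.diff(v, X[j])
                     for i in range(2) for j in range(2)))
       - II(sum((b[i] - half * div_a[i]) * sp.diff(u, X[i])
                for i in range(2)) * v))

print(sp.simplify(lhs - rhs))  # 0
```

This only confirms that the computation above is internally consistent; it does not by itself settle which convention the paper uses.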
This is different from what the paper gives; I have an additional "$\frac{1}{2}$" (who made the error?): $$ \begin{align} g & = \frac{1}{2} ( a\nabla u , \nabla v) - \left( \left(b - \text{div } a \right) \nabla u, v \right) \end{align} $$
So I see that the boundary-term argument works if we have a nice bounded set such as $A=\{l_i<x_i<u_i\;;\ i=1,\dots,d\}$.
But I am not sure what happens if it is unbounded on one side: $A=\{x_i<u_i\;;\ i=1,\dots,d\}$.
Or worse, if the boundary is more awkward: $A=\{x_i-x_j<u_i\;;\ i,j=1,\dots,d\}$. Feel free to comment; I will edit the answer accordingly.