The logic behind the derivation of the variational formulation


Consider the simplest equation: \begin{align} -\Delta u &= f\ \ \text{in} \ \ \Omega\\ u&=0 \ \ \text{on} \ \ \partial\Omega \end{align}

I think the natural way to derive the weak solution is to treat the equation in the distributional sense: \begin{equation} -\Delta u = f \ \ \text{in}\ \left(L^2(\Omega)\right)' \end{equation}

Therefore we know: \begin{equation} \int_{\Omega} -\Delta u v = \int_{\Omega} f v, \ \ \forall v\in L^2(\Omega) \end{equation}

In all books, I see that the next step is to integrate by parts. This is exactly what puzzles me, because I cannot figure out the motivation. If we choose to integrate by parts, we get

$$ \int_{\Omega} \nabla u \cdot \nabla v = \langle f,v\rangle. $$

Then we have to restrict $u$ and $v$ to $H^1(\Omega)$ (all first-order weak derivatives in $L^2$). Besides, this form says nothing about the boundary, which means we need a trace operator to impose the boundary condition.
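For smooth enough $u$ and $v$, the integration by parts is Green's first identity, which makes the boundary term explicit:

$$ \int_{\Omega} (-\Delta u)\, v \, dx = \int_{\Omega} \nabla u \cdot \nabla v \, dx - \int_{\partial\Omega} \frac{\partial u}{\partial n}\, v \, dS. $$

When $v$ vanishes on $\partial\Omega$, the boundary integral drops out and only $\int_{\Omega} \nabla u \cdot \nabla v$ remains.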

Q1: Why do we choose to integrate by parts?

After the integration by parts, I can understand the weak formulation: find $u \in H^1(\Omega)$ with trace $\gamma u = 0$ (or equivalently $u\in H^1_0(\Omega)$), such that $$ a(u,v) = \langle f,v\rangle \quad \forall v\in H^1(\Omega), $$ where $a(u,v) = \int_{\Omega} \nabla u \cdot \nabla v$.

Then... I am puzzled by the form in the book: $$ a(u,v) = \langle f,v\rangle \quad \forall v\in H^1_0(\Omega). $$

Q2: Why is the test space $H^1_0$ instead of $H^1$?

Maybe it is because the two test spaces $H^1_0$ and $H^1$ yield the same weak solution?

I hope someone can shed light on this. Thanks a lot!


There are 2 best solutions below

  1. We look for weak solutions in $H^1(\Omega)$ only. If you keep the form before integration by parts, you have to justify that the term $\int_\Omega -\Delta u\, v$ is well-defined, so you are actually requiring $u$ to be in $H^2(\Omega)$, which is a smaller space.

  2. The case $a: H \times H \to \mathbb{R}$, with the same solution and test space, is the setting of the famous Lax-Milgram lemma, so you can immediately conclude whether your PDE is well-posed in the weak sense. There are also other theorems, such as the Lions-Lax-Milgram theorem, which generalize this and allow the solution space to differ from the test space. But such theorems are harder to apply: instead of the coercivity assumed in Lax-Milgram, you have to verify inf-sup conditions.
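The symmetry and coercivity that Lax-Milgram asks of $a(u,v)=\int_\Omega \nabla u\cdot\nabla v$ show up concretely in a discretization: the stiffness matrix is symmetric positive definite. A minimal sketch, assuming a uniform 1-D grid on $(0,1)$ with piecewise-linear hat functions (the matrix below is the standard stiffness matrix for that basis):

```python
import numpy as np

# For a(u,v) = ∫ u'v' dx on H^1_0(0,1) with piecewise-linear hats,
# the stiffness matrix is tridiag(-1, 2, -1)/h.
n = 8                      # number of interior nodes (illustrative)
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Discrete counterparts of the Lax-Milgram hypotheses:
assert np.allclose(A, A.T)                # symmetry: a(u,v) = a(v,u)
assert np.all(np.linalg.eigvalsh(A) > 0)  # coercivity: a(v,v) > 0 for v != 0
```

Positive eigenvalues of the stiffness matrix are the matrix-level reflection of coercivity, $a(v,v) \geq \alpha \|v\|^2$.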

  1. From my understanding of finite element methods: if we use the strong form, we need the function together with its first and second derivatives, whereas the weak form requires only the first derivative. This is advantageous from a computational perspective, since we can work with low-order approximations such as piecewise-linear functions.
  2. A real physical problem comes with boundary conditions that are part of the problem statement; there is no need to solve for them. This means not all functions in $H^1$ are admissible: only those satisfying the prescribed boundary values need to be tested, and testing the others is unnecessary. So $H^1_0$ is the subset of $H^1$ that discards functions failing the (homogeneous) boundary values. For example, if $u(0)=0$, then a candidate of the form $u(x) = a + bx$ is admissible only when $a = 0$. So it is just a mathematical restriction that encodes more information about the problem at hand and does not affect the solution in any way.
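Both points above can be seen in a small numerical sketch. Assuming $\Omega=(0,1)$, $f(x)=\pi^2\sin(\pi x)$ (so the exact solution is $\sin(\pi x)$), and piecewise-linear hat functions at the interior nodes only, i.e. a discrete $H^1_0$ basis, where all names and the quadrature choice are illustrative:

```python
import numpy as np

# 1-D model problem: -u'' = f on (0,1), u(0) = u(1) = 0.
# Weak form: find u with a(u, v) = ∫ u'v' dx = ∫ f v dx for all v in H^1_0.
n = 64                          # number of interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)  # interior nodes only: the H^1_0 basis
                                # excludes the boundary hat functions.

# Stiffness matrix a(phi_i, phi_j) for piecewise-linear hats.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Load vector ∫ f phi_i dx, lumped (nodal) quadrature.
f = np.pi**2 * np.sin(np.pi * x)
b = h * f

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)  # small; the scheme converges as the mesh is refined
```

Note that only first derivatives of the hat functions ever enter the assembly, and the boundary condition is enforced simply by leaving the boundary hats out of the basis.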