Galerkin method for a nonlinear ODE


I'm trying to solve the following differential equation: $$\frac{d^2u}{dx^2}=\frac{du}{dx}u+u^2+x$$ $$x \in \Omega=[0,1]$$ $$\text{BCs: } u|_{x=0}=1;\quad \frac{du}{dx}\Big|_{x=1}=1$$

You can see that the right-hand side contains $u^2$, so when I substitute the approximation into the weighted residual form, I get a nonlinear term. For example, with the approximation:

$$ u=1+\sum_{i=1}^n\alpha_i x^i$$ there will be a nonlinear integral in the weighted residuals: $$\int \Big(1+\sum_{i=1}^n\alpha_i x^i\Big)^2dx$$ That is why the system is nonlinear. What am I missing?

I tried to switch from $u$ to $u^2$ in the equation, since $u\frac{du}{dx}=\frac{1}{2}\frac{d(u^2)}{dx}$, but I can't do the same for $\frac{d^2u}{dx^2}$.

Edit, following the answer:

I won't write the boundary integrals, since they are not the real issue in the question; I'll write only the integral over the main domain. So I have $$\int_0^1w\left(\frac{d^2u}{dx^2}-\frac{du}{dx}u-u^2-x\right)dx=0,$$ where $w$ is the weight function. Substitute the approximation of $u$ and take $n = 2$: $$\int_0^1w\left(2\alpha_2-(\alpha_1 + 2\alpha_2 x)(1+\alpha_1 x +\alpha_2 x^2)-(1+\alpha_1 x +\alpha_2 x^2)^2-x\right)dx=0$$ Taking into account the Bubnov–Galerkin choice of weight function, $$ w=\beta_1x+\beta_2x^2,$$ $$\int_0^1\beta_1x\left(2\alpha_2-(\alpha_1 + 2\alpha_2 x)(1+\alpha_1 x +\alpha_2 x^2)-(1+\alpha_1 x +\alpha_2 x^2)^2-x\right)dx +\int_0^1\beta_2x^2\left(2\alpha_2-(\alpha_1 + 2\alpha_2 x)(1+\alpha_1 x +\alpha_2 x^2)-(1+\alpha_1 x +\alpha_2 x^2)^2-x\right)dx=0$$ Since the $\beta_i$ are arbitrary, we get the system

$$\begin{cases} \int_0^1x(2\alpha_2-(\alpha_1 + 2\alpha_2 x)(1+\alpha_1 x +\alpha_2 x^2)-(1+\alpha_1 x +\alpha_2 x^2)^2-x)dx =0\\ \int_0^1x^2(2\alpha_2-(\alpha_1 + 2\alpha_2 x)(1+\alpha_1 x +\alpha_2 x^2)-(1+\alpha_1 x +\alpha_2 x^2)^2-x)dx=0 \end{cases} $$

Here the only unknowns are $\alpha_i,\ i=1,2$. But if we extend the polynomial to $2n=4$, we will have new unknowns $\alpha_i,\ i=1,\dots,4$, with only 2 equations.
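To see the nonlinearity explicitly, here is a minimal Python sketch (function names are mine) that evaluates these two weighted-residual integrals with composite Simpson quadrature. At $\alpha_1=\alpha_2=0$ the integrals reduce to $-5/6$ and $-7/12$, and superposition in the $\alpha_i$ visibly fails, so the system is indeed nonlinear:

```python
# Weighted residuals of u'' - u'u - u^2 - x for the trial u = 1 + a1*x + a2*x^2,
# tested against w = x and w = x^2 (Bubnov-Galerkin).

def residual(x, a1, a2):
    """Residual of the ODE at x for the quadratic trial function."""
    u = 1 + a1 * x + a2 * x**2
    du = a1 + 2 * a2 * x
    ddu = 2 * a2
    return ddu - du * u - u**2 - x

def simpson(f, a, b, m=200):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def galerkin_equations(a1, a2):
    """Return (G1, G2): the residual weighted by x and by x^2 over [0, 1]."""
    g1 = simpson(lambda x: x * residual(x, a1, a2), 0.0, 1.0)
    g2 = simpson(lambda x: x**2 * residual(x, a1, a2), 0.0, 1.0)
    return g1, g2

# At a1 = a2 = 0 the residual is -1 - x, so the integrals are -5/6 and -7/12.
g0 = galerkin_equations(0.0, 0.0)

# The map (a1, a2) -> (G1, G2) is NOT linear: superposition fails.
lhs = galerkin_equations(1.0, 0.0)[0] + galerkin_equations(-1.0, 0.0)[0]
rhs = 2 * galerkin_equations(0.0, 0.0)[0]
```

Solving the resulting system then requires a nonlinear root finder (Newton's method or similar) rather than a linear solve.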

Edit 2:

Actually I need a two-term approximation, so I don't think that switching to $2n$ terms and then solving $2n$ equations is the key point. I suppose we should either simplify the ODE or choose interpolation functions other than $x^i$.

There are two answers below.

Answer 1:

You missed nothing; the product is nonlinear. However, you can extend your polynomial expansion:

$$\int (1+\sum_{i=1}^n\alpha_i x^i)^2dx\equiv\int (1 + \sum_{i=1}^{2n}\tilde{\alpha_i} x^i) dx.$$

The product $u\cdot u$ is still a polynomial, but with a higher polynomial degree of $2n$.

The Galerkin solution can be derived by integrating $$\int \Big(1 + \sum_{i=1}^{2n}\tilde{\alpha}_i x^i\Big) dx$$ and considering only the first $n$ coefficients $\tilde{\alpha}_i$. Simply put: coefficients with index greater than $n$ are neglected.
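As a toy illustration of that bookkeeping (my own Python sketch, not part of the answer): squaring a degree-$n$ coefficient list yields degree $2n$, and the truncation simply drops the upper coefficients.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists [c0, c1, ...] (convolution)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# u = 1 + 2x - x^2 (degree n = 2); u^2 has degree 2n = 4
u = [1.0, 2.0, -1.0]
u2 = poly_mul(u, u)        # (1 + 2x - x^2)^2 = 1 + 4x + 2x^2 - 4x^3 + x^4
truncated = u2[: len(u)]   # keep coefficients up to degree n only
```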

The truncation of the additional $n$ modes can be interpreted as a projection from a $2n$-dimensional space onto an $n$-dimensional space, in which the residual is orthogonal to the chosen subspace.

This is the key property of the Galerkin approach.

Regards

Answer 2:

Considering instead the ODE

$$ u''+u'u+u^2-x=0\ \ \ \ \ \ \ (1) $$

with better behavior regarding the polynomial approximation, the Galerkin procedure can be handled as follows.

1 - Choosing a convenient orthogonal basis on the interval, for instance the shifted Chebyshev polynomials $\theta_k$ on $[0,1]$, we form an approximating sequence

$$ u_n(x) = \sum_{k=0}^n a_k \theta_k(x)\ \ \ \ \ \ \ (2) $$
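A quick sketch of step 1 in Python (the helper name is mine): the shifted polynomials $\theta_k(y)=T_k(2y-1)$ follow from the standard Chebyshev recurrence $T_k(t)=2t\,T_{k-1}(t)-T_{k-2}(t)$.

```python
def theta(k, y):
    """Shifted Chebyshev polynomial T_k(2y - 1) on [0, 1], via the recurrence."""
    t = 2.0 * y - 1.0
    prev, cur = 1.0, t          # T_0, T_1
    if k == 0:
        return prev
    for _ in range(k - 1):
        prev, cur = cur, 2.0 * t * cur - prev
    return cur

# theta(2, y) = T_2(2y - 1) = 8y^2 - 8y + 1
vals = [theta(2, y) for y in (0.0, 0.5, 1.0)]  # endpoints and midpoint
```

Like the unshifted Chebyshev polynomials on $[-1,1]$, each $\theta_k$ oscillates between $\pm 1$ on $[0,1]$, which makes the basis well conditioned for this kind of fit.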

2 - Calculate the residual $r_n(x,a_k)$ from $(1)$ after substitution of $(2)$

3 - Calculate the relationships

$$ g_i(a_k) = \int_0^1 r_n(x, a_k)\theta_i(x) dx, \ \ i = 0,\cdots, n $$

4 - Calculate the boundary conditions

$$ \cases{b_1(a_k) = u_n(0)-1\\ b_2(a_k) = u'_n(0)-1} $$

5 - Solve the minimization problem

$$ \min_{a_k}\sum_{i=0}^n g_i^2(a_k)\ \ \ \text{s. t.}\ \ \{b_1(a_k) = 0, b_2(a_k) = 0\} $$

A Mathematica script illustrating this follows:

```mathematica
(* Shifted Chebyshev basis via the recurrence T_k = 2 x T_{k-1} - T_{k-2} *)
t[x, 0] = 1;
t[x, 1] = x;
t[x_, k_] := t[x, k] = 2 x t[x, k - 1] - t[x, k - 2]
n = 4;
theta = Table[t[x, k], {k, 0, n}];
thetas = theta /. {x -> 2 y - 1};  (* shift from [-1, 1] to [0, 1] *)
u[x_] := Sum[Subscript[a, k]  thetas[[k]], {k, 1, n}]
A = Table[Subscript[a, k], {k, 1, n}]
(* Residual of the ODE (1) *)
d[u_, x_] := D[u, x, x] + D[u, x] u + u^2 - x
(* Galerkin relationships g_i and boundary conditions *)
equs = Table[Integrate[d[u[y], y] thetas[[k]], {y, 0, 1}], {k, 1, n}];
bc1 = (u[y] /. {y -> 0}) - 1
bc2 = (D[u[y], y] /. {y -> 0}) - 1
(* Constrained minimization of the squared Galerkin relationships *)
sol = NMinimize[{equs.equs, bc1 == bc2 == 0}, A]
u0 = u[x] /. sol[[2]];
(* Reference: direct numerical solution of (1) *)
solux = NDSolve[{d[v[x], x] == 0, v[0] == v'[0] == 1}, v, {x, 0, 1}][[1]];
plot1 = Plot[Evaluate[v[x] /. solux], {x, 0, 1}, PlotStyle -> Red];
plot2 = Plot[u0, {y, 0, 1}];
Show[plot1, plot2]
```

Attached is a plot showing the solution of $(1)$ in red and, in blue, the approximation for $n = 4$.
