Solve an integral equation using Adomian decomposition


I am trying to solve a few integral equation problems before my exams. This particular one, however, doesn't seem to converge. Or am I going about it the wrong way?

The equation:

$u(x) = 1 - x^2 - \int_0^x(x-t)u(t)dt$

First I set $u_0(x) = 1 - x^2$, then $u_1(x) = -\int_0^x(x-t)u_0(t)\,dt = -\int_0^x(x-t)(1-t^2)\,dt$. So I used the recursive formula:

$u_{n+1}(x) = -\int_0^x(x-t)u_n(t)\,dt$, and I got the following results:

$u_0(x) = 1-x^2,\quad u_1(x) = {x^4\over12} - {x^2\over2},\quad u_2(x) = -{x^6\over360} +{x^4\over24},\quad u_3(x) = {x^8\over20160} - {x^6\over720},\ \dots $
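As a sanity check, these terms can be reproduced mechanically. A short Python sketch (exact arithmetic via `fractions`; the integration reduces to the rule $\int_0^x(x-t)t^k\,dt = x^{k+2}/((k+1)(k+2))$; the helper name is mine):

```python
from fractions import Fraction

def next_term(coeffs):
    # Apply u_{n+1}(x) = -∫_0^x (x-t) u_n(t) dt to a polynomial,
    # using ∫_0^x (x-t) t^k dt = x^{k+2} / ((k+1)(k+2)).
    # coeffs maps exponent k -> coefficient of x^k.
    return {k + 2: -c / ((k + 1) * (k + 2)) for k, c in coeffs.items()}

u = {0: Fraction(1), 2: Fraction(-1)}      # u_0(x) = 1 - x^2
for n in range(3):
    u = next_term(u)
    print(f"u_{n + 1}:", dict(sorted(u.items())))
```

This reproduces $u_1$, $u_2$, $u_3$ exactly as listed above.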

Now I am stuck here. I don't know what to do, since the limit of $\sum_{n=0}^\infty u_n$ does not seem to have a "reasonable" value. Any suggestions and help will be highly appreciated.

On BEST ANSWER

The successive-approximation method you tried does work, but it takes a slightly different form. The iteration is $u_0(x)=1-x^2$ and $$ u_{n+1}(x)=1-x^2-\int_0^x (x-t)u_{n}(t)\,dt,\qquad n=0,1,2,\cdots.$$ Then $u_n$ itself (not the sum $\sum u_n$) converges to the true solution $u$ as $n$ goes to infinity.
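This iteration can be sketched numerically (exact polynomial arithmetic via `fractions`; the helper names are mine), showing the iterates $u_n(1)$ settling down to the true value:

```python
from fractions import Fraction
import math

def iterate(coeffs):
    # u_{n+1}(x) = 1 - x^2 - ∫_0^x (x - t) u_n(t) dt,
    # using ∫_0^x (x - t) t^k dt = x^{k+2} / ((k+1)(k+2)).
    new = {0: Fraction(1), 2: Fraction(-1)}
    for k, c in coeffs.items():
        new[k + 2] = new.get(k + 2, Fraction(0)) - c / ((k + 1) * (k + 2))
    return new

def evaluate(coeffs, x):
    return sum(float(c) * x ** k for k, c in coeffs.items())

u = {0: Fraction(1), 2: Fraction(-1)}      # u_0(x) = 1 - x^2
for n in range(8):
    u = iterate(u)
    print(f"u_{n + 1}(1) = {evaluate(u, 1.0):.12f}")
print("true solution at x=1:", 3 * math.cos(1.0) - 2)
```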

There are several ways to solve this integral equation. The simplest is to convert it into a differential equation by differentiating both sides. Differentiating once gives $$ u'(x)=-2x-\int_0^x u(t)\,dt.$$ Differentiating again, $$ u''(x)=-2-u(x).$$ The general solution is $u(x)=-2+A\cos x+ B\sin x$ with constants $A$ and $B$. Substituting this back into the integral equation, we get $$ -2+A\cos x+ B\sin x = 1-A-Bx+A\cos x+B\sin x,$$ which implies $A=3$ and $B=0$, i.e. $u(x)=3\cos x-2$.
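One can double-check numerically that $u(x)=3\cos x-2$ satisfies the original equation (a sketch using a composite trapezoid rule; the step count is an arbitrary choice):

```python
import math

def u(x):
    return 3 * math.cos(x) - 2

def rhs(x, n=10000):
    # 1 - x^2 - ∫_0^x (x - t) u(t) dt, composite trapezoid rule
    h = x / n
    f = [(x - i * h) * u(i * h) for i in range(n + 1)]
    integral = h * (sum(f) - 0.5 * (f[0] + f[-1]))
    return 1 - x * x - integral

for x in (0.5, 1.0, 2.0):
    print(x, u(x), rhs(x))   # the two columns should agree closely
```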

The second way is to use the Laplace transform $U(s)=\int_0^\infty u(x)e^{-sx}\,dx$. Since the integral term is the convolution of $x$ (whose transform is $1/s^2$) with $u$, the original integral equation is transformed into the algebraic equation $$ U(s) = \frac{1}{s} - \frac{2}{s^3} - \frac{U(s)}{s^2},$$ or $U(s)=\frac{s^2-2}{s(s^2+1)}$. Expanding in partial fractions, $U(s)=\frac{3s}{s^2+1}-\frac{2}{s}$, whose inverse Laplace transform, read off from standard tables, is $3\cos x-2$.
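Both the partial-fraction step and the transform itself can be spot-checked numerically (a sketch; the improper integral is truncated at $T=60$, which is ample for $s \ge 0.5$; the function names are mine):

```python
import math

def U_closed(s):
    # U(s) = (s^2 - 2) / (s (s^2 + 1))
    return (s * s - 2) / (s * (s * s + 1))

def U_partial_fractions(s):
    # 3s/(s^2+1) - 2/s
    return 3 * s / (s * s + 1) - 2 / s

def U_numeric(s, T=60.0, n=200000):
    # ∫_0^T (3 cos x - 2) e^{-s x} dx, composite trapezoid rule;
    # the tail beyond T is negligible for s >= 0.5
    h = T / n
    f = [(3 * math.cos(i * h) - 2) * math.exp(-s * i * h) for i in range(n + 1)]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

for s in (0.5, 1.0, 2.0):
    print(s, U_closed(s), U_partial_fractions(s), U_numeric(s))
```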


Adomian decomposition

Assume the solution takes the form $u=u_0+u_1+\cdots$, with $u_0(x)=1-x^2$ and $u_{n+1}(x)=-\int_0^x (x-t)u_n(t)\,dt$. It is easy to see that $u_n$ takes the form $$ u_n(x) = a_nx^{2n}+b_nx^{2n+2}$$ with $a_0=1$ and $b_0=-1$.

Using the relation $u_{n+1}(x)=-\int_0^x (x-t)u_n(t)\,dt$, we get $$ a_{n+1}x^{2n+2}+b_{n+1}x^{2n+4} = -\int_0^x (x-t)\left( a_nt^{2n}+b_nt^{2n+2}\right)dt =-\frac{1}{2}\frac{1}{(2n+1)(n+1)}a_nx^{2n+2} -\frac{1}{2}\frac{1}{(2n+3)(n+2)}b_nx^{2n+4}. $$ This gives recursive equations for the coefficients: $$ a_{n+1} = -\frac{1}{2}\frac{1}{(2n+1)(n+1)}a_n,\qquad b_{n+1} = -\frac{1}{2}\frac{1}{(2n+3)(n+2)}b_n. $$ The general form is then $$ a_n = \left(-\frac{1}{2}\right)^n \frac{1}{n!\,(2n-1)!!},\qquad b_n = -\left(-\frac{1}{2}\right)^n \frac{1}{(n+1)!\,(2n+1)!!}, $$ where $n!=1\cdot 2 \cdots n$ is the factorial and $(2n+1)!!=1\cdot 3 \cdots (2n-1)\cdot (2n+1)$ is the double factorial. Because the coefficients $a_n$ and $b_n$ decay factorially, the sum $\sum u_n$ converges for every $x$; its value is the solution $3\cos x-2$ found above.
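The recursion, the closed forms, and the convergence claim can all be checked with a short script (a sketch; `dfact` is a helper of mine for the double factorial, with the convention $(-1)!!=1$):

```python
from fractions import Fraction
import math

def dfact(m):
    # double factorial: m (m-2) (m-4) ..., with (-1)!! = 1
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

# coefficients via the recursion
a, b = [Fraction(1)], [Fraction(-1)]
for n in range(10):
    a.append(-a[n] / (2 * (2 * n + 1) * (n + 1)))
    b.append(-b[n] / (2 * (2 * n + 3) * (n + 2)))

# compare against the closed forms (-1/2)^n / (n! (2n-1)!!) etc.
for n in range(11):
    assert a[n] == Fraction((-1) ** n, 2 ** n * math.factorial(n) * dfact(2 * n - 1))
    assert b[n] == -Fraction((-1) ** n, 2 ** n * math.factorial(n + 1) * dfact(2 * n + 1))

# partial sums of Σ u_n converge to the solution 3 cos x - 2
x = 1.5
s = sum(float(a[n]) * x ** (2 * n) + float(b[n]) * x ** (2 * n + 2) for n in range(11))
print(s, 3 * math.cos(x) - 2)
```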