Solving a PDE on the unit square


I'm having trouble solving for $u(x,y)$ in the following pde on the unit square: $$ - \nabla^2u = x(1-x) ,\;\; \text{for } 0 < x < 1, 0<y<1, \\ u(x,0) = 1 \text{ and } u(x, 1) = 2,\;\; \text{for } 0 < x < 1, \\ \frac{\partial u}{\partial x} (0,y) = 0 = \frac{\partial u}{\partial x} (1,y) ,\;\; \text{for } 0<y<1. $$

In my textbook's solutions, they did the following:

"Make a substitution with $w = 1 + y$. Then let $u = u* + w$. So we can now solve the following boundary value problem: $$ - \nabla^2u* = x(1-x) ,\;\; \text{in } \Omega, \\ u* = 0,\;\; \text{on } \Gamma_{D}, \\ \frac{\partial u}{\partial x} = 0 ,\;\; \text{on } \Gamma_{N}. $$ " and then they proceeded to solve for u* after solving the corresponding eigenproblem and using fourier series to find the coefficient in the series solution.

I would be grateful if someone could explain why they made this substitution (and what led to the choice $w = 1 + y$?) and how it resulted in that boundary value problem. Thanks in advance.



BEST ANSWER

The substitution is just exploiting a convenience. I find it actually quite surprising how many seemingly difficult PDEs can be reduced to something much more manageable (heat, wave, potential equations) with a simple transformation of coordinates, translation of solution, etc.

The only point of this substitution is to make the data on the boundary zero. So why is it important that the boundary data be zero? The eigenvalue method relies on the fact that if $ab = 0$ for two numbers $a,b$, then $a = 0$ or $b = 0$ (or both, of course). The case $a = b = 0$ is not interesting for separation of variables, because we want to avoid trivial solutions where possible. With homogeneous boundary data, this dichotomy is how you transfer the boundary conditions of the PDE onto each of the ODEs after separating the equation, which leads to a unique solution of the original problem.
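As an illustration of where that transfer of boundary conditions leads (my own sketch of the standard separation-of-variables step, not quoted from the textbook): the homogeneous Neumann data in $x$ selects cosines and the homogeneous Dirichlet data in $y$ selects sines, giving separated eigenfunctions $\varphi_{km}(x,y) = \cos(k\pi x)\sin(m\pi y)$. A quick symbolic check with sympy:

```python
import sympy as sp

x, y = sp.symbols("x y")
k, m = 2, 3  # sample mode numbers; any positive integers work

# Candidate eigenfunction: cosines match the Neumann data in x,
# sines match the homogeneous Dirichlet data in y.
phi = sp.cos(k * sp.pi * x) * sp.sin(m * sp.pi * y)
lam = (k**2 + m**2) * sp.pi**2

# Eigenfunction property: -Laplacian(phi) = lam * phi
assert sp.simplify(-sp.diff(phi, x, 2) - sp.diff(phi, y, 2) - lam * phi) == 0

# Homogeneous Dirichlet data on the y-boundaries (Gamma_D)
assert phi.subs(y, 0) == 0 and phi.subs(y, 1) == 0

# Homogeneous Neumann data on the x-boundaries (Gamma_N)
dphidx = sp.diff(phi, x)
assert dphidx.subs(x, 0) == 0 and dphidx.subs(x, 1) == 0
```

Without the substitution, $u(x,0) = 1$ would force $X(x)Y(0) = 1$, which no single separated product can satisfy; with zero data, each factor just vanishes (or has vanishing derivative) on its own boundary.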

Finally, this choice of substitution is nice because the polynomial $1 + y$ is harmonic, so it doesn't change the RHS of the PDE itself.

That's great in words, but it's better to see it mathematically. Inserting the substitution directly, we have $$ -\Delta u = -\Delta (1 + y + u^*) = -\Delta u^*. $$ And on the boundary \begin{cases} u(x,0) = 1 + \underbrace{u^*(x, 0)}_{0} = 1 \\ u(x,1) = 1 + 1 + \underbrace{u^*(x,1)}_{0} = 2 \\ u_x = u^*_x. \end{cases} So the new equation in $u^*$ is \begin{cases} -\Delta u^* = x(1-x) \ \ \text{in} \ \Omega \\ u^*(x,0) = u^*(x,1) = 0 \ \ \text{for} \ x \in (0,1) \\ u^*_x(0,y) = u^*_x(1,y) = 0 \ \ \text{for} \ y \in (0,1) \end{cases} which is easily solvable.
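To see the substitution at work numerically rather than symbolically, here is a minimal finite-difference sketch (my own illustration, with an assumed 21-point grid and plain Jacobi iteration, not the textbook's Fourier-series solution). It solves the homogenized problem for $u^*$, adds back $w = 1 + y$, and confirms that the reconstructed $u$ takes the original boundary values:

```python
import numpy as np

n = 21                       # grid points per side (assumed resolution)
h = 1.0 / (n - 1)
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")   # first index = x, second = y
f = X * (1.0 - X)            # right-hand side x(1 - x)

ustar = np.zeros((n, n))     # u*: zero on the Dirichlet rows y = 0, 1

for _ in range(20000):       # plain Jacobi sweeps; plenty for this small grid
    new = ustar.copy()
    new[1:-1, 1:-1] = 0.25 * (ustar[:-2, 1:-1] + ustar[2:, 1:-1]
                              + ustar[1:-1, :-2] + ustar[1:-1, 2:]
                              + h * h * f[1:-1, 1:-1])
    new[0, :] = new[1, :]    # first-order Neumann u*_x = 0 at x = 0
    new[-1, :] = new[-2, :]  # ... and at x = 1
    new[:, 0] = 0.0          # Dirichlet u* = 0 at y = 0
    new[:, -1] = 0.0         # ... and at y = 1
    ustar = new

# Residual of the discrete equation -Laplacian_h(u*) = f at interior points
lap = (ustar[:-2, 1:-1] + ustar[2:, 1:-1] + ustar[1:-1, :-2]
       + ustar[1:-1, 2:] - 4.0 * ustar[1:-1, 1:-1]) / h**2
res = np.abs(-lap - f[1:-1, 1:-1]).max()

u = ustar + 1.0 + Y          # undo the substitution: u = u* + w, with w = 1 + y
print(res, u[:, 0].max(), u[:, -1].min())
```

Because $u^*$ vanishes on the rows $y = 0$ and $y = 1$, the reconstructed $u$ satisfies $u(x,0) = 1$ and $u(x,1) = 2$ there exactly, while the interior residual of $-\nabla^2 u^* = x(1-x)$ drops to round-off level.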

ANSWER

The idea is that, since we prefer homogeneous data on $\Gamma_D$ (because of the Fourier series method), we want the following decomposition of the solution $u$: \begin{align*} u=u^*+w, \end{align*} where \begin{align*} -\nabla^2u^*=x(1-x) \quad &\text{in } \Omega\\ u^*=0 \quad &\text{on } \Gamma_D\\ \frac{\partial u^*}{\partial x}=0 \quad &\text{on } \Gamma_N \end{align*}

and

\begin{align*} -\nabla^2w=0 \quad &\text{in } \Omega\\ w(x,0)=1 & \\ w(x,1)=2 & \\ \frac{\partial w}{\partial x}=0 \quad &\text{on } \Gamma_N. \end{align*}

One could verify that $u=u^*+w$ satisfies the original system of equations.

Evidently $w=1+y$ is a solution satisfying the above restrictions. (An affine function $w=a+bx+cy$ is one of the simplest examples of a harmonic function, and here $a=1$, $b=0$, $c=1$ works.)
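A quick symbolic check (with sympy; purely illustrative) that $w = 1 + y$ does satisfy all of these restrictions:

```python
import sympy as sp

x, y = sp.symbols("x y")
w = 1 + y

# Harmonic: Laplacian(w) = 0, so subtracting w leaves the right-hand side untouched
assert sp.diff(w, x, 2) + sp.diff(w, y, 2) == 0

# Carries the inhomogeneous Dirichlet data on Gamma_D
assert w.subs(y, 0) == 1 and w.subs(y, 1) == 2

# Satisfies the homogeneous Neumann data on Gamma_N
assert sp.diff(w, x) == 0
```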