Can this problem be solved with eigenfunction expansion? Thought it could, then things got weird.


I am being asked to determine whether the following problem

$\begin{align} u_{xyy}(x,y) + u_{xxy}(x,y)=0, && 0<x,\, y<1 \\ u(x,0)=u(x,1)=0, && 0<x<1 \\ u(0,y)=f(y),\,u(1,y) = 0, && 0<y<1 \end{align}$

can be solved by eigenfunction expansion.

Although my book doesn't explicitly state these as sufficient conditions for a PDE to be solvable this way, it does say that every problem that can be solved by eigenfunction expansion shares the following two properties:

  1. It is separable; i.e., $\exists$ a solution of the form $u(x,y) = X(x)Y(y)$.

  2. The domain where the equation is satisfied is a bounded set that is a coordinate cell for some coordinate system (i.e., a bounded set is a coordinate cell in the $\alpha \beta$ coordinate system if it is of the form $\{ (\alpha,\beta): a < \alpha < b, \, c< \beta < d \}$, where $a,b,c,$ and $d$ denote finite constants).

Now, as for Property 2: it doesn't seem to me that the domain $0 < x,\, y < 1$ qualifies as a coordinate cell, since, read as $0 < x$ and $y < 1$, it is not bounded from below or from the right; but there are other problems in my book that have the same domain and are solvable via eigenfunction expansion.

For example, the problem

$\begin{align}u_{xx}(x,y)+u_{yy}(x,y) = 0 &&\text{for}\, 0<x,\, y<1 \\ u_{x}(0,y)=0, \, u(1,y)=0, && 0<y<1 \\ u(x,0)=1, \, u_{y}(x,1)=0, && 0<x<1\end{align}$

is solvable by eigenfunction expansion, and the solution given in the back of the book is

$\displaystyle u(x,y) = \frac{4}{\pi}\sum_{n=1}^{\infty} (-1)^{n+1} \frac{\displaystyle \cosh\left[\left(n-\frac{1}{2} \right)\pi \left( 1 - y \right)\right]}{\displaystyle (2n-1)\cosh\left[\left(n-\frac{1}{2}\right)\pi\right]}\cos \left(n - \frac{1}{2} \right)\pi x$
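(If it helps, the series can be spot-checked numerically. The sketch below assumes the intended form has eigenfunctions $\cos\left[\left(n-\frac12\right)\pi x\right]$ and a $\cosh\left[\left(n-\frac12\right)\pi\right]$ normalization in the denominator, so that all four boundary conditions check out; that normalization is my own assumption about what the printed formula intends.)

```python
import numpy as np

def u(x, y, N=2000):
    # Truncated series; assumes eigenfunctions cos[(n-1/2)*pi*x] and a
    # cosh[(n-1/2)*pi] normalization (my reading of the intended formula).
    n = np.arange(1, N + 1)
    k = (n - 0.5) * np.pi
    a = (4 / np.pi) * (-1.0) ** (n + 1) / (2 * n - 1)
    # cosh(k*(1-y)) / cosh(k), rewritten with decaying exponentials to avoid overflow
    ratio = np.exp(-k * y) * (1 + np.exp(-2 * k * (1 - y))) / (1 + np.exp(-2 * k))
    return float(np.sum(a * np.cos(k * x) * ratio))

print(u(0.5, 0.0))  # boundary u(x,0) = 1: close to 1
print(u(1.0, 0.5))  # boundary u(1,y) = 0: essentially 0, since cos[(n-1/2)pi] = 0
```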

After noticing this, I decided to check whether the problem satisfies Property 1, and it does:

Using the solution form $u(x,y) = X(x)Y(y)$, I found $u_{xyy}(x,y) = X^{\prime}(x)Y^{\prime\prime}(y)$ and $u_{xxy}(x,y) = X^{\prime\prime}(x)Y^{\prime}(y)$. Then, I substituted these partials into the equation to obtain $X^{\prime}(x)Y^{\prime\prime}(y) + X^{\prime\prime}(x)Y^{\prime}(y) = 0 $, and then finally, $\displaystyle \frac{X^{\prime\prime}(x)}{X^{\prime}(x)} = \frac{-Y^{\prime\prime}(y)}{Y^{\prime}(y)}$. Since the LHS depends only on $x$ and the RHS depends only on $y$, the equation is clearly separable.
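(This separation step can be double-checked symbolically; a minimal sympy sketch:)

```python
import sympy as sp

x, y = sp.symbols('x y')
X = sp.Function('X')(x)
Y = sp.Function('Y')(y)
u = X * Y

# u_xyy + u_xxy for the separated ansatz u = X(x) Y(y)
pde = sp.diff(u, x, y, y) + sp.diff(u, x, x, y)

# should equal X'(x) Y''(y) + X''(x) Y'(y)
expected = sp.diff(X, x) * sp.diff(Y, y, 2) + sp.diff(X, x, 2) * sp.diff(Y, y)
print(sp.simplify(pde - expected))  # 0
```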

However, once I got started trying to solve the problem, things got wonky:

In order for $\displaystyle\frac{X^{\prime\prime}(x)}{X^{\prime}(x)} = \frac{-Y^{\prime\prime}(y)}{Y^{\prime}(y)}$ to hold as $x$ and $y$ range over the domain, each side must equal the same constant, say $\mu$.

Therefore, our PDE has reduced to the pair of ODEs:

$\begin{align} -Y^{\prime\prime}(y) = \mu Y^{\prime}(y) \end{align}$

-and-

$\begin{align} X^{\prime\prime}(x) = \mu X^{\prime}(x). \end{align}$
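(As a sanity check, sympy's `dsolve` recovers the general solutions of both ODEs; I assume a real, nonzero $\mu$ here so the characteristic roots are distinct:)

```python
import sympy as sp

x, y = sp.symbols('x y')
mu = sp.symbols('mu', positive=True)  # real, nonzero separation constant (assumption)
X, Y = sp.Function('X'), sp.Function('Y')

# -Y''(y) = mu*Y'(y)   and   X''(x) = mu*X'(x)
solY = sp.dsolve(sp.Eq(-Y(y).diff(y, 2), mu * Y(y).diff(y)), Y(y))
solX = sp.dsolve(sp.Eq(X(x).diff(x, 2), mu * X(x).diff(x)), X(x))

# Each general solution is a constant plus an exponential:
# Y(y) = C1 + C2*exp(-mu*y),  X(x) = C1 + C2*exp(mu*x)
print(solY.rhs)
print(solX.rhs)
```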

Then, substituting $u(x,y) = X(x)Y(y)$ into the boundary conditions, we obtain:

$\begin{align} X(x)Y(0) = 0 && X(x)Y(1) = 0, && 0<x<1 \\ X(0)Y(y) = f(y) && X(1)Y(y) = 0, && 0<y<1 \end{align}$

One way for the first row of new boundary conditions to be satisfied is for $X(x) = 0$ $\forall x$, but that would contradict the second row. A second way for the first row to be satisfied is for $Y(0) = Y(1) = 0$, and since this does not contradict the second row, we accept this possibility.

Now, we get to the part where everything went to hell: I went to tackle the $Y$ equation

$\begin{align}-Y^{\prime\prime}(y) = \mu Y^{\prime}(y), && Y(0) = Y(1) = 0, \end{align}$

but quickly realized it is not a Sturm-Liouville problem: the eigenvalue parameter $\mu$ multiplies $Y^{\prime}(y)$ rather than $Y(y)$, so the equation cannot be written in the standard form $-(pY^{\prime})^{\prime} + qY = \mu w Y$. The $X$ equation has the same issue.
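(For what it's worth, imposing the boundary conditions on the general solution $Y(y) = C_1 + C_2 e^{-\mu y}$, still assuming a real, nonzero $\mu$, forces the trivial solution, which matches my sense that something has gone wrong:)

```python
import sympy as sp

y = sp.symbols('y')
mu = sp.symbols('mu', positive=True)  # real, nonzero mu (assumption)
C1, C2 = sp.symbols('C1 C2')

# General solution of -Y'' = mu*Y' is Y(y) = C1 + C2*exp(-mu*y)
Yg = C1 + C2 * sp.exp(-mu * y)

# Impose Y(0) = 0 and Y(1) = 0
sol = sp.solve([Yg.subs(y, 0), Yg.subs(y, 1)], [C1, C2])
print(sol)  # {C1: 0, C2: 0} -- only the trivial solution
```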

The reason the book gave separability and a bounded coordinate-cell domain as necessary for solving a problem by eigenfunction expansion is that those properties are needed to reduce the equation to a Sturm-Liouville problem. Without Sturm-Liouville form, it says, there can be no eigenfunction expansion.

Does this mean that this problem cannot be solved by eigenfunction expansion? And if so, what "warning signs" should I have noticed at the outset that would have told me so? The third-order partial derivatives instead of just second-order ones? Neither of the two properties the textbook told me to look out for warned me that this could happen...

Thanks.