How to proceed in this boundary value problem where eigenvalues are calculated numerically?


While solving a boundary value problem (background provided in the Context section) I arrive at the following two variable-separated equations for $F(x)$ and $G(y)$:

\begin{eqnarray} \lambda_h F''' - 2 \lambda_h \beta_h F'' + \left( (\lambda_h \beta_h - 1) \beta_h - \mu \right) F' + \beta_h^2 F &=& 0, \tag 1\\ V \lambda_c G''' - 2 V \lambda_c \beta_c G'' + \left( (\lambda_c \beta_c - 1) V \beta_c + \mu \right) G' + V \beta_c^2 G &=& 0,\tag 2 \end{eqnarray} with some separation constant $\mu \in \mathbb{R}$.

with boundary conditions as:

For G: $G'(0)=0, G(0)=0$ and $\frac{G''(1)}{G'(1)}=\beta_c$

For F: $F'(0)=0$ and $\frac{F''(1)}{F'(1)}=\beta_h$

The non-homogeneous condition in $F$ is: $\beta_h e^{-\beta_c y}G'(y)F(0)=1$

Since the boundary conditions for $(2)$ are all homogeneous, I have calculated the eigenvalues $\mu_i$ of $(2)$ numerically. For some realistic parameters such as $\beta_c = 0.921$, $\lambda_c = 1.775\times 10^{-4}$, $V=1$, these eigenvalues are

0.834041, 0.845661, 0.864286, 0.888675, 0.916951, 0.94696, 0.977271, 1.0079, 1.03972, 1.07361, 1.11015,...
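For concreteness, eigenvalues of this type can be obtained with a shooting method (a sketch, not necessarily the computation used here): integrate $(2)$ from $y=0$ with $G(0)=G'(0)=0$ and the normalization $G''(0)=1$, then scan $\mu$ for sign changes of the residual $G''(1)-\beta_c G'(1)$. The scan window and tolerances below are illustrative:

```python
# Shooting-method sketch for the eigenvalues mu of equation (2).
# Assumed: initial data G(0)=0, G'(0)=0, normalization G''(0)=1,
# eigencondition G''(1) - beta_c*G'(1) = 0; parameters from the question.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

beta_c, lam_c, V = 0.921, 1.775e-4, 1.0

def rhs(y, u, mu):
    G, dG, d2G = u                     # u = (G, G', G'')
    # solve (2) for G''':
    d3G = (2.0 * V * lam_c * beta_c * d2G
           - ((lam_c * beta_c - 1.0) * V * beta_c + mu) * dG
           - V * beta_c**2 * G) / (V * lam_c)
    return [dG, d2G, d3G]

def residual(mu):
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 1.0], args=(mu,),
                    rtol=1e-10, atol=1e-12)
    _, dG1, d2G1 = sol.y[:, -1]
    return d2G1 - beta_c * dG1         # vanishes at an eigenvalue

# bracket sign changes on a grid, then bisect
grid = np.linspace(0.80, 1.12, 150)
vals = [residual(m) for m in grid]
eigs = [brentq(residual, a, b)
        for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
        if fa * fb < 0]
print(eigs)
```

If this formulation matches the one used to produce the list above, the bracketed roots should reproduce values of that size; if not, the same scan-and-bisect pattern still applies.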

Now, since these eigenvalues are only known numerically, I cannot figure out how to move forward with this problem.

I know from standard PDE problems that these eigenvalues should be used to build the $G$ solution and then employed in the non-homogeneous condition on $F$ to determine the constants. But how am I supposed to even use orthogonality here? I would really appreciate a step-wise way forward for such problems where the eigenvalues are only available numerically.
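One practical workaround for the orthogonality question: when the eigenfunctions are only available numerically (and the problem need not be self-adjoint in the standard $L^2$ inner product), the projection integrals can be replaced by a collocation/least-squares fit, sampling the non-homogeneous condition on a grid and solving for all series constants in one linear solve. A sketch with placeholder cosine modes standing in for the numerically computed eigenfunction combinations:

```python
# Least-squares determination of series constants when eigenfunctions are
# numeric. Phi's columns are placeholder modes (cosines); in the real problem
# they would be the numerically integrated combinations
# beta_h * exp(-beta_c*y) * G_i'(y) evaluated on the same grid.
import numpy as np

y = np.linspace(0.0, 1.0, 200)            # collocation grid
N = 8                                      # number of modes retained
Phi = np.stack([np.cos(i * np.pi * y) for i in range(N)], axis=1)

target = np.ones_like(y)                   # RHS of the condition "... = 1"
coeffs, *_ = np.linalg.lstsq(Phi, target, rcond=None)
residual = np.max(np.abs(Phi @ coeffs - target))
print(coeffs.round(6), residual)
```

The fit quality (the printed residual) tells you directly how well the truncated series can satisfy the non-homogeneous condition, which is harder to see from formal orthogonality arguments.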


Context

I had the following system of PDEs $$\frac{\partial \theta_h}{\partial x}+\beta_h (\theta_h-\theta_w) = 0 \tag A$$

$$\frac{\partial \theta_c}{\partial y} + \beta_c (\theta_c-\theta_w) = 0 \tag B$$

$$\lambda_h \frac{\partial^2 \theta_w}{\partial x^2} + \lambda_c V\frac{\partial^2 \theta_w}{\partial y^2}-\frac{\partial \theta_h}{\partial x} - V\frac{\partial \theta_c}{\partial y} = 0 \tag C$$

with the boundary conditions ($\beta_h, \beta_c, V, \lambda_h, \lambda_c$ are constants)

$$\theta_w(0,y)=1, \theta_w(x,0)=0$$ $$\frac{\partial \theta_w(1,y)}{\partial x}=\frac{\partial \theta_w(x,1)}{\partial y}=0$$

$$\theta_h(0,y)=1, \theta_c(x,0)=0$$ Using the transformation $\theta_{h1}(x,y):=\theta_h (x,y)-1$ (so that $\theta_{h1}(0,y)=0$, which is needed to get an additional homogeneous condition on $F$), where

\begin{eqnarray} \theta_{h1}(x,y) &=& \beta_h e^{-\beta_h x} \int e^{\beta_h x} (\theta_w(x,y)-1) \, \mathrm{d}x,\\ \theta_c(x,y) &=& \beta_c e^{-\beta_c y} \int e^{\beta_c y} \theta_w(x,y) \, \mathrm{d}y. \end{eqnarray}

which are substituted in $(C)$ to get:

$$\lambda_h \frac{\partial^2 \theta_w}{\partial x^2} + \lambda_c V \frac{\partial^2 \theta_w}{\partial y^2} +( -\beta_h - V \beta_c )\theta_w +\beta_h^2 e^{-\beta_h x} \int e^{\beta_h x} \theta_w(x,y) \mathrm{d}x + \beta_c^2 e^{-\beta_c y}\int e^{\beta_c y} \theta_w(x,y)\mathrm{d}y = 0 \tag D$$

\begin{eqnarray} \rightarrow 0 &=& e^{-\beta_h x} \left( \lambda_h e^{\beta_h x} \frac{\partial^2 \theta_w}{\partial x^2} - \beta_h e^{\beta_h x} \theta_w + \beta_h^2 \int e^{\beta_h x} \theta_w \, \mathrm{d}x \right) +\\ && + V e^{-\beta_c y} \left( \lambda_c e^{\beta_c y} \frac{\partial^2 \theta_w}{\partial y^2} - \beta_c e^{\beta_c y} \theta_w + \beta_c^2 \int e^{\beta_c y} \theta_w \, \mathrm{d}y \right). \tag E \end{eqnarray}

Using the ansatz $\theta_w(x,y) = e^{-\beta_h x} f(x) e^{-\beta_c y} g(y)$ on $(E)$ we obtain

$$\Bigg(\frac{\lambda_h (f^{''}-2\beta_h f^{'}+\beta_h^2 f)}{f} - \beta_h + \frac{\beta_h^2 \int f}{f}\Bigg) = \\ -V\Bigg(\frac{\lambda_c (g^{''}-2\beta_c g^{'}+\beta_c^2 g)}{g} - \beta_c + \frac{\beta_c^2 \int g}{g}\Bigg) = \mu \tag F $$

Each side of $(F)$ is a function of only $x$ or only $y$, which is possible only if both sides equal a constant $\mu$.

Thus taking:

$$ \Bigg(\frac{\lambda_h (f^{''}-2\beta_h f^{'}+\beta_h^2 f)}{f} - \beta_h + \frac{\beta_h^2 \int f}{f}\Bigg) = \mu \tag i $$

On solving $(\mathrm{i})$, and using $F(x) := \int f(x)\,\mathrm{d}x$ to remove the integral, we get:

$$ \lambda_h F''' - 2 \lambda_h \beta_h F'' + \left( (\lambda_h \beta_h - 1) \beta_h - \mu \right) F' + \beta_h^2 F = 0, \tag 1\\ $$
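This step can be checked symbolically: multiplying $(\mathrm{i})$ through by $f = F'$ should reproduce $(1)$ term by term. A quick verification sketch:

```python
# Verify that multiplying (i) by f = F' reproduces equation (1).
import sympy as sp

x = sp.symbols('x')
lam_h, beta_h, mu = sp.symbols('lambda_h beta_h mu')
F = sp.Function('F')(x)
f = F.diff(x)                                    # f := F', so int f = F

# (i) multiplied through by f:
lhs = (lam_h * (f.diff(x, 2) - 2*beta_h*f.diff(x) + beta_h**2 * f)
       - beta_h * f + beta_h**2 * F - mu * f)
# equation (1):
eq1 = (lam_h * F.diff(x, 3) - 2*lam_h*beta_h*F.diff(x, 2)
       + ((lam_h*beta_h - 1)*beta_h - mu) * F.diff(x) + beta_h**2 * F)

diff = sp.simplify(sp.expand(lhs - eq1))
print(diff)   # 0
```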

Similar manipulations for $G$ give $(2)$.


There are 2 best solutions below


Here's an attempt at a solution, taking a slightly different path than yours. I was taught to first solve all of the equations in general form, and only worry about boundary conditions at the very end when fixing constants. I would start directly with separation of variables

$$ \begin{eqnarray} \theta_w (x,y) &=& X_w(x)Y_w(y) \\ \theta_h (x,y) &=& X_h(x)Y_h(y) \\ \theta_c (x,y) &=& X_c(x)Y_c(y) \end{eqnarray} $$

Substituting these into the original PDE we get

$$ \begin{eqnarray} \dot{X}_h Y_h + \beta_h (X_h Y_h - X_w Y_w) &=& 0 \tag{A} \\ X_c \dot Y_c + \beta_c (X_c Y_c - X_w Y_w) &=& 0 \\ \lambda_h \ddot{X}_w Y_w + \lambda_c V X_w \ddot Y_w - \dot{X}_h Y_h - V X_c \dot{Y}_c &=& 0 \end{eqnarray} $$

Rearranging the terms so that X's are on one side, and Y's are on the other, we get the following for the first two equations

$$ \begin{eqnarray} \frac{\dot{X}_h}{X_h} + \beta_h &=& \beta_h \frac{X_w}{X_h} \frac{Y_w}{Y_h} \\ \frac{\dot{Y}_c}{Y_c} + \beta_c &=& \beta_c \frac{X_w}{X_c} \frac{Y_w}{Y_c} \end{eqnarray} $$

Since LHS only depends on one variable, and RHS depends on both, we conclude that

$$ \begin{eqnarray} Y_w &=& \eta_h Y_h \\ X_w &=& \eta_c X_c \end{eqnarray} $$

Where $\eta$'s are some proportionality constants. With this in mind, the first two equations become

$$ \begin{eqnarray} \dot{X}_h + \beta_h X_h &=& \beta_h \eta_h X_w \tag B\\ \dot{Y}_c + \beta_c Y_c &=& \beta_c \eta_c Y_w \end{eqnarray} $$

Rearranging the terms in the 3rd equation, and substituting the two identities we have just found, we arrive at

$$ \begin{eqnarray} \lambda_h \ddot{X}_w - \mu X_w &=& \dot{X}_h \tag C\\ \lambda_c \ddot{Y}_w + \frac{\mu}{V} Y_w &=& \dot{Y}_c \\ \end{eqnarray} $$

where $\mu$ is a separation constant (you called it eigenvalue) in your solution.

Now, we could proceed as you suggest, solving the first two equations using an integrating factor. I have tried, but it gets messy. Instead, why don't we try to eliminate w's from the equations. This way, we still have to solve 3rd order ODE's, but at least our constraints are still very simple. Substituting $X_w$ and $Y_w$ into the last equation, we get

$$ \begin{eqnarray} \lambda_h (\dddot{X}_h + \beta_h \ddot{X}_h) - \mu (\dot{X}_h + \beta_h X_h) &=& \beta_h \eta_h \dot{X}_h \\ \lambda_c (\dddot{Y}_c + \beta_c \ddot{Y}_c) + \frac{\mu}{V} (\dot{Y}_c + \beta_c Y_c) &=& \beta_c \eta_c \dot{Y}_c \\ \end{eqnarray} $$

From here on it gets messy, so I will only outline the rest of the solution rather than carry it out fully. Solve each of these linear 3rd-order ODEs using the ansatz $X_h = e^{a_h x}$ and $Y_c = e^{a_c y}$. This gives a 3rd-order characteristic polynomial whose 3 roots you may have to find numerically. Thus, your solutions will be

$$ \begin{eqnarray} X_h &=& A_h e^{a_h x} + B_h e^{b_h x} + C_h e^{c_h x} \\ Y_c &=& A_c e^{a_c y} + B_c e^{b_c y} + C_c e^{c_c y} \end{eqnarray} $$

where the lower-case constants are the roots of your polynomial, that is, functions of $\mu$ and $\beta$'s, $\lambda$'s and $\eta$'s, and the upper-case constants are still unknown. For each equation, you can eliminate one of the 3 capital constants by applying the corresponding boundary condition.
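The root-finding step is easy to do numerically: for the $X_h$ equation above, the characteristic cubic is $\lambda_h(a^3+\beta_h a^2) - \mu(a+\beta_h) - \beta_h\eta_h a = 0$. A sketch (the parameter values, including $\mu$ and $\eta_h$, are assumed for illustration only):

```python
# Numerical roots of the characteristic cubic for X_h = e^{a x}:
#   lambda_h*a^3 + lambda_h*beta_h*a^2 - (mu + beta_h*eta_h)*a - mu*beta_h = 0
# The parameter values below are assumed for illustration only.
import numpy as np

lam_h, beta_h, mu, eta_h = 1.775e-4, 0.921, 0.9, 1.0
poly = [lam_h, lam_h * beta_h, -(mu + beta_h * eta_h), -mu * beta_h]
a_h, b_h, c_h = np.roots(poly)          # the three roots a_h, b_h, c_h
print(a_h, b_h, c_h)
```

With $\lambda_h$ small, expect two large roots of opposite sign, roughly $\pm\sqrt{(\mu+\beta_h\eta_h)/\lambda_h}$, plus one moderate root; the resulting steep $e^{a_h x}$ modes matter when applying boundary conditions numerically.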

Once this has been done, you can substitute $X_h$ and $Y_c$ into (B), obtaining expressions for $X_w$ and $Y_w$. Then you can apply the remaining 4 boundary conditions to $X_w$ and $Y_w$, eliminating the remaining 4 capital constants and potentially discretizing the values of $\mu$. Finally, you can substitute all of the resulting equations into (A) in order to eliminate the $\eta$'s.
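The discretization of $\mu$ can be sketched concretely: for each trial $\mu$, apply the homogeneous boundary conditions to the exponential modes and look for $\mu$ where the resulting matrix becomes singular. The illustration below (an assumption-laden sketch, using the question's $G$-equation $(2)$ because all of its conditions are homogeneous) tracks the smallest singular value, whose dips flag admissible $\mu$:

```python
# For each trial mu: find the roots r_i of the characteristic cubic of (2),
# apply G(0)=0, G'(0)=0, G''(1)-beta_c*G'(1)=0 to the modes e^{r_i y},
# and record the smallest singular value of the boundary-condition matrix.
import numpy as np

beta_c, lam_c, V = 0.921, 1.775e-4, 1.0

def smallest_sv(mu):
    r = np.roots([V * lam_c, -2.0 * V * lam_c * beta_c,
                  (lam_c * beta_c - 1.0) * V * beta_c + mu, V * beta_c**2])
    M = np.array([np.ones(3),                        # G(0) = 0
                  r,                                 # G'(0) = 0
                  (r**2 - beta_c * r) * np.exp(r)],  # G''(1)-beta_c*G'(1)=0
                 dtype=complex)
    return np.linalg.svd(M, compute_uv=False)[-1]

mus = np.linspace(0.80, 1.10, 300)
svals = [smallest_sv(m) for m in mus]
# local minima of svals near zero flag candidate eigenvalues mu_i
print(min(svals))
```

In practice one would normalize the rows (the $e^{r}$ row can dominate) and then root-find on the dips; this is the same discretization the outline above describes, just made numerical.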

I expect the final solution to be in the form of series

$$ \begin{eqnarray} \theta_w(x,y) &=& \sum_\mu \alpha_\mu X_w(x, \mu) Y_w(y, \mu) \\ \theta_h(x,y) &=& \sum_\mu \alpha_\mu X_h(x, \mu) Y_w(y, \mu) \\ \theta_c(x,y) &=& \sum_\mu \alpha_\mu X_w(x, \mu) Y_c(y, \mu) \end{eqnarray} $$

I think that the series constants $\alpha_\mu$ will have to be the same for all 3 equations for this to work.


I've been thinking a bit about your problem, and I think I know another approach that has a high likelihood of working. It will be a bit of a pain, but I don't see any impossible steps in it.

  1. Represent each of the three functions using 2D Fourier series
  2. Observe that all equations are linear
    • Thus there is no frequency coupling
    • Thus for every pair of frequencies $\omega_x$, $\omega_y$ there will be a solution from a linear combination of only those terms
  3. Apply boundary conditions directly to each of the three series
    • Note that by orthogonality the boundary condition has to apply to each term of the fourier series
    • Perhaps here your substitution $\theta_{h1} = \theta_h-1$ will be useful
  4. Plug the Fourier series into the PDE and solve by coefficient matching (as is standard in 1D). Make sure you treat separately the cases when one or both of the frequencies are zero.
  5. If you consider all equations for a given frequency pair, you can arrange them into an equation $M\alpha = 0$, where $\alpha$ holds the Fourier coefficients for those frequencies, and $M$ is a small sparse matrix (something like 12×12) that depends only on the constants.
  6. For each frequency, the allowed solutions will lie in the null space of that matrix. If you cannot find the null space analytically, it is not a big deal: calculating a null space numerically is easy, especially for small matrices.
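Step 6 takes only a couple of lines. A sketch with a deliberately rank-deficient stand-in for $M$ (the real matrix would be assembled from the PDE constants for the given frequency pair):

```python
# Numerical null space of a small matrix, as in step 6 above.
import numpy as np
from scipy.linalg import null_space

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # dependent row -> rank deficiency
              [0.0, 1.0, 1.0]])
ns = null_space(M)                  # columns: orthonormal basis of null space
print(ns.shape)                     # one null vector for this stand-in M
```

Every column of `ns` satisfies $M\alpha = 0$ to machine precision, so the admissible Fourier coefficients for that frequency pair are arbitrary linear combinations of those columns.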