solutions to nonhomogeneous system of differential equations with general solution already known


Suppose we have the general solution to $X' = A(t)X$, where $X=(x_1, x_2)^T$. How do we find the general solution to the system $X'= A(t)X + b(t)$, where $b(t)$ is a $2 \times 1$ vector whose entries are polynomials in $t$? How do we find a particular solution?


There are 2 best solutions below


Let's give the two solutions different names and clean up notation. Lowercase boldface will denote vectors ($2\times1$). Plain uppercase will denote matrices. Plain lowercase will be scalars. All explicit dependence on $t$ will be dropped. The one without any forcing term on the right-hand side is denoted $\mathbf{x}$:

$$ \mathbf{x}' = A\mathbf{x} $$

The one with forcing we'll denote by $\mathbf{y}$, $$ \mathbf{y}' = A\mathbf{y} + \mathbf{b}. $$

Let's guess a form for $\mathbf{y}$: the product of $\mathbf{x}$ and some unknown scalar function $u$,
$$ \mathbf{y} = u\mathbf{x}. $$
Then your equation becomes
$$ u\mathbf{x}'+u'\mathbf{x} = uA\mathbf{x} + \mathbf{b}. $$
But we already know that $\mathbf{x}'=A\mathbf{x}$, so substitute that:
$$ uA\mathbf{x}+u'\mathbf{x} = uA\mathbf{x} + \mathbf{b}. $$
Canceling on both sides, we are left with
$$ u'\mathbf{x} = \mathbf{b}. $$
Multiplying by $\mathbf{x}^T$,
$$ u'\mathbf{x}^T\mathbf{x} = \mathbf{x}^T\mathbf{b}. $$
But $\mathbf{x}^T\mathbf{x}=\|\mathbf{x}\|^2$, so dividing by that scalar gives
$$ u' = \frac{\mathbf{x}^T\mathbf{b}}{\|\mathbf{x}\|^2}, $$
and integrating and restoring the explicit $t$ dependence gives
$$ u(t) = \int\frac{\mathbf{x}^T(t)\,\mathbf{b}(t)}{\|\mathbf{x}(t)\|^2}\,dt. $$
Multiplying this by your original known solution then gives a particular solution $\mathbf{y} = u\mathbf{x}$. One caveat: the step $u'\mathbf{x} = \mathbf{b}$ can hold with a scalar $u'$ only when $\mathbf{b}(t)$ is parallel to $\mathbf{x}(t)$; for a general $\mathbf{b}$ you need the fundamental-matrix approach described in the other answer.
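The steps above can be checked symbolically. The sketch below uses sympy with a hypothetical constant $A$ and a forcing $\mathbf{b}$ deliberately chosen parallel to $\mathbf{x}$ (the case in which this ansatz works); these choices are illustrations, not part of the original question.

```python
import sympy as sp

t = sp.symbols('t')
# hypothetical constant A, chosen for illustration
A = sp.Matrix([[0, 1], [-1, 0]])
# a known solution of the homogeneous system x' = A x
x = sp.Matrix([sp.cos(t), -sp.sin(t)])
assert sp.simplify(sp.diff(x, t) - A * x) == sp.zeros(2, 1)

# forcing chosen parallel to x, so the scalar ansatz applies
b = t * x
u_prime = sp.simplify((x.T * b)[0] / (x.T * x)[0])   # u' = x^T b / ||x||^2
u = sp.integrate(u_prime, t)
y = u * x

# verify that y = u x is indeed a particular solution of y' = A y + b
residual = sp.simplify(sp.diff(y, t) - A * y - b)
assert residual == sp.zeros(2, 1)
```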


The most general solution to

$X' = A(t)X \tag{1}$

is the fundamental matrix solution $\Phi(t, t_0)$; for any $t_0$, this is a time-dependent $2 \times 2$ matrix such that

$\Phi'(t, t_0) = A(t) \Phi(t, t_0) \tag{2}$

with

$\Phi(t_0, t_0) = I. \tag{3}$

Writing

$\Phi(t, t_0) = \begin{bmatrix} \phi_{11}(t, t_0) & \phi_{12}(t, t_0) \\ \phi_{21}(t, t_0) & \phi_{22}(t, t_0) \end{bmatrix}, \tag{4}$

we see that the columns of $\Phi(t, t_0)$ are each themselves solutions of (1) with

$\begin{pmatrix} \phi_{11}(t_0, t_0) \\ \phi_{21}(t_0, t_0) \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \tag{5}$

and

$\begin{pmatrix} \phi_{12}(t_0, t_0) \\ \phi_{22}(t_0, t_0) \end{pmatrix} = \begin{pmatrix} 0 \\ 1\end{pmatrix}. \tag{6}$

If

$X(t_0) = \begin{pmatrix} x_1(t_0) \\ x_2(t_0) \end{pmatrix} \tag{7}$

and we set

$X(t) = \Phi(t, t_0) X(t_0), \tag{8}$

then we see that

$X'(t) = \Phi'(t, t_0) X(t_0) = A(t) \Phi(t, t_0) X(t_0) = A(t) X(t) \tag{9}$

and

$X(t_0) = \Phi(t_0, t_0) X(t_0) = IX(t_0) = X(t_0); \tag{10}$

we see that (8) is the solution to (1) satisfying the initial condition (7). The matrix $\Phi(t, t_0)$ in fact has linearly independent columns, since the same is true of its initial value $I$ (see (3)). (This is a standard result which may be found in many texts covering the theory of linear systems.) Thus we may regard $\Phi(t, t_0)$ as the most general solution of (1) possible; by (8) any $2 \times 1$ vector solution $X(t)$ is expressible as a linear combination of the columns of $\Phi(t, t_0)$; since the solution space is two dimensional, it is apropos to regard $\Phi(t, t_0)$ as the most general solution of (1).
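Properties (3) and (8) are easy to verify numerically. The sketch below assumes a hypothetical constant $A$ (so that the fundamental matrix is a matrix exponential, as in (21) further down) and cross-checks the propagation formula (8) against a direct numerical solve.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# hypothetical constant A, chosen for illustration; the text allows general A(t)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t0, t1 = 0.0, 1.5

def Phi(t):
    # fundamental matrix for constant A, cf. (21)
    return expm(A * (t - t0))

# initial condition (3): Phi(t0, t0) = I
assert np.allclose(Phi(t0), np.eye(2))

# formula (8): Phi propagates any initial vector to the solution at time t
X0 = np.array([0.3, -1.2])
sol = solve_ivp(lambda t, X: A @ X, (t0, t1), X0, rtol=1e-10, atol=1e-12)
assert np.allclose(Phi(t1) @ X0, sol.y[:, -1], atol=1e-6)
```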

If we have such a $\Phi(t, t_0)$ at our disposal, we may find an expression for the solution of the inhomogeneous equation

$X'(t) = A(t)X(t) + b(t) \tag{11}$

by means of $\Phi(t, t_0)$ as follows: since the columns of the matrix $\Phi(t, t_0)$ are linearly independent, it is invertible and we have

$\Phi^{-1}(t, t_0) \Phi(t, t_0) = I; \tag{12}$

differentiating (12) with respect to $t$:

$(\Phi^{-1}(t, t_0))' \Phi(t, t_0) + \Phi^{-1}(t, t_0) \Phi'(t, t_0) = 0; \tag{13}$

using (2)

$(\Phi^{-1}(t, t_0))' \Phi(t, t_0) + \Phi^{-1}(t, t_0) A(t)\Phi(t, t_0) = 0; \tag{14}$

right multiplying by $\Phi^{-1}(t, t_0)$ and isolating $(\Phi^{-1}(t, t_0))'$:

$(\Phi^{-1}(t, t_0))' = - \Phi^{-1}(t, t_0) A(t). \tag{15}$
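Identity (15) can be confirmed symbolically. The sketch below uses sympy with a hypothetical constant $A$ and $t_0 = 0$; the identity itself holds for general $A(t)$.

```python
import sympy as sp

t = sp.symbols('t')
# hypothetical constant A, chosen for illustration
A = sp.Matrix([[0, 1], [-1, 0]])
Phi = (A * t).exp()                  # fundamental matrix with t0 = 0
PhiInv = sp.simplify(Phi.inv())

lhs = sp.diff(PhiInv, t)             # (Phi^{-1})'
rhs = -PhiInv * A                    # right-hand side of (15)
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```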

We use (15) together with (11) to evaluate $(\Phi^{-1}(t, t_0) X(t))'$, thusly:

$(\Phi^{-1}(t, t_0) X(t))' = (\Phi^{-1}(t, t_0))' X(t) + \Phi^{-1}(t, t_0) X'(t)$ $= -\Phi^{-1}(t, t_0) A(t) X(t) + \Phi^{-1}(t, t_0) (A(t) X(t) + b(t))$ $= -\Phi^{-1}(t, t_0) A(t) X(t) + \Phi^{-1}(t, t_0) A(t) X(t) + \Phi^{-1}(t, t_0) b(t) = \Phi^{-1}(t, t_0) b(t); \tag{16}$

we may integrate (16) 'twixt $t_0$ and $t$:

$\Phi^{-1}(t, t_0) X(t) - \Phi^{-1}(t_0, t_0) X(t_0)$ $= \int_{t_0}^t (\Phi^{-1}(s, t_0) X(s))' ds = \int_{t_0}^t \Phi^{-1}(s, t_0) b(s) ds; \tag{17}$

via (3) we obtain

$\Phi^{-1}(t, t_0) X(t) = X(t_0) + \int_{t_0}^t \Phi^{-1}(s, t_0) b(s) ds, \tag{18}$

whence

$X(t) = \Phi(t, t_0)(X(t_0) + \int_{t_0}^t \Phi^{-1}(s, t_0) b(s) ds). \tag{19}$

Formula (19) presents the general solution to (11) in terms of the general solution $\Phi(t, t_0)$ of (1), as per request. When $X(t_0) = 0$, we obtain the particular solution $X_p(t)$ associated with $b(t)$:

$X_p(t) = \Phi(t, t_0)\int_{t_0}^t \Phi^{-1}(s, t_0) b(s) ds. \tag{20}$

We note that when $b(t) = 0$, (19) reduces to the solution (8) of the homogeneous equation (1); the homogeneous solutions are thus seen to accommodate the initial conditions, whereas $X_p(t)$ arises solely from the "driving" term $b(t)$.
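Formula (20) is straightforward to implement numerically. Here is a sketch assuming a constant $A$ (so $\Phi(t, t_0) = e^{A(t - t_0)}$, cf. (21)); the matrix $A$ and polynomial forcing $b(t)$ below are hypothetical, chosen only for illustration, and the result is cross-checked against a direct numerical solve with $X(t_0) = 0$.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad

# hypothetical constant A and polynomial forcing b(t), for illustration
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t0 = 0.0

def b(t):
    return np.array([1.0 + t, t**2])

def X_p(t):
    # formula (20): X_p(t) = Phi(t,t0) * int_{t0}^{t} Phi^{-1}(s,t0) b(s) ds
    integrand = lambda s, i: (expm(-A * (s - t0)) @ b(s))[i]
    integral = np.array([quad(integrand, t0, t, args=(i,))[0] for i in range(2)])
    return expm(A * (t - t0)) @ integral

# cross-check against a direct numerical solve of X' = A X + b, X(t0) = 0
sol = solve_ivp(lambda t, X: A @ X + b(t), (t0, 1.0), np.zeros(2),
                rtol=1e-10, atol=1e-12)
assert np.allclose(X_p(1.0), sol.y[:, -1], atol=1e-6)
```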

We can't really say much more without solving (2)-(3) for $\Phi(t, t_0)$, and this can be quite difficult even for fairly simple $A(t)$; of course, in the event that $A(t)$ is constant, we may write

$\Phi(t, t_0) = e^{A(t - t_0)}, \tag{21}$

but this is one of the few cases in which a solution is known a priori. In the event that the components of $b(t)$ are polynomials, we have

$b(t) = \sum_{i=0}^m t^i b_i, \tag{22}$ where the $b_i$ are constant vectors; then the integral occurring in (19), (20) may be evaluated one power of $t$ at a time, viz

$\int_{t_0}^t \Phi^{-1}(s, t_0) b(s) ds = \sum_{i=0}^m \int_{t_0}^t s^i \Phi^{-1}(s, t_0) b_i ds, \tag{23}$

but this still doesn't get us very far for general $A(t)$, though in the case of constant $A(t)$ such integrals are found in many tables.
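For constant $A$, the term-by-term evaluation (23) can be checked symbolically. The sketch below takes a hypothetical $A$ and a degree-one polynomial forcing ($m = 1$, with constant vectors $b_0$, $b_1$ chosen for illustration), and verifies that the sum of the power-by-power integrals equals the integral of the full forcing term.

```python
import sympy as sp

t, s = sp.symbols('t s')
# hypothetical constant A, for illustration
A = sp.Matrix([[0, 1], [-1, 0]])
PhiInv = (-A * s).exp()                  # Phi^{-1}(s, 0) for constant A, t0 = 0

# b(s) = b0 + s*b1, with illustrative constant vectors b0, b1
b0 = sp.Matrix([1, 0])
b1 = sp.Matrix([0, 2])

# left side: integral of Phi^{-1} b in one piece
lhs = sp.integrate(PhiInv * (b0 + s * b1), (s, 0, t))
# right side: one power of s at a time, as in (23)
rhs = (sp.integrate(PhiInv * b0, (s, 0, t))
       + sp.integrate(s * PhiInv * b1, (s, 0, t)))
assert sp.simplify(lhs - rhs) == sp.zeros(2, 1)
```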

It is worth noting that virtually everything we have said generalizes beyond the $2 \times 2$ case to systems of arbitrary (finite) dimension.

The above technique is classical, and occurs in many textbooks; it is the variation of parameters method (undetermined coefficients, by contrast, guesses the form of $X_p$ from the form of $b(t)$), though like my colleague in this question, rajb245, I can never remember which name is which.

Phew! Finally done!

Hope this helps. Cheers!

And as ever,

Fiat Lux!!!