Exponential populations that depend upon each other


I have a question about how to solve an exponential problem that involves two populations, each of which depends on the other.

For example, let's say we have an initial population of $h$ humans that increases by $H$ percent each year. And let's say we have an initial population of $d$ dragons which increases by $D$ percent each year. Each dragon kills $K_h$ humans a year. And every human kills $K_d$ dragons each year.

How can I figure out, in this obviously fictional example, how many dragons or humans there are after $t$ years?

Please note: I am not particularly advanced in math; I apologize if this question is either trivial or nonsensical. Please bear with me.

Best answer:

So as I mentioned in the comments, this is a system of ordinary differential equations. It so happens that this is probably the easiest class of systems of ODEs to solve. But I'll need to work you up to it.

First, let's express your word problem as an equation. Let's call the current number of humans $x$ and the current number of dragons $y$. Note that both of these variables are actually functions of time, so fully we have $x(t)$ and $y(t)$. We also have a way to express the change in $x$ over time, which can be notated as either $x'$ or $\frac{\mathrm{d}x}{\mathrm{d}t}$. I will be using $x'$ throughout for simplicity.

We know that the change in the human population is the growth rate times the current number of humans, minus the kill rate per dragon times the current number of dragons. Similarly for dragons. So we have

\begin{align*} x' &= H x - K_h y \\ y' &= D y - K_d x \end{align*}

We can rewrite this as a vector equation. First we can let

$$ \vec{x} = \left[\matrix{x \\ y}\right] $$

Matrix multiplication of a matrix and a vector operates like this:

$$ \left[\matrix{a & b \\ c & d}\right] \left[\matrix{x \\ y}\right] = \left[\matrix{ax + by \\ cx + dy}\right] $$

I.e., each entry of the first row of the matrix gets multiplied by the corresponding entry of the first (and only) column of the vector, and the sum becomes the first entry of the resulting vector. Similarly for the second row.
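As a quick sanity check, numpy's `@` operator performs exactly this matrix-vector multiplication (the numbers below are arbitrary):

```python
import numpy as np

# The 2x2 matrix [[a, b], [c, d]] and vector [x, y] from the text,
# with arbitrary concrete numbers filled in.
a, b, c, d = 1, 2, 3, 4
x, y = 5, 6

M = np.array([[a, b], [c, d]])
v = np.array([x, y])

result = M @ v  # matrix-vector product: [a*x + b*y, c*x + d*y]
print(result)   # -> [17 39]
```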

So we can rewrite this system of equations as a single vector-valued equation.

\begin{align*} \vec{x'} &= A \vec{x} \\ A &= \left[\matrix{H & -K_h \\ -K_d & D}\right] \end{align*}
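Before solving this analytically, we can already answer the original question numerically by stepping $\vec{x'} = A\vec{x}$ forward in time. The growth and kill rates below are made-up values purely for illustration, and the forward-Euler stepper is a minimal sketch, not a production integrator:

```python
import numpy as np

# Hypothetical rates: 5%/yr human growth, 3%/yr dragon growth,
# 0.02 humans killed per dragon per year, 0.002 dragons killed per human per year.
H, D, K_h, K_d = 0.05, 0.03, 0.02, 0.002
A = np.array([[H, -K_h],
              [-K_d, D]])

x = np.array([1000.0, 100.0])  # initial humans, dragons
t_final, dt = 10.0, 1e-3       # integrate for 10 years in small steps

for _ in range(int(t_final / dt)):
    x = x + dt * (A @ x)       # forward Euler: x(t+dt) ~ x(t) + dt * x'(t)

print(f"after {t_final:.0f} years: {x[0]:.1f} humans, {x[1]:.1f} dragons")
```

Shrinking `dt` improves accuracy; the eigenvalue machinery developed in the rest of the answer gives the exact solution instead.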

A basic (non-vector) equation of this form has a solution that is given by the following. (Please excuse the abuse of notation. This is a sloppy justification, not actually a proof.)

\begin{align*} x' &= a x \\ \frac{\mathrm{d}x}{\mathrm{d}t} &= a x \\ \frac{\mathrm{d}x}{x} &= a \, \mathrm{d}t \\ \int \frac{\mathrm{d}x}{x} &= \int a \, \mathrm{d}t \\ \ln(x) &= at + C \\ x &= C\mathrm{e}^{at} \end{align*}
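You can convince yourself numerically that $x(t) = C\mathrm{e}^{at}$ really satisfies $x' = ax$ by comparing a finite-difference derivative against $a\,x(t)$; the rate and constant below are arbitrary:

```python
import math

a, C = 0.05, 1000.0          # arbitrary growth rate and constant
x = lambda t: C * math.exp(a * t)

t, h = 3.0, 1e-6             # check at t = 3 with a small step
numeric_deriv = (x(t + h) - x(t - h)) / (2 * h)   # central difference
print(numeric_deriv, a * x(t))                     # the two should agree
```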

Or, as you suspected, an exponential. So we will make a guess that our solution is going to be in this form (ansatz). And then take the derivative and compare.

\begin{align*} \vec{x}(t) &= \vec{\eta} \mathrm{e}^{rt} \\ \vec{x'}(t) &= r \vec{\eta} \mathrm{e}^{rt} \\ \end{align*}

However, we already know that $\vec{x'}$ has another value. Specifically, our original differential equation.

$$ \vec{x'} = A \vec{x} = A \vec{\eta} \mathrm{e}^{rt} $$

This demands that $A \vec{\eta} = r \vec{\eta}$ must be true. This form of equation is known as the "eigenvalue problem": multiplying the matrix by a particular vector has the same effect as multiplying that vector by a plain scalar. The scalar produced is said to be an "eigenvalue" of the matrix, and the vector used is an "eigenvector". Each matrix has a certain number of eigenvalues and eigenvectors; an $n \times n$ matrix has up to $n$ of each. So we have up to two eigenvalue/eigenvector pairs for this problem.

I will be a little light on justification here, because learning about eigenvalues and eigenvectors is involved and there are much better resources out there. Essentially, you take the determinant of $A - rI$ (where $I$ is the identity matrix), set it equal to zero, and then solve the resulting equation for $r$. It's not very enlightening to give an exact expression for the solution in our case, because it would be a bit complicated to tease out useful information. Instead, let's just assume I have the eigenvalues $r_1$ and $r_2$ and the eigenvectors $\vec{\eta}_1$ and $\vec{\eta}_2$ after performing this process with concrete values. Now I have a few possibilities.

  1. My two eigenvalues are distinct and real.
  2. My two eigenvalues are complex conjugates of each other.
  3. My two eigenvalues are the same number.

These are the same possibilities as for the roots of an ordinary quadratic equation, of course (because we solved one to get the eigenvalues).
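In practice you rarely grind through the determinant by hand; `numpy.linalg.eig` computes the eigenvalues and eigenvectors at once. The rates in the matrix below are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical coefficient matrix A = [[H, -K_h], [-K_d, D]]
A = np.array([[0.05, -0.02],
              [-0.002, 0.03]])

eigenvalues, eigenvectors = np.linalg.eig(A)
r1, r2 = eigenvalues
eta1, eta2 = eigenvectors[:, 0], eigenvectors[:, 1]  # columns are eigenvectors

# The defining property A @ eta = r * eta holds for each pair:
print(np.allclose(A @ eta1, r1 * eta1))  # True
print(np.allclose(A @ eta2, r2 * eta2))  # True
```

For these values the two eigenvalues come out real and distinct, which is the first of the three cases above.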

(There is also a requirement that the two solutions be linearly independent, which is checked using something called the Wronskian, but I'll leave that for another day.)

For real and distinct eigenvalues, our solution is just a linear combination of the two versions of our ansatz $\vec{\eta} \mathrm{e}^{r t}$. Explicitly:

$$ \vec{x}(t) = c_1 \vec{\eta_1} \mathrm{e}^{r_1 t} + c_2 \vec{\eta_2} \mathrm{e}^{r_2 t} $$
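The constants $c_1$ and $c_2$ are pinned down by the initial populations: at $t = 0$ the exponentials equal 1, so $c_1\vec{\eta}_1 + c_2\vec{\eta}_2 = \vec{x}(0)$, which is a plain linear system. A sketch with hypothetical rates (the specific numbers are assumptions for illustration):

```python
import numpy as np

# Hypothetical coefficient matrix and initial populations
A = np.array([[0.05, -0.02],
              [-0.002, 0.03]])
x0 = np.array([1000.0, 100.0])         # initial humans and dragons

r, V = np.linalg.eig(A)                # eigenvalues r, eigenvectors as columns of V
c = np.linalg.solve(V, x0)             # c1, c2 from c1*eta1 + c2*eta2 = x0

def x(t):
    # General solution: sum of c_i * eta_i * exp(r_i * t)
    return V @ (c * np.exp(r * t))

print(x(0))    # recovers the initial condition [1000, 100]
print(x(10))   # populations after 10 years
```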

For complex eigenvalues, we get the exact same thing. However, because of Euler's formula, we can and generally do rewrite the solution as a sine and cosine pair instead, multiplied by an exponential $\mathrm{e}^{\alpha t}$, where $\alpha$ is the real part of the complex conjugate pair $r = \alpha \pm \beta i$. I'll let you mess around with it if you choose values in this region.

For repeated eigenvalues, we have to come up with another ansatz. In the vector case, simply multiplying the exponential by $t$ is not quite enough: we also need a second vector $\vec{\rho}$, called a generalized eigenvector, satisfying $(A - rI)\vec{\rho} = \vec{\eta}$. The second solution is then $(t\vec{\eta} + \vec{\rho})\mathrm{e}^{rt}$. I'll leave it to you to verify this fact by taking the derivative yourself. Overall you'll get:

$$ \vec{x}(t) = c_1 \vec{\eta} \mathrm{e}^{r t} + c_2 \left( t \vec{\eta} + \vec{\rho} \right) \mathrm{e}^{r t} $$

Note the extra factor of $t$; also, we no longer have $r_1$ or $r_2$, just a single $r$ (and a single eigenvector $\vec{\eta}$).
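To see the repeated-eigenvalue case concretely, take the classic defective matrix $A = \left[\matrix{r & 1 \\ 0 & r}\right]$ (a textbook example, not derived from the dragon problem). Its only eigenvector is $\vec{\eta} = [1, 0]^T$, and $\vec{\rho} = [0, 1]^T$ satisfies $(A - rI)\vec{\rho} = \vec{\eta}$. We can check numerically that $(t\vec{\eta} + \vec{\rho})\mathrm{e}^{rt}$ solves $\vec{x'} = A\vec{x}$, while $t\vec{\eta}\,\mathrm{e}^{rt}$ alone does not:

```python
import numpy as np

r = 0.5
A = np.array([[r, 1.0],
              [0.0, r]])    # repeated eigenvalue r, only one eigenvector

eta = np.array([1.0, 0.0])  # the lone eigenvector
rho = np.array([0.0, 1.0])  # generalized eigenvector: (A - r*I) @ rho = eta

def x2(t):
    # Second solution for the defective case: (t*eta + rho) * e^(r*t)
    return (t * eta + rho) * np.exp(r * t)

# Check x2' = A @ x2 at t = 2 via a central difference:
t, h = 2.0, 1e-6
numeric_deriv = (x2(t + h) - x2(t - h)) / (2 * h)
print(np.allclose(numeric_deriv, A @ x2(t)))   # True

# The naive guess t * eta * e^(r*t), without rho, fails the same check:
naive = lambda t: t * eta * np.exp(r * t)
naive_deriv = (naive(t + h) - naive(t - h)) / (2 * h)
print(np.allclose(naive_deriv, A @ naive(t)))  # False
```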

And that's the general outline of how you would solve a linear system of ordinary differential equations.