How to develop a positive solution space for difference equations?


This is the set of equations I am working on:

D(t) = D(t-1) + (1 - (P(t-1)/P')*alpha)
P(t) = P(t-1) - (1 - (D(t-1)/D')*beta) + (1 - (R(t-1)/R')*gamma)
R(t) = R(t-1) + (1 - (D(t-1)/D')*eta)

where D is demand, P is price, R is resources; D', P', R' are constants, the maximums of the respective variables; and alpha, beta, gamma, eta are conversion ratios. I am trying to solve this set of equations, but the resulting graph goes into negative values. The initial conditions I selected are D(0) = 5, P(0) = 5, R(0) = 5; the ratios are all 1; and the maximum values are all 10. The graph diverges to large values in both the positive and negative directions. Is it possible to make these values stay around the maximums? I can't show the graph because I am a beginner on this site.

BEST ANSWER

With the particular values you have chosen, you are using the system of equations \begin{align*} D(t) &= D(t-1) + \left( 1 - \frac{P(t-1)}{10} \cdot 1 \right) \\ P(t) &= P(t-1) - \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) + \left( 1 - \frac{R(t-1)}{10} \cdot 1 \right) \\ R(t) &= R(t-1) + \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) \end{align*}

With this system, $D(43) = -0.55672\dots$ is the first negative.

[Mathematica plot omitted]

How does this happen? Once demand exceeds 100% (of $D' = 10$), resources decrease and price begins to rise precipitously. Eventually the price is high enough to inhibit demand, and demand decreases. However, at this point the price is very high, demand is still in excess of 100%, and resources are somewhat depleted, so the price continues to rise for some time, producing a large positive overshoot. During this overshoot, demand plummets and becomes negative.

Why does this happen? There is nothing to prevent demand exceeding 100%. In particular, $D(11) = 10.3375\dots$. Nor does anything else about this system prevent any of $D$, $P$, or $R$ exceeding $10$ or becoming negative.
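A direct iteration of the recurrence exhibits this behaviour; the following is a minimal sketch (the function and variable names are mine):

```python
# Direct iteration of the system with all maxima 10, all ratios 1, and
# initial conditions D(0) = P(0) = R(0) = 5.
def step(d, p, r):
    return (d + (1 - p / 10),
            p - (1 - d / 10) + (1 - r / 10),
            r + (1 - d / 10))

ds = [5.0]                      # history of D(t)
state = (5.0, 5.0, 5.0)
for t in range(1, 101):
    state = step(*state)
    ds.append(state[0])

first_negative = next(t for t, d in enumerate(ds) if d < 0)
print(ds[11], first_negative)   # the text reports D(11) = 10.3375... and t = 43
```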

We can write your system as $$ \begin{pmatrix} D(t) \\ P(t) \\ R(t) \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & \frac{-1}{10} & 0 & 1 \\ \frac{1}{10} & 1 & \frac{-1}{10} & 0 \\ \frac{-1}{10} & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} D(t-1) \\ P(t-1) \\ R(t-1) \\ 1 \end{pmatrix} $$ We can jump forward by as many steps as we like by multiplying by powers of this square matrix. However, this square matrix has four eigenvalues, with magnitudes $1.04\dots$, $1.04 \dots$, $1$, and $0.93\dots$. Consequently, some of the entries in the successive powers of this matrix grow without bound. In other words, this system is unstable. (The two eigenvalues with magnitudes greater than $1$ are a complex conjugate pair, so we should expect the system to oscillate with an amplitude envelope growing about 4% per step, which is what happens.)
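These eigenvalue magnitudes are easy to check numerically; a minimal sketch, assuming NumPy is available:

```python
# Eigenvalue magnitudes of the 4x4 update matrix for the particular system.
import numpy as np

A = np.array([[ 1.0, -0.1,  0.0, 1.0],
              [ 0.1,  1.0, -0.1, 0.0],
              [-0.1,  0.0,  1.0, 1.0],
              [ 0.0,  0.0,  0.0, 1.0]])

mags = sorted(abs(np.linalg.eigvals(A)))
print([round(float(m), 4) for m in mags])  # [0.9318, 1.0, 1.0406, 1.0406]
```

Since the largest magnitude exceeds $1$, generic initial conditions produce growing oscillations.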

[Mathematica plot omitted]

(In more detail, there is a 2-dimensional subspace where solutions grow without bound, a 1-dimensional subspace with eigenvalue $1$ (spanned by the fixed point $(10, 10, 10, 1)$) where solutions neither grow nor shrink, and a 1-dimensional subspace where solutions decay. Any given initial condition projects onto these three subspaces. If the initial condition has a nonzero projection onto the 2-dimensional subspace, that part of the solution will oscillate with a magnitude that increases without bound.)

Let's try placing hard limits of $0$ and $10$ on each of $D$, $P$, and $R$ and see what happens.

\begin{align*} D(t) &= \max\{0, \min\{ 10, D(t-1) + \left( 1 - \frac{P(t-1)}{10} \cdot 1 \right) \}\} \\ P(t) &= \max\{0, \min\{ 10, P(t-1) - \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) + \left( 1 - \frac{R(t-1)}{10} \cdot 1 \right) \}\} \\ R(t) &= \max\{0, \min\{ 10, R(t-1) + \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) \}\} \end{align*}

With these hard limits in place, the system stabilises about as quickly as could be expected when the initial changes in $D$ and $R$ are about $+5/10$.
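The clamped iteration is a small change to the direct simulation; a sketch (function names are mine):

```python
# The hard-limited system: the same update rule, with each variable clipped
# to [0, 10] after every step.
def clamp(x):
    return max(0.0, min(10.0, x))

def step_clamped(d, p, r):
    return (clamp(d + (1 - p / 10)),
            clamp(p - (1 - d / 10) + (1 - r / 10)),
            clamp(r + (1 - d / 10)))

state = (5.0, 5.0, 5.0)
history = [state]
for _ in range(200):
    state = step_clamped(*state)
    history.append(state)

in_bounds = all(0 <= x <= 10 for s in history for x in s)
print(in_bounds)  # True by construction
```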

[Mathematica plot omitted]

Is there something less draconian we could do? Let's see whether we can make the system stable by altering the diagonal of the square matrix (i.e., reducing the memory in the three functions). We look at this matrix $$\begin{pmatrix} x & \frac{-1}{10} & 0 & 1 \\ \frac{1}{10} & x & \frac{-1}{10} & 0 \\ \frac{-1}{10} & 0 & x & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \text{.} $$ Numerically searching for the $x$ at which the largest eigenvalue magnitude equals $1$, we find $x = 0.95911\dots$ should be the boundary between stable and unstable behaviour. If we take $x = 0.9$, the analysis above says the system should be stable. \begin{align*} D(t) &= 0.9 D(t-1) + \left( 1 - \frac{P(t-1)}{10} \cdot 1 \right) \\ P(t) &= 0.9 P(t-1) - \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) + \left( 1 - \frac{R(t-1)}{10} \cdot 1 \right) \\ R(t) &= 0.9 R(t-1) + \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) \end{align*}
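The stability claim for $x = 0.9$ can be confirmed by computing the spectral radius of the dynamic (upper-left $3 \times 3$) block; a sketch assuming NumPy:

```python
# Spectral radius of the dynamic block when the diagonal memory is x:
# for x = 0.9 it should drop below 1, so the oscillation decays.
import numpy as np

def dynamic_block(x):
    return np.array([[   x, -0.1,  0.0],
                     [ 0.1,    x, -0.1],
                     [-0.1,  0.0,    x]])

radius = max(abs(np.linalg.eigvals(dynamic_block(0.9))))
print(radius < 1)  # True: the system is stable
```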

[Mathematica plot omitted]

With $x = 0.95911$ (where the eigenvalue magnitudes are $1$, $0.89\dots$, and a complex pair with magnitude $0.999995\dots$), we expect a short-lived component of the solution (decaying by about $1 - 0.89 = 11\%$ per step) and a nearly stable oscillating component. \begin{align*} D(t) &= 0.95911 D(t-1) + \left( 1 - \frac{P(t-1)}{10} \cdot 1 \right) \\ P(t) &= 0.95911 P(t-1) - \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) + \left( 1 - \frac{R(t-1)}{10} \cdot 1 \right) \\ R(t) &= 0.95911 R(t-1) + \left( 1 - \frac{D(t-1)}{10} \cdot 1 \right) \end{align*}
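At this threshold value the largest eigenvalue magnitude of the dynamic block should sit essentially on the unit circle; a quick check (assuming NumPy):

```python
# Eigenvalue magnitudes of the 3x3 dynamic block at the claimed
# stability boundary x = 0.95911.
import numpy as np

x = 0.95911
M = np.array([[   x, -0.1,  0.0],
              [ 0.1,    x, -0.1],
              [-0.1,  0.0,    x]])

mags = sorted(abs(np.linalg.eigvals(M)))
print([round(float(m), 4) for m in mags])  # [0.8909, 1.0, 1.0]
```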

[Mathematica plots omitted]

This system slightly exceeds the maximum ($D$ rises to $10.1535\dots$) but avoids negative values.

We can generalize this to allow the $D'$, $P'$, and $R'$ values to differ and to have values of $\alpha$, $\beta$, $\gamma$, and $\eta$ different from $1$. The resulting constraints on the memories of the three functions needed to produce a stable system are straightforward in principle, but not easy to write down. Adding the non-negativity and maximality constraints makes this even less straightforward.

SECOND ANSWER

Let's start with a generic version of your system, including memory constants (modeled on a single-pole filter, although not equivalent to one) as shown in my other answer to make it possible to get non-divergent behaviour. \begin{align*} D(t) &= m_d D(t-1) + \left( 1 - \frac{P(t-1)}{P'} \cdot \alpha \right) \\ P(t) &= m_p P(t-1) - \left( 1 - \frac{D(t-1)}{D'} \cdot \beta \right) + \left( 1 - \frac{R(t-1)}{R'} \cdot \gamma \right) \\ R(t) &= m_r R(t-1) + \left( 1 - \frac{D(t-1)}{D'} \cdot \eta \right) \\ D(0) &= d_0 \\ P(0) &= p_0 \\ R(0) &= r_0 \end{align*}

We wish to impose conditions so that $0 \leq D \leq D'$, $0 \leq P \leq P'$, and $0 \leq R \leq R'$.

We can write this system as $$ \begin{pmatrix} D(t) \\ P(t) \\ R(t) \\ 1 \end{pmatrix} = \begin{pmatrix} m_d & \frac{-\alpha}{P'} & 0 & 1 \\ \frac{\beta}{D'} & m_p & \frac{-\gamma}{R'} & 0 \\ \frac{-\eta}{D'} & 0 & m_r & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} D(t-1) \\ P(t-1) \\ R(t-1) \\ 1 \end{pmatrix} $$

The eigenvalues of this matrix are $1$ and $\frac{1}{D'P'R'}$ times each of the three roots of $$ x^3 - D'P'R'(m_d + m_p + m_r) x^2 + (D'P'R')^2 \left(m_d m_p + m_d m_r + m_p m_r + \frac{\alpha \beta}{D' P'} \right) x - (m_r R')(D'P'R')^2 \left(m_d m_p D' P' + \alpha \beta - \frac{\alpha \gamma \eta}{m_r R'}\right) \text{.} $$ The discriminant of this cubic is $$ D'^3 P'^3 R'^4 ((D' (m_d - m_p)^2 P' - 4 \alpha \beta) (D' (m_d - m_r) (m_p - m_r) P' R' + R' \alpha \beta)^2 + 2 D' (m_d + m_p - 2 m_r) P' R' \alpha (D' (2 m_d - m_p - m_r) (m_d - 2 m_p + m_r) P' - 9 \alpha \beta) \gamma \eta - 27 D' P' \alpha^2 \gamma^2 \eta^2) \text{.} $$ If this is zero, the eigenvalues are real and two of them are equal. It is positive if the eigenvalues are distinct real numbers and negative if there are one real and two complex conjugate eigenvalues. (These three outcomes are predicated on the fact that all the parameters of this system are real.)
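The coefficients of this cubic (equivalently, after undoing the substitution $x = D'P'R'\lambda$, the trace, second symmetric function, and determinant of the $3 \times 3$ dynamic block) can be sanity-checked numerically for a particular parameter set; the values below are arbitrary choices of mine, and NumPy is assumed:

```python
# Compare the eigenvalues of the 3x3 dynamic block with the roots of the
# monic characteristic cubic  lambda^3 - tr*lambda^2 + s*lambda - det.
import numpy as np

md, mp, mr = 0.9, 0.8, 0.95
Dm, Pm, Rm = 10.0, 12.0, 8.0          # D', P', R'
al, be, ga, eta = 1.0, 0.5, 0.7, 0.9  # alpha, beta, gamma, eta

M = np.array([[ md,       -al / Pm,  0.0     ],
              [ be / Dm,   mp,      -ga / Rm ],
              [-eta / Dm,  0.0,      mr      ]])

tr  = md + mp + mr
s   = md * mp + md * mr + mp * mr + al * be / (Dm * Pm)
det = md * mp * mr + mr * al * be / (Dm * Pm) - al * ga * eta / (Dm * Pm * Rm)

roots = np.sort_complex(np.roots([1.0, -tr, s, -det]))
eigs  = np.sort_complex(np.linalg.eigvals(M))
print(np.allclose(roots, eigs, atol=1e-8))  # True: the coefficients match
```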

For each eigenvalue,

  • If it is real and greater than $1$ the projection of the initial conditions (the vector $(d_0, p_0, r_0, 1)$) onto its corresponding eigenvector will exponentially diverge.
  • If it is real and equal to $1$, the projection of the initial conditions onto its corresponding eigenvector will be present in the solution forever.
  • If it is real and less than $1$, the projection of the initial conditions onto its corresponding eigenvector will exponentially decay.
  • If it is non-real, then the projection of the initial conditions onto its corresponding eigenvector will oscillate. And that projection exponentially diverges, maintains constant amplitude, or exponentially decays depending on whether the magnitude of the eigenvalue is greater than $1$, equal to $1$, or less than $1$, respectively.

The eigenvector corresponding to the eigenvalue $1$ is $$\left( \frac{-D' \left( (m_p-1)(m_r-1) P' R' + \alpha \gamma \right)}{(m_r-1) R' \left( D' (m_d-1)(m_p-1) P' + \alpha \beta \right) - \alpha \gamma \eta}, \frac{P' \left( (m_r-1) R' \beta - \gamma \left( D' (m_d-1) + \eta \right) \right)}{(m_r-1) R' \left( D' (m_d-1)(m_p-1) P' + \alpha \beta \right) - \alpha \gamma \eta}, \frac{-R' \left( \alpha \beta + (m_p-1) P' \left( D' (m_d-1) + \eta \right) \right)}{(m_r-1) R' \left( D' (m_d-1)(m_p-1) P' + \alpha \beta \right) - \alpha \gamma \eta}, 1 \right) \text{,} $$ so the projection of the initial vector $(d_0, p_0, r_0, 1)$ onto this vector is a constantly preserved component of the solution. ("The DC component" in some settings.)
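This fixed-point formula can be verified numerically for a sample parameter set (the values below are mine) by checking $A v = v$, assuming NumPy:

```python
# Check that the eigenvalue-1 eigenvector formula gives a fixed point of A.
import numpy as np

md, mp, mr = 0.9, 0.9, 0.9
Dm, Pm, Rm = 10.0, 10.0, 10.0         # D', P', R'
al, be, ga, eta = 1.0, 1.0, 1.0, 1.0  # alpha, beta, gamma, eta

A = np.array([[ md,       -al / Pm,  0.0,      1.0],
              [ be / Dm,   mp,      -ga / Rm,  0.0],
              [-eta / Dm,  0.0,      mr,       1.0],
              [ 0.0,       0.0,      0.0,      1.0]])

den = (mr - 1) * Rm * (Dm * (md - 1) * (mp - 1) * Pm + al * be) - al * ga * eta
v = np.array([
    -Dm * ((mp - 1) * (mr - 1) * Pm * Rm + al * ga) / den,
    Pm * ((mr - 1) * Rm * be - ga * (Dm * (md - 1) + eta)) / den,
    -Rm * (al * be + (mp - 1) * Pm * (Dm * (md - 1) + eta)) / den,
    1.0,
])
print(np.allclose(A @ v, v))  # True: v is fixed by A
```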

Explicitly finding the other three eigenvalues and their eigenvectors algebraically is prohibitively messy, so one instead picks values of the parameters, finds the eigensystem numerically, and then projects the initial conditions onto each of the eigenvectors. The resulting solution of the system is a sum of components, one for each eigenvalue, and one checks whether the solution ever becomes negative. As described, the maximum and minimum values of the solution depend linearly on the initial conditions, so you can find (linear) constraints on the initial conditions that ensure only properly bounded solutions are produced.
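Numerically, the procedure just described might look like the following sketch (NumPy assumed; the parameter values and names are mine), which diagonalises the update matrix, projects the initial condition onto the eigenvectors, and reconstructs the trajectory as a sum of eigenvalue powers:

```python
# Eigendecomposition of the update matrix, projection of the initial
# condition onto the eigenbasis, and reconstruction of the trajectory.
import numpy as np

md = mp = mr = 0.9
Dm = Pm = Rm = 10.0                   # D', P', R'
al = be = ga = eta = 1.0              # alpha, beta, gamma, eta

A = np.array([[ md,       -al / Pm,  0.0,      1.0],
              [ be / Dm,   mp,      -ga / Rm,  0.0],
              [-eta / Dm,  0.0,      mr,       1.0],
              [ 0.0,       0.0,      0.0,      1.0]])

x0 = np.array([5.0, 5.0, 5.0, 1.0])   # (d0, p0, r0, 1)

lam, V = np.linalg.eig(A)
c = np.linalg.solve(V, x0)            # coordinates of x0 in the eigenbasis

def state_at(t):
    # x(t) = sum_k c_k * lam_k^t * v_k; imaginary parts cancel for real A, x0.
    return (V @ (c * lam**t)).real

# Cross-check against direct iteration of the recurrence.
x = x0.copy()
for _ in range(50):
    x = A @ x
print(np.allclose(state_at(50), x))   # True: both give the same trajectory
```

With the eigendecomposition in hand, the extreme values of each component over time (and hence the linear constraints on $(d_0, p_0, r_0)$) can be read off from the sizes of the projections $c_k$.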