Help me find the error in the ODE for sensitivity analysis of the parameters of the Lotka-Volterra equations


I have a Lotka-Volterra model on which I want to perform parameter estimation by calculating the gradients with respect to the parameters using an extended ODE system. I know there are other methods for doing this, but I chose to ignore those.

$$\dot{N}_1 = \epsilon_1 N_1 - \gamma_1 N_1 N_2, \qquad \dot{N}_2 = -\epsilon_2 N_2 + \gamma_2 N_1 N_2$$

is a Lotka-Volterra model.

For it I derived the forward sensitivity equations using: $$\dot{s}_{i,j} = \sum_{k=1}^{J} \frac{\partial f_j}{\partial y_k}\, s_{i,k} + \frac{\partial f_j}{\partial p_i} \text{ where } \dot{y}=f(y,p_1, \dots, p_I ,t), \ i \in \{1,\dots,I\}, \ y \in \mathbb{R}^J, \ j \in \{1,\dots,J\} $$

So $s_{i,j}$ is the sensitivity of the $j$-th component of the solution with respect to parameter $i$. This gave me the following extension of the system of differential equations:

(image: the extended ODE system with the eight additional sensitivity equations)
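In matrix form (stacking the sensitivities; the notation here is mine), the same extension reads

$$\dot S = J(y)\,S + F_p, \qquad J_{jk} = \frac{\partial f_j}{\partial y_k}, \qquad (F_p)_{ji} = \frac{\partial f_j}{\partial p_i}, \qquad S_{ji} = s_{i,j},$$

which is the form the Julia implementation below applies column by column.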

I can solve the Lotka-Volterra equations for given initial conditions using:

using DifferentialEquations, Plots

function lotka_voltera(u, p, t)
    N₁, N₂ = u
    ϵ₁, ϵ₂, γ₁, γ₂ = p # p_1 ... p_4
    return  [N₁*ϵ₁ - γ₁*N₁*N₂, -N₂*ϵ₂ + γ₂*N₁*N₂]
end

u₀ = [1.,1.];
p = [3.,1.,0.5,0.4];
t =(0., 3.);

prob = ODEProblem(lotka_voltera, u₀, t, p)
solution = solve(prob);
plot(solution)

and get a plot like this:

(plot of the two populations N₁ and N₂ over time)

However, when I implemented the extension of the system of equations in Julia:

function lotka_sensitivity(du, u, p, t)
    du[:,1] = lotka_voltera(u[:,1],p,t)
    N₁, N₂ = u[:,1] # ; s₁ₚ₁ s₂ₚ₁; s₁ₚ₂ s₂ₚ₂; s₁ₚ₃ s₂ₚ₃; s₁ₚ₄ s₂ₚ₄
    p₁, p₂, p₃, p₄ = p
    
    J = [(p₁-p₂*N₂) (-p₂*N₁); (p₄*N₁) (p₄*N₂-p₃)]
    du[:,2] = (J*u[:,2]) .+ [N₁, 0.]
    du[:,3] = (J*u[:,3]) .+ [-N₂*N₁, 0.]
    du[:,4] = (J*u[:,4]) .+ [0., -N₂]
    du[:,5] = (J*u[:,5]) .+ [0., N₂*N₁]
end 

u₀ₚ = hcat(u₀, zeros(2,4)) # columns 2 to 5 hold the sensitivities; they start at 0 because the initial conditions u₀ are fixed and independent of p
prob_sensitivity = ODEProblem(lotka_sensitivity, u₀ₚ, t, p)

solution_sensitivity = solve(prob_sensitivity);
plot(solution_sensitivity; vars=[(1),(2),(3),#=(4),=#(5),#=(6),=#(7),#=(8),(9),(10)=#])

The solutions I commented out of the plot grow exponentially with time, which shouldn't happen as far as I understand.

(plot of the selected solution components)

What did I do wrong? Possible sources of error:

  1. Did I use the right formula for the sensitivity I want?
  2. Is my extended system correct?
  3. Is it correct to let the sensitivities start at 0?
  4. Did I implement the ODE correctly?
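Sources 1, 2 and 4 can be checked independently of DifferentialEquations.jl with a plain-Julia cross-check: integrate the extended system with a fixed-step RK4 and compare one sensitivity column against a central finite difference of the solution. This is only a sketch; the function names are mine, and it assumes one consistent parameter order p = (ϵ₁, γ₁, ϵ₂, γ₂) throughout:

```julia
# Lotka-Volterra RHS with p = (ϵ₁, γ₁, ϵ₂, γ₂) -- one fixed order everywhere
f(N, p) = [ p[1]*N[1] - p[2]*N[1]*N[2],
           -p[3]*N[2] + p[4]*N[1]*N[2]]

# extended RHS: u = [N; vec(S)] with S the 2×4 sensitivity matrix
function rhs(u, p)
    N = u[1:2]
    S = reshape(u[3:10], 2, 4)
    J  = [p[1]-p[2]*N[2]   -p[2]*N[1];          # ∂f/∂N
          p[4]*N[2]         p[4]*N[1]-p[3]]
    Fp = [N[1]  -N[1]*N[2]   0.0    0.0;        # ∂f/∂p, one column per parameter
          0.0    0.0        -N[2]   N[1]*N[2]]
    return vcat(f(N, p), vec(J*S .+ Fp))
end

# classic fixed-step RK4
function rk4(g, u0, p, T; n = 4000)
    h = T / n
    u = copy(u0)
    for _ in 1:n
        k1 = g(u, p)
        k2 = g(u .+ h/2 .* k1, p)
        k3 = g(u .+ h/2 .* k2, p)
        k4 = g(u .+ h .* k3, p)
        u = u .+ h/6 .* (k1 .+ 2 .* k2 .+ 2 .* k3 .+ k4)
    end
    return u
end

p  = [3.0, 0.5, 1.0, 0.4]
u0 = vcat([1.0, 1.0], zeros(8))          # sensitivities start at 0
S  = reshape(rk4(rhs, u0, p, 3.0)[3:10], 2, 4)

# central finite difference of N(T) with respect to p₁
h = 1e-6
pp = copy(p); pp[1] += h
pm = copy(p); pm[1] -= h
fd = (rk4(f, [1.0, 1.0], pp, 3.0) .- rk4(f, [1.0, 1.0], pm, 3.0)) ./ (2h)

@assert maximum(abs.(S[:, 1] .- fd)) < 1e-4 * (1 + maximum(abs.(fd)))
```

If the extended system is consistent, every column of S should pass the same check against the corresponding perturbed solve.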

The complete code is in a Pluto notebook, which is a standalone file for reference and also lists the necessary dependencies. In vorschlag3 I try to calculate the integrated squared error of the first component of the solution against one with different parameters and attempt a gradient descent to minimize it. This also fails.

There are 3 answers below.

BEST ANSWER

At some point you tried to make the code more readable with Greek letters, and in the course of that change you switched the order of the parameters, but did not carry the change through everywhere.

In the Volterra-Lotka equations, the first two parameters are the birth (B) and death (R) rates, and the second pair are the interaction rates. In the sensitivity formulas you treat the parameters as if the first pair belonged to the first (B) equation and the second pair to the second (R). So in one place you have to switch the roles of the second and third parameters; I think the least invasive change is from

ϵ₁, ϵ₂, γ₁, γ₂ = p # p_1 ... p_4

to

ϵ₁, γ₁, ϵ₂, γ₂ = p # p_1 ... p_4

along with the same switch in the definition of the parameter array.
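A quick way to see the effect of the switch (a sketch; `lv`, `Janalytic` and `fd_jac` are my own helper names) is to compare a finite-difference Jacobian of the model with the analytic Jacobian the sensitivity equations assume:

```julia
# The model exactly as in lotka_voltera, with its original unpacking
# ϵ₁, ϵ₂, γ₁, γ₂ = p and p = [3, 1, 0.5, 0.4]:
lv(u, p) = [ u[1]*p[1] - p[3]*u[1]*u[2],
            -u[2]*p[2] + p[4]*u[1]*u[2]]

# analytic Jacobian ∂f/∂u under the assumption p = (ϵ₁, γ₁, ϵ₂, γ₂),
# which is the convention the sensitivity formulas use
Janalytic(u, p) = [p[1]-p[2]*u[2]  -p[2]*u[1];
                   p[4]*u[2]        p[4]*u[1]-p[3]]

# central finite differences (exact up to roundoff for this quadratic RHS)
function fd_jac(g, x; h = 1e-6)
    J = zeros(2, 2)
    for k in 1:2
        e = zeros(2); e[k] = h
        J[:, k] = (g(x .+ e) .- g(x .- e)) ./ (2h)
    end
    return J
end

u = [2.0, 0.5]
Jtrue = fd_jac(x -> lv(x, [3.0, 1.0, 0.5, 0.4]), u)

# same numbers fed in with the mismatched order: clearly the wrong Jacobian
@assert maximum(abs.(Jtrue .- Janalytic(u, [3.0, 1.0, 0.5, 0.4]))) > 0.1

# after the suggested switch (ϵ₁, γ₁, ϵ₂, γ₂ = 3, 0.5, 1, 0.4) they agree
@assert maximum(abs.(Jtrue .- Janalytic(u, [3.0, 0.5, 1.0, 0.4]))) < 1e-6
```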

Another answer:

Too long for a comment: have a look at these sensitivity equations, obtained with a symbolic processor, and check them against yours.

$$ \left\{ \begin{array}{l} \underset{1}{\overset{1}{\text{sx}}}'(t)=-p_2 \underset{1}{\overset{2}{\text{sx}}}(t)N_b(t)+\underset{1}{\overset{1}{\text{sx}}}(t) \left(p_1-p_2 N_r(t)\right)+N_b(t)\\ \underset{1}{\overset{2}{\text{sx}}}'(t)=\underset{1}{\overset{2}{\text{sx}}}(t) \left(p_4 N_b(t)-p_3\right)+p_4 \underset{1}{\overset{1}{\text{sx}}}(t) N_r(t) \\ \end{array} \right. $$

$$ \left\{ \begin{array}{l} \underset{2}{\overset{1}{\text{sx}}}'(t)=-p_2 \underset{2}{\overset{2}{\text{sx}}}(t) N_b(t)+\underset{2}{\overset{1}{\text{sx}}}(t) \left(p_1-p_2 N_r(t)\right)-N_b(t)N_r(t) \\ \underset{2}{\overset{2}{\text{sx}}}'(t)=\underset{2}{\overset{2}{\text{sx}}}(t) \left(p_4 N_b(t)-p_3\right)+p_4 \underset{2}{\overset{1}{\text{sx}}}(t) N_r(t) \\ \end{array} \right. $$

$$ \left\{ \begin{array}{l} \underset{3}{\overset{1}{\text{sx}}}'(t)=\underset{3}{\overset{1}{\text{sx}}}(t) \left(p_1-p_2 N_r(t)\right)-p_2 \underset{3}{\overset{2}{\text{sx}}}(t) N_b(t) \\ \underset{3}{\overset{2}{\text{sx}}}'(t)=\underset{3}{\overset{2}{\text{sx}}}(t) \left(p_4 N_b(t)-p_3\right)+p_4 \underset{3}{\overset{1}{\text{sx}}}(t) N_r(t)-N_r(t) \\ \end{array} \right. $$

$$ \left\{ \begin{array}{l} \underset{4}{\overset{1}{\text{sx}}}'(t)=\underset{4}{\overset{1}{\text{sx}}}(t) \left(p_1-p_2 N_r(t)\right)-p_2 \underset{4}{\overset{2}{\text{sx}}}(t) N_b(t) \\ \underset{4}{\overset{2}{\text{sx}}}'(t)=\underset{4}{\overset{2}{\text{sx}}}(t) \left(p_4 N_b(t)-p_3\right)+p_4 \underset{4}{\overset{1}{\text{sx}}}(t) N_r(t)+N_b(t) N_r(t) \\ \end{array} \right. $$
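A numeric spot-check (arbitrary values, my own variable names) that these equations are the component form of $\dot S = J\,S + \partial f/\partial p$, with $f_1 = p_1 N_b - p_2 N_b N_r$ and $f_2 = -p_3 N_r + p_4 N_b N_r$:

```julia
Nb, Nr = 1.3, 0.7
p = [3.0, 0.5, 1.0, 0.4]
S = [0.2 -0.1  0.4  0.3;     # row = solution component, column = parameter
     0.5  0.6 -0.2  0.1]

J  = [p[1]-p[2]*Nr  -p[2]*Nb;      # ∂f/∂(N_b, N_r)
      p[4]*Nr        p[4]*Nb-p[3]]
Fp = [Nb  -Nb*Nr   0.0   0.0;      # ∂f/∂(p₁ … p₄)
      0.0  0.0    -Nr    Nb*Nr]
M = J*S .+ Fp

# first displayed system, transcribed term by term
sx11 = -p[2]*S[2,1]*Nb + S[1,1]*(p[1]-p[2]*Nr) + Nb
sx21 =  S[2,1]*(p[4]*Nb-p[3]) + p[4]*S[1,1]*Nr
@assert abs(M[1,1] - sx11) < 1e-12
@assert abs(M[2,1] - sx21) < 1e-12
```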

Another answer:

As others have mentioned, it looks like you switched the order of your parameters when you unpacked them in the lotka_voltera function.

You may find it helpful to use ComponentArrays.jl and UnPack.jl for problems like this so you can unpack parameters by name. The benefit of ComponentArrays over NamedTuples is that you can also update your Jacobians by name. As you'll see below, you can take the outer product of the state vector with itself and with the parameter vector to create state and parameter Jacobian caches, respectively. Done this way, you can index into the Jacobians with symbolic names instead of trying to remember which index corresponds to which variable.

using ComponentArrays, DifferentialEquations, Plots, UnPack


## Functions
function lotka!(D, x, p, t)
    @unpack NB, NR = x
    @unpack ϵ₁, ϵ₂, γ₁, γ₂ = p

    D.NB =  ϵ₁*NB - γ₁*NR*NB
    D.NR = -ϵ₂*NR + γ₂*NR*NB
    return nothing
end

function param_jac!(J, x, p)
    @unpack NB, NR = x
    @unpack ϵ₁, ϵ₂, γ₁, γ₂ = p

    J[Val(:NB), Val(:ϵ₁)] =  NB
    J[Val(:NB), Val(:γ₁)] = -NR*NB
    J[Val(:NR), Val(:ϵ₂)] = -NR
    J[Val(:NR), Val(:γ₂)] =  NR*NB
    return nothing
end

function state_jac!(J, x, p)
    @unpack NB, NR = x
    @unpack ϵ₁, ϵ₂, γ₁, γ₂ = p

    J[Val(:NB), Val(:NB)] =  ϵ₁ - γ₁*NR
    J[Val(:NB), Val(:NR)] = -γ₁*NB
    J[Val(:NR), Val(:NB)] =  γ₂*NR
    J[Val(:NR), Val(:NR)] = -ϵ₂ + γ₂*NB
    return nothing
end

function lotka_sensitivity!(D, vars, p, t)
    @unpack x, s = vars
    @unpack params, jacobians = p
    Jp, Jx = jacobians.param, jacobians.state

    param_jac!(Jp, x, params)
    state_jac!(Jx, x, params)

    lotka!(D.x, x, params, t)
    
    D.s .= Jx*s .+ Jp
    return nothing
end


## Inputs
x0 = ComponentArray(NB=1.0, NR=1.0)
params = ComponentArray(ϵ₁=3.0, ϵ₂=1.0, γ₁=0.5, γ₂=0.4)

s0 = x0 * params' * 0
jacobians = (
    state = x0 * x0' * 0,
    param = x0 * params' * 0,
)

u0 = ComponentArray(x=x0, s=s0)
p = (params=params, jacobians=jacobians)

t = (0.0, 3.0)


## Solve
prob = ODEProblem(lotka!, x0, t, params)
sol = solve(prob)

prob_sens = ODEProblem(lotka_sensitivity!, u0, t, p)
sol_sens = solve(prob_sens)


## Plot
p1 = plot(sol, vars=1:2)
plot!(sol_sens, vars=1:2, linestyle=:dash)

p2 = plot(sol_sens)

plot(p1, p2, layout=(2,1), size=(800,600))

(plots of the solution and the sensitivities)