Simulating an ODE


I am trying to simulate an ODE system written in the caption of Fig. 4 of the paper Contraction of monotone phase-coupled oscillators:

$$\dot\theta_i=\sum_{j}A_{ij}\cdot\eta(\theta_i-\theta_j)$$

where $A$ is the adjacency matrix of the complete graph, i.e., $A_{ij}=A_{ji}=1$ for all $i\neq j\in[n]$ and $A_{ii}=0$ for $i\in[n]$, and $\eta(x)=-\frac{1}{N(0.1+e^x)}$.
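For reference, the right-hand side can be evaluated directly from these definitions; here is a minimal NumPy sketch (my own names, taking $N=n$). Note that $\eta(0)=-\frac{1}{1.1\,n}\neq 0$, so a synchronized state does not sit still: every oscillator drifts at the common rate $(n-1)\,\eta(0)$, and synchrony means equal phases in that co-rotating frame.

```python
import numpy as np

def eta(x, n):
    # eta(x) = -1 / (n * (0.1 + exp(x))), as in the formula above (N = n)
    return -1.0 / (n * (0.1 + np.exp(x)))

def rhs(theta):
    # dtheta_i/dt = sum_j A_ij * eta(theta_i - theta_j) on the complete graph
    n = len(theta)
    A = np.ones((n, n)) - np.eye(n)         # A_ij = 1 for i != j, A_ii = 0
    diff = theta[:, None] - theta[None, :]  # diff[i, j] = theta_i - theta_j
    return (A * eta(diff, n)).sum(axis=1)
```

At an exactly synchronized state all components of the right-hand side are equal (and nonzero), confirming the common drift.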

According to the authors, the system is almost globally synchronizing, i.e., all $\theta_i$ approach the same value (modulo $2\pi$). However, in my simulation it converges to the so-called splay state.
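Synchrony versus splay is easy to quantify with the Kuramoto order parameter $r=\frac{1}{n}\left|\sum_j e^{i\theta_j}\right|$ (the same quantity as compute_order_parameter in the code below): $r=1$ at exact synchrony and $r=0$ for a perfect splay state. A quick Python check:

```python
import numpy as np

def order_parameter(theta):
    # r = |sum_j exp(i*theta_j)| / n: 1 for identical phases, 0 for a splay state
    theta = np.asarray(theta)
    return abs(np.exp(1j * theta).sum()) / len(theta)
```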

[plot: simulated trajectories converging to a splay state]

I suspect there is something wrong in my code. At first I thought I had implemented the negative of $\eta$, so I changed it to its negative, but the result below does not synchronize either.

[plot: trajectories with the negated coupling function, still not synchronized]

Could someone help me out? Many thanks in advance!

MATLAB code:

% Parameters
num_steps = 1;          % number of steps for varying noise_level
initial_times = 1;      % number of random initial conditions (was undefined)
learning_rate = 0.1;
max_iterations = 1200000;
tolerance = 1e-10;      % order-parameter tolerance for deciding synchrony
n = 40;

for j = 1:num_steps
    for l = 1:initial_times % loop over different initial conditions
        A = ones(n) - eye(n);               % complete graph, A_ii = 0
        initial_thetas = pi/2 * rand(1, n); % random initial phases in [0, pi/2]

        % ODE solver
        options = odeset('RelTol', 1e-10, 'AbsTol', 1e-10);
        [t, thetas] = ode45(@(t, y) ode_optimization(t, y, A), ...
            [0, max_iterations * learning_rate], initial_thetas, options);
        plot(t, thetas)
    end
end

%% plot thetas
figure
plot(t, thetas, 'k');
xlabel('Time');
ylabel('Thetas');
title('Evolution of Thetas Over Time');
hold off

function order_parameter = compute_order_parameter(thetas)
    n = length(thetas);
    S = sum(exp(1i * thetas));
    order_parameter = abs(S) / n;
end

function y = g(x, n)
    % eta(x) = -1/(n*(0.1+exp(-x)))
    % note: this uses exp(-x), while the formula above uses exp(x)
    y = -1 / (n * (0.1 + exp(-x)));
end

function gradient = compute_gradient(thetas, A)
    % visits each pair once: adds g(theta_i - theta_j) to entry i and
    % subtracts the same value from entry j, which matches the ODE only
    % if g is an odd function
    n = length(thetas);
    gradient = zeros(size(thetas));
    for i = 1:n-1
        for j = i+1:n
            diff = g(thetas(i) - thetas(j), n);
            gradient(i) = gradient(i) + A(i,j) * diff;
            gradient(j) = gradient(j) - A(i,j) * diff;
        end
    end
end

function dthetas_dt = ode_optimization(t, thetas, A)
    gradient = compute_gradient(thetas, A);
    dthetas_dt = -gradient; % note the extra minus sign relative to the ODE above
end
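One way to localize a bug like this is to compare the pairwise accumulation in compute_gradient against a direct evaluation of $\sum_{j\neq i}\eta(\theta_i-\theta_j)$. The pairwise version reuses g(theta_i - theta_j) with a flipped sign for oscillator $j$, which is equivalent only when the coupling function is odd, and $\eta$ here is not. A Python sketch of that comparison (my own code, using the $e^x$ form of $\eta$ from the formula above):

```python
import numpy as np

def eta(x, n):
    # eta(x) = -1 / (n * (0.1 + exp(x)))
    return -1.0 / (n * (0.1 + np.exp(x)))

def rhs_direct(theta):
    # dtheta_i/dt = sum_{j != i} eta(theta_i - theta_j), complete graph
    n = len(theta)
    diff = theta[:, None] - theta[None, :]
    return eta(diff, n).sum(axis=1) - eta(0.0, n)  # drop the j = i term

def rhs_pairwise(theta):
    # mirrors compute_gradient above: adds eta(theta_i - theta_j) to i and
    # subtracts the same value from j -- valid only for odd coupling functions
    n = len(theta)
    out = np.zeros(n)
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = eta(theta[i] - theta[j], n)
            out[i] += d
            out[j] -= d
    return out
```

For a generic phase vector the two disagree, which pins the discrepancy on the antisymmetric update rather than on the overall sign of $\eta$.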