Everything I've read about performing a Floquet analysis involves numerically solving the system over one period, with the identity matrix as the initial condition. My system is fairly complex, has multiple delays, and exhibits long transient dynamics before reaching equilibrium.

My problem is that the results of the Floquet analysis do not correspond to the results of a long simulation: the asymptotic solution of the initial value problem settles on a different equilibrium than the one suggested by the leading Floquet exponent. However, if I perform the Floquet analysis by numerically solving the system over, say, 10 periods or more, the result agrees with the simulation. I haven't found any examples of someone who has done this, or any source saying it is valid. Is integrating over multiple periods a violation of the method? I'd also like to know whether this discrepancy raises red flags that I've made a mistake in my model or code somewhere, although I've checked both thoroughly.
I've implemented the numerical routine described in Lemma 2.5 of the following paper, adapted for DDEs:
**Lemma 2.5.** Assume that $(E, E^+)$ is an ordered Banach space with $E^+$ being normal and $\mathrm{Int}(E^+) \neq \emptyset$, equipped with the norm $\|\cdot\|_E$. Let $L$ be a positive bounded linear operator. Choose $v_0 \in \mathrm{Int}(E^+)$ and define
$$a_n = \|L v_{n-1}\|_E, \qquad v_n = \frac{L v_{n-1}}{a_n}, \qquad \forall n \geq 1.$$
If $\lim_{n \to +\infty} a_n$ exists, then $r(L) = \lim_{n \to +\infty} a_n$.
The leading Floquet exponent is then calculated as $\mu = \frac{\log r(L)}{\omega}$, where $\omega$ is the period (here, 1 year).
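For concreteness, here is a minimal sketch of the iteration in Lemma 2.5 and the exponent calculation. The `apply_L` callable and the stand-in matrix are assumptions for illustration: in my actual code, applying $L$ means integrating the linearized DDE over one period, mapping a history segment forward, rather than a matrix-vector product.

```python
import numpy as np

def spectral_radius_power_iteration(apply_L, v0, tol=1e-10, max_iter=1000):
    """Lemma 2.5 iteration: a_n = ||L v_{n-1}||, v_n = L v_{n-1} / a_n.

    apply_L : callable applying the positive bounded linear operator L
              (for a periodic DDE, this would solve the linearized system
              over one period with v as the initial history segment).
    v0      : starting vector, taken in the interior of the positive cone.
    Returns the limit of a_n, which equals r(L) when the limit exists.
    """
    v = np.asarray(v0, dtype=float)
    a_prev = np.inf
    for _ in range(max_iter):
        w = apply_L(v)
        a = np.linalg.norm(w)   # a_n = ||L v_{n-1}||
        v = w / a               # v_n = L v_{n-1} / a_n
        if abs(a - a_prev) < tol:
            break               # a_n has converged to r(L)
        a_prev = a
    return a

# Hypothetical stand-in for the period map: a positive matrix
# with known spectral radius 3 (eigenvalues 3 and 1).
L = np.array([[2.0, 1.0],
              [1.0, 2.0]])
r = spectral_radius_power_iteration(lambda v: L @ v, v0=np.array([1.0, 0.5]))

omega = 1.0                 # the period (1 year)
mu = np.log(r) / omega      # leading Floquet exponent, mu = log(r(L)) / omega
```

In the toy example, `r` converges to 3 and `mu` to $\log 3 > 0$, i.e. the periodic solution would be unstable. The open question above is whether `apply_L` may legitimately integrate over several periods instead of one (which computes $r(L^k)$ rather than $r(L)$).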