Adiabatic elimination of variables from coupled first-order differential equations


I have asked a related question on the physics stack exchange website, but have realised that this question is actually more about rigorous maths than physics. Suppose I have the following set of coupled differential equations: $$ \frac{d c_i(t)}{dt} = -i \sum_j \Omega_{ij} \; e^{i \omega_{ij} t} \; \cos(\omega_p t + \phi) \; c_j(t) $$ with a set of initial conditions $c_i(t_0)$. The indices $i$, $j$ run over the number of variables as $1,2,...,n$. The real constant $\omega_{ij}$ is defined as $\omega_{ij} = \omega_i - \omega_j$. The $\omega_p$ and $\Omega_{ij}$ are also real constants, where the latter describe the strength of the coupling.
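For concreteness, the system can be integrated numerically. The following sketch uses hypothetical values for $n$, $\omega_i$, $\Omega_{ij}$, $\omega_p$, $\phi$; note that for real symmetric $\Omega_{ij}$ the coupling matrix $\Omega_{ij}\,e^{i\omega_{ij}t}\cos(\omega_p t + \phi)$ is Hermitian, so $\sum_i |c_i(t)|^2$ is conserved, which gives a useful sanity check on the integration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for illustration (n = 3 levels)
n = 3
rng = np.random.default_rng(0)
omega = np.array([0.0, 1.0, 2.5])          # level frequencies omega_i
R = rng.random((n, n))
Omega = 0.05 * (R + R.T)                    # real symmetric couplings Omega_ij
omega_p, phi = 1.0, 0.3                     # drive frequency and phase

def rhs(t, c):
    # dc_i/dt = -i sum_j Omega_ij e^{i (omega_i - omega_j) t} cos(omega_p t + phi) c_j
    phase = np.exp(1j * (omega[:, None] - omega[None, :]) * t)
    return -1j * np.cos(omega_p * t + phi) * ((Omega * phase) @ c)

c0 = np.zeros(n, complex)
c0[0] = 1.0                                 # start entirely in state 1
sol = solve_ivp(rhs, (0.0, 50.0), c0, rtol=1e-10, atol=1e-12, dense_output=True)

# Hermitian generator => norm conservation (checks the integration)
print(abs(np.linalg.norm(sol.y[:, -1]) - 1.0))
```

The norm deviation should be at the level of the integration tolerance, confirming that the discretized dynamics respects the structure of the equations.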

I am interested in a rigorous procedure of adiabatic elimination for such a system of equations, where - under certain assumptions for the relations between $\omega_{ij}, \omega_p, \Omega_{ij}$ and the initial conditions $c_{i0}$ - the evolution of a subset of the variables (the eliminated variables) is tied to the instantaneous values of the other variables (the essential variables) and the dynamic system can be reduced to the smaller number of essential variables.

EDIT: The following section in brackets was part of my original question, but following a comment I have developed it a bit further, and my question has changed slightly. See the text after the closing bracket.

(Let $c_{e}$ denote one such variable that is a candidate for elimination. As a first step, I have written this in the standard form of a first-order linear differential equation, $$ \frac{d c_e(t)}{dt} + i \Omega_{ee} \cos(\omega_p t + \phi)\;c_e(t) = -i \sum_{j \neq e} \Omega_{ej} \; e^{i \omega_{ej} t} \; \cos(\omega_p t + \phi) \; c_j(t) $$ with the general solution (via the integrating factor $e^{i (\Omega_{ee}/\omega_p) \sin(\omega_p t + \phi)}$): $$ \begin{aligned} c_e(t) = {} & c_e(t_0)\, e^{i \Omega_{ee}/\omega_p \sin(\omega_p t_0 + \phi)}\, e^{-i \Omega_{ee}/\omega_p \sin(\omega_p t + \phi)} \\ & + e^{-i\Omega_{ee}/\omega_p \sin(\omega_p t + \phi)} \int_{t_0}^t -i\, e^{i\Omega_{ee}/\omega_p \sin(\omega_p t^\prime + \phi)} \sum_{j \neq e} \Omega_{ej} \; e^{i \omega_{ej} t^\prime} \; \cos(\omega_p t^\prime + \phi) \; c_j(t^\prime)\, dt^\prime \end{aligned} $$

I am not sure what a suitable way to proceed from here would be. One option is to integrate by parts, but then one has to show that the remaining integral can be neglected. Another is some kind of time-averaging procedure that averages over the fast oscillations and keeps only the slowly varying terms.
In any case, the derivation should be rigorous: no ad-hoc assumptions that do not follow directly from the equations should be required.)
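Before approximating, the exact integrating-factor solution above can at least be verified numerically. The sketch below (hypothetical two-level parameters, with $e$ the second index) solves the full system, then evaluates the formula using the numerically obtained $c_j(t^\prime)$ under the integral and compares with the numerical $c_e(t)$:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Hypothetical two-level example: eliminate index 1 (= e), keep index 0 (= j)
Omega = np.array([[0.0, 0.2], [0.2, 0.4]])  # Omega_ee = 0.4
omega = np.array([0.0, 1.3])
omega_p, phi, t0, T = 0.9, 0.1, 0.0, 10.0

def rhs(t, c):
    phase = np.exp(1j * (omega[:, None] - omega[None, :]) * t)
    return -1j * np.cos(omega_p * t + phi) * ((Omega * phase) @ c)

c0 = np.array([1.0, 0.2 + 0.1j])
sol = solve_ivp(rhs, (t0, T), c0, rtol=1e-11, atol=1e-13, dense_output=True)

# Integrating-factor exponent S(t) = (Omega_ee / omega_p) sin(omega_p t + phi)
S = lambda t: (Omega[1, 1] / omega_p) * np.sin(omega_p * t + phi)

def integrand(s):
    cj = sol.sol(s)[0]                       # numerically known c_j(s)
    return (-1j * np.exp(1j * S(s)) * Omega[1, 0]
            * np.exp(1j * (omega[1] - omega[0]) * s)
            * np.cos(omega_p * s + phi) * cj)

re = quad(lambda s: integrand(s).real, t0, T, limit=400)[0]
im = quad(lambda s: integrand(s).imag, t0, T, limit=400)[0]
ce_formula = (c0[1] * np.exp(1j * S(t0)) * np.exp(-1j * S(T))
              + np.exp(-1j * S(T)) * (re + 1j * im))

print(abs(ce_formula - sol.sol(T)[1]))       # limited only by the tolerances
```

The residual should sit at the quadrature/ODE tolerance level, since the formula is exact; any approximation scheme can then be benchmarked against this reference.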

EDIT: The following treatment is a development of my original question, using projection operator techniques as in the Nakajima-Zwanzig formalism.

The evolution of a state vector in some orthogonal basis is given by $$ \frac{d \mathbf{x}(t)}{dt} = A(t)\,\mathbf{x}(t) $$ Let $\mathcal{P}$ be a projection operator (and shorthand for the corresponding matrix in the given basis) that projects the state onto the subspace of relevant states, and let $\mathcal{Q}$ be the complementary projection operator onto the space of states that one wants to eliminate, so that $\mathcal{P} + \mathcal{Q} = \mathbb{I}$. Then the original equation splits into two coupled equations: $$ \begin{align} \frac{d \mathcal{P}\mathbf{x}}{dt} & = \mathcal{P}A(\mathcal{P}\mathbf{x}) + \mathcal{P}A(\mathcal{Q}\mathbf{x})\\ \frac{d \mathcal{Q}\mathbf{x}}{dt} & = \mathcal{Q}A(\mathcal{Q}\mathbf{x}) + \mathcal{Q}A(\mathcal{P}\mathbf{x}) \end{align} $$ The second equation is formally solved by $$ \mathcal{Q}\mathbf{x}(t) = G(t;t_0)\,\mathcal{Q}\mathbf{x}(t_0) + \int_{t_0}^t G(t;s) \,\mathcal{Q}A(s)\mathcal{P}\, (\mathcal{P}\mathbf{x}(s))\, ds $$ where $G(t;t_0)$ is the fundamental matrix solution of $\frac{d G(t;t_0)}{dt} = \mathcal{Q}A(t)\, G(t;t_0)$ with $G(t_0;t_0) = \mathbb{I}$. If $\mathcal{Q}\mathbf{x}(t_0) = 0$ (either by assumption or by including all states with $x_i(t_0)\neq 0$ in the range of $\mathcal{P}$) and this expression is inserted into the equation for $\mathcal{P}\mathbf{x}$, then an exact equation for the evolution within that subspace is obtained, but at the expense of introducing a memory term:
$$ \frac{d \mathcal{P}\mathbf{x}(t)}{dt} = \mathcal{P}A(t)(\mathcal{P}\mathbf{x}(t)) + \mathcal{P}A(t)\,\int_{t_0}^t G(t;s) \,\mathcal{Q}A(s)\mathcal{P}\, (\mathcal{P}\mathbf{x}(s))\, ds $$ At this point, approximations have to be made. I can see that the propagator $G(t;s)$ can be expanded via the Peano-Baker series or the Magnus expansion, for example, but I am not sure how to handle the integration against $\mathcal{P}\mathbf{x}(s)$ over all earlier times between $t_0$ and $t$.
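The formal solution for $\mathcal{Q}\mathbf{x}$ underlying this memory term can itself be checked numerically. The sketch below uses an arbitrary smooth $A(t)$ (a hypothetical example, not the physical $A$ of the question, since the identity being tested is purely structural): it solves the full system, builds $G(t;s)$ column by column from its defining ODE, and evaluates the memory integral by the trapezoidal rule:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Hypothetical 4-dim system with smooth time-dependent A(t)
n, t0, T = 4, 0.0, 2.0
rng = np.random.default_rng(1)
A0 = 0.5 * rng.standard_normal((n, n))
A1 = 0.5 * rng.standard_normal((n, n))
A = lambda t: A0 + np.sin(t) * A1

P = np.diag([1.0, 1.0, 0.0, 0.0])           # relevant subspace: first two components
Q = np.eye(n) - P

x0 = np.array([1.0, -0.5, 0.0, 0.0])        # chosen so that Q x(t0) = 0
full = solve_ivp(lambda t, x: A(t) @ x, (t0, T), x0,
                 rtol=1e-10, atol=1e-12, dense_output=True)

def G(t, s):
    """Propagator of dG/dt = Q A(t) G with G(s; s) = I (all n^2 entries at once)."""
    if np.isclose(s, t):
        return np.eye(n)
    sol = solve_ivp(lambda u, g: (Q @ A(u) @ g.reshape(n, n)).ravel(),
                    (s, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

# Qx(t) = int_{t0}^t G(t; s) Q A(s) P x(s) ds, evaluated by trapezoidal quadrature
ts = np.linspace(t0, T, 201)
vals = np.array([G(T, s) @ (Q @ A(s) @ P @ full.sol(s)) for s in ts])
residual = np.linalg.norm(trapezoid(vals, ts, axis=0) - Q @ full.sol(T))
print(residual)                              # small: quadrature/ODE error only
```

Since $\mathcal{Q}\mathbf{x}(t_0) = 0$ here, the memory integral alone must reproduce $\mathcal{Q}\mathbf{x}(t)$; the residual measures only the discretization error.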

What is a rigorous way of reducing this integral in a perturbation series expansion in $\mathcal{Q}A\mathcal{P}$? I thought about using the Taylor series $\mathbf{x}(s) = \mathbf{x}(t) + (s - t) \left.\frac{d \mathbf{x}}{dt}\right|_t + ...$, but I am not confident that this is a valid approach. What methods are usually used for this kind of problem?
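For orientation, one standard route (stated here only as a sketch, in the spirit of the Born–Markov and second-order time-convolutionless approximations from open-system theory) observes that the memory term is already second order in the coupling $\mathcal{Q}A\mathcal{P}$. To that order, $G(t;s)$ may be replaced by the propagator $G_Q(t;s)$ generated by $\mathcal{Q}A(t)\mathcal{Q}$ alone, and only the zeroth term of the Taylor series $\mathcal{P}\mathbf{x}(s) \approx \mathcal{P}\mathbf{x}(t)$ is kept, valid when the kernel decays fast compared with the motion of $\mathcal{P}\mathbf{x}$. This yields a time-local equation:

```latex
\frac{d \mathcal{P}\mathbf{x}(t)}{dt}
\approx
\left[
  \mathcal{P}A(t)
  + \int_{t_0}^t \mathcal{P}A(t)\, G_Q(t;s)\, \mathcal{Q}A(s)\mathcal{P}\, ds
\right] \mathcal{P}\mathbf{x}(t)
```

The neglected corrections carry either an extra factor of $\mathcal{Q}A\mathcal{P}$ or an extra time derivative of $\mathcal{P}\mathbf{x}$, which is precisely where conditions on $\Omega_{ij}$, $\omega_{ij}$, $\omega_p$ would enter; bounding these corrections rigorously is what the time-convolutionless (TCL) expansion is designed to do, so that literature may be the place to look for error estimates.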