Taylor expansion in linear stability analysis of diffusion-driven instability


We are shown an (apparently trivial) Taylor expansion for a system of reaction-diffusion equations (re Turing instability), where $D$ is a constant: $$ \frac{\partial\vec u}{\partial t} = D\nabla^2\vec u\,+\,\vec F (\vec u) $$ For a small perturbation $\vec w$ about a steady state $\vec u_s$ (such that $\vec F(\vec u_s)=0$), we write: $$ \vec u = \vec u_s+\vec w $$ And the first-order Taylor expansion is given as: $$ \frac{\partial\vec w}{\partial t} \approx D\nabla^2\vec w\,+\,\vec F (\vec u_s) + \mathbf{J}\vec w $$ Where $\mathbf{J}$ is the Jacobian of $\vec F$ evaluated at $\vec u_s$, and of course the second term above is zero.

I'm a little confused as to how this final equation is reached. I tried working out the Taylor expansion myself, but got $\vec u_s$ in the first term instead of $\vec w$.


Solved the issue myself. Taking the Taylor expansion of $\vec F(\vec u_s+\vec w) \approx \vec F(\vec u_s) + \mathbf{J}\vec w$, we obtain the last two terms in the final expression above. Fine. What I didn't understand was why the final expression omitted the term $D\nabla^2\vec u_s$. In the Turing case the steady state is spatially uniform, so this term is zero and we get the result stated.
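For completeness, here is a sketch of the full substitution, under the assumptions stated above (uniform steady state, so $\partial_t \vec u_s = 0$ and $\nabla^2 \vec u_s = 0$):

```latex
% Substitute u = u_s + w into the reaction-diffusion equation:
\frac{\partial}{\partial t}(\vec u_s + \vec w)
  = D\nabla^2(\vec u_s + \vec w) + \vec F(\vec u_s + \vec w)

% Left side: \partial_t \vec u_s = 0 (steady state), leaving \partial_t \vec w.
% Right side: \nabla^2 \vec u_s = 0 (spatially uniform steady state),
% and Taylor-expand F about u_s to first order:
\vec F(\vec u_s + \vec w) \approx \vec F(\vec u_s) + \mathbf{J}\vec w

% Combining the pieces gives the linearised equation:
\frac{\partial \vec w}{\partial t}
  \approx D\nabla^2 \vec w + \vec F(\vec u_s) + \mathbf{J}\vec w
```

Since $\vec F(\vec u_s) = 0$ by definition of the steady state, the linearised system is simply $\partial_t \vec w \approx D\nabla^2 \vec w + \mathbf{J}\vec w$, which is the starting point for the usual Turing-mode analysis.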