I have trouble understanding vector calculus notation in a PDE book, much less matrix analysis. There is a proof very early on in Majda's book "Vorticity and Incompressible Flow" that I am not following:
Let $X$ be the particle-trajectory mapping (taking values in $\Bbb R^n$, for $n=2,3$) of a smooth velocity field $v$, i.e. for every $\alpha$ it solves the ODE $$\begin{cases}\frac{d}{dt}X(\alpha,t) &= v(X(\alpha,t),t) \\ X(\alpha,0) &= \alpha\end{cases}$$ Then the Jacobian determinant $$J(\alpha,t) := \det\left[ \frac{\partial X^i}{\partial \alpha^{\ j}}\right]_{i,j}(\alpha,t)$$ satisfies $$ \frac{\partial}{\partial t} J(\alpha,t) = \text{div }v|_{(X(\alpha,t),t)}\,J(\alpha,t).$$
There is only one line I can't follow: $$\frac{\partial J}{\partial t} = \sum_{i,j} A^j_i\frac{\partial}{\partial t} \frac{\partial X^i}{\partial \alpha^{\ j}}(\alpha,t)$$ where $A^j_i$ is the cofactor (signed minor) of the entry $ \frac{\partial X^i}{\partial \alpha^{\ j}}$ in the Jacobian matrix. Can someone explain where this comes from?
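For what it's worth, the identity in that line can be checked symbolically. Below is a small sanity check I wrote (not from the book), using sympy's `cofactor` in the role of $A^j_i$ and a generic $3\times 3$ matrix of unspecified functions of $t$:

```python
# Symbolic sanity check of Jacobi's formula for a generic 3x3 matrix A(t):
#   d/dt det A(t) = sum_{i,j} C_ij * d/dt A_ij,
# where C_ij is the (i,j) cofactor (signed minor) of A.
import sympy as sp

t = sp.symbols('t')
n = 3
# Generic smooth entries a_ij(t), left as unspecified functions of t.
A = sp.Matrix(n, n, lambda i, j: sp.Function(f'a{i}{j}')(t))

lhs = sp.diff(A.det(), t)
rhs = sum(A.cofactor(i, j) * sp.diff(A[i, j], t)
          for i in range(n) for j in range(n))

assert sp.expand(lhs - rhs) == 0  # identity holds for arbitrary entries
```

The same check passes for any $n$, though the expanded expressions grow factorially, so small $n$ is advisable.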
edit: For completeness, re Normal Human's answer, I verify below the Leibniz rule for multilinear functions:
\begin{align} \det A(s) = \det\begin{pmatrix}r_1(s) \\ \vdots \\ r_n(s)\end{pmatrix} &= \det\begin{pmatrix}r_1(s) - r_1(t) \\ r_2(s) \\ \vdots \\ r_n(s)\end{pmatrix} + \det\begin{pmatrix}r_1(t) \\ r_2(s) \\ \vdots \\ r_n(s)\end{pmatrix} \\ &\ \ \vdots \\ &= \sum_{k=1}^{n} \det\begin{pmatrix} r_1(t) \\ \vdots \\ r_{k-1}(t) \\ r_k(s) - r_k(t) \\ r_{k+1}(s) \\ \vdots \\ r_n(s)\end{pmatrix} + \det A(t) \end{align}
Subtracting $\det A(t)$, dividing by $s-t$ (which we pull into the $k$th row of each summand by linearity), and letting $s \to t$ gives the "Leibniz rule".
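This row-wise Leibniz rule can likewise be verified symbolically; here is a short sympy check of my own (not part of the original argument):

```python
# Check the row-wise Leibniz rule for a generic 3x3 matrix of functions r_ij(t):
#   d/dt det A(t) = sum_k det( A(t) with its k-th row differentiated ).
import sympy as sp

t = sp.symbols('t')
n = 3
A = sp.Matrix(n, n, lambda i, j: sp.Function(f'r{i}{j}')(t))

lhs = sp.diff(A.det(), t)
rhs = sp.S.Zero
for k in range(n):
    Ak = A.copy()
    Ak[k, :] = sp.diff(A[k, :], t)  # differentiate only row k
    rhs += Ak.det()

assert sp.expand(lhs - rhs) == 0
```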
I believe I came up with a separate proof of this proposition, which I include partly as a way for me to remember it. By the permutation definition of the determinant, \begin{align} \frac{\partial}{\partial t}J &= \sum_\sigma \text{sgn } \sigma\, \frac{\partial}{\partial t} \prod_{i} \frac{\partial X^i}{\partial \alpha^{\sigma(i)}} \end{align} The Leibniz rule tells us that $ \frac{\partial}{\partial t}\prod_{i} \frac{\partial X^i}{\partial \alpha^{\sigma(i)}} = \sum_k \frac{\partial}{\partial t} \frac{\partial X^k}{\partial \alpha^{\sigma(k)} } \prod_{i\neq k}\frac{\partial X^i}{\partial \alpha^{\sigma(i)}}$. Writing $j=\sigma(k)$ as shorthand and interchanging derivatives by smoothness of $X$,
$$\frac{\partial^2 X^k}{\partial t \partial \alpha^{\ j}} = \frac{\partial}{\partial \alpha^{\ j}}\frac{\partial X^k}{\partial t } = \frac{\partial}{\partial \alpha^{\ j}}v^k(X(\alpha,t),t) = \sum_\ell \partial_{x^\ell}v^k \frac{\partial X^\ell}{\partial \alpha^{\ j}}$$ Making $\sum_\ell$ the outermost sum, the $\ell = k$ terms give exactly the result we want, since together they are $$ \sum_k \sum_\sigma \text{sgn } \sigma\, \partial_{x^k}v^k \frac{\partial X^k}{\partial \alpha^{\sigma(k)}} \prod_{i\neq k} \frac{\partial X^i}{\partial \alpha^{\sigma(i)}} = \sum_k \partial_{x^k}v^k\, J(\alpha,t) = \text{div } v|_{(X(\alpha,t),t)}\, J(\alpha,t)$$ For $\ell\neq k$, the inner sum over $\sigma$ is the determinant of the same matrix but with its $k$th row replaced by (a multiple of) its $\ell$th row. Since that matrix has a repeated row, its determinant is $0$. This proves the result.
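As a concrete illustration of the proposition (my own example, not from the book): for a linear velocity field $v(x,t) = Mx$ with $M$ constant, the trajectories are $X(\alpha,t) = e^{tM}\alpha$, so $J = \det e^{tM}$ and $\text{div }v = \text{tr } M$, and the proposition reduces to the familiar identity $\det e^{tM} = e^{t\,\text{tr } M}$:

```python
# Verify dJ/dt = (div v) J for the linear field v(x) = M x, where
# X(alpha, t) = exp(tM) alpha, J = det exp(tM), and div v = tr M.
import sympy as sp

t = sp.symbols('t')
M = sp.Matrix([[1, 2], [0, 3]])  # an arbitrary constant 2x2 matrix

X_grad = (t * M).exp()           # Jacobian matrix dX/dalpha = exp(tM)
J = X_grad.det()

lhs = sp.diff(J, t)
rhs = M.trace() * J              # div v = tr M

assert sp.simplify(lhs - rhs) == 0
```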
It's called Jacobi's formula for the derivative of the determinant. The Wikipedia article gives a detailed derivation full of indices, so I'll balance that with a formula-free sketch: