I've been independently working through Bishop's machine learning textbook (*Pattern Recognition and Machine Learning*, 2006), and at the end of Appendix C (Properties of Matrices) there's a proposition (C.22), stated as follows on page 701, where $x$ is some scalar, $\boldsymbol{A}$ is a square non-singular matrix, $\ln$ is the natural logarithm, and $\mbox{Tr}$ is the usual trace function on matrices ($\mbox{Tr}(\boldsymbol{A})=\sum_i A_{ii}$):
$\frac{\delta}{\delta x} \ln |\boldsymbol{A}| = \mbox{Tr} \left( \boldsymbol{A}^{-1} \frac{\delta \boldsymbol{A}}{\delta x}\right)$
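Before diving into the derivation, (C.22) is easy to sanity-check numerically. The sketch below (Python with NumPy, using a hypothetical parameterization $A(x) = B + xC$, with $B$, $C$ fixed symmetric matrices chosen so that $|A(x)| > 0$) compares a finite-difference estimate of the left-hand side against the exact trace on the right:

```python
import numpy as np

# Hypothetical test case: A(x) = B + x*C, with B symmetric positive definite
# and C a symmetric perturbation direction, so |A(x)| > 0 near x = 1.
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
B = B @ B.T + 2 * n * np.eye(n)    # symmetric positive definite
C = rng.standard_normal((n, n))
C = C + C.T                        # symmetric

A = lambda x: B + x * C
dA_dx = C                          # exact derivative of A(x) w.r.t. x

x, h = 1.0, 1e-6

# Left side: central finite difference of ln|A(x)|
lhs = (np.log(np.linalg.det(A(x + h))) - np.log(np.linalg.det(A(x - h)))) / (2 * h)

# Right side: Tr(A^{-1} dA/dx), evaluated exactly at x
rhs = np.trace(np.linalg.solve(A(x), dA_dx))

print(lhs, rhs)                    # the two values agree closely
```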
It is stated that C.22 can be verified with some previously derived theorems, and then promptly left as an exercise to the reader. After spending a few hours on it, checking the errata, trawling online for a proof (although my google-fu may be too weak: it turns out typing "trace log derivative discriminant inverse proof" is not the best idea), and then enlisting a friend for an evening, I'm still missing a final (?) step.
First, the previously derived results (referred to in the text as C.33, C.45, C.46, C.47), where $\lambda_i$ is the $i$-th eigenvalue of the matrix $\boldsymbol{A}$, $\boldsymbol{u}_i$ is the associated eigenvector, $(\cdot)^T$ denotes the transpose, and $|\boldsymbol{A}|$ is the determinant of the matrix $\boldsymbol{A}$:
(C.33) $\boldsymbol{u}_i^T \boldsymbol{u}_j = I_{ij}$ (where $I_{ij}$ is the $(i,j)$ element of the identity matrix, i.e. the Kronecker delta)
(C.45) $\boldsymbol{A} = \sum \lambda_i \boldsymbol{u}_i \boldsymbol{u}_i^T$
(C.46) $\boldsymbol{A}^{-1} = \sum \frac{1}{\lambda_i} \boldsymbol{u}_i \boldsymbol{u}_i^T$
(C.47) $|\boldsymbol{A}| = \prod \lambda_i$
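These four identities can themselves be checked numerically for a random symmetric matrix (a quick sketch, not from Bishop's text; `numpy.linalg.eigh` returns the eigenvalues $\lambda_i$ and the orthonormal eigenvectors $\boldsymbol{u}_i$ as columns):

```python
import numpy as np

# Random symmetric test matrix, shifted so its eigenvalues are safely nonzero
# (holds for this seed; needed for C.46 and C.47).
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M + M.T + 8 * np.eye(4)

lam, U = np.linalg.eigh(A)         # columns of U are the eigenvectors u_i

# (C.33): u_i^T u_j = I_ij, i.e. U^T U = I
assert np.allclose(U.T @ U, np.eye(4))

# (C.45): A = sum_i lam_i u_i u_i^T
assert np.allclose(A, sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(4)))

# (C.46): A^{-1} = sum_i (1/lam_i) u_i u_i^T
assert np.allclose(np.linalg.inv(A),
                   sum(np.outer(U[:, i], U[:, i]) / lam[i] for i in range(4)))

# (C.47): |A| = prod_i lam_i
assert np.allclose(np.linalg.det(A), np.prod(lam))
print("all four identities check out")
```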
And then my derivation thus far, starting with the left side:
$\frac{\delta}{\delta x} \ln\left(|\boldsymbol{A}|\right) = \frac{\delta}{\delta x} \ln \left(\prod \lambda_i \right)$ (via C.47)
$= \frac{\delta}{\delta x} \sum \ln \lambda_i$
$= \sum \frac{\delta}{\delta x} \ln \lambda_i$
$= \sum \frac{1}{\lambda_i} \frac{\delta \lambda_i}{\delta x}$ (applying the chain rule)
Switching to the right side:
$\mbox{Tr}\left( \boldsymbol{A}^{-1} \frac{\delta}{\delta x} \boldsymbol{A} \right)$ $= \mbox{Tr}\left( \left(\sum_i \frac{1}{\lambda_i} \boldsymbol{u}_i \boldsymbol{u}_i^T \right) \frac{\delta}{\delta x} \left(\sum_j \lambda_j \boldsymbol{u}_j \boldsymbol{u}_j^T \right) \right)$ (via C.45, C.46)
$= \mbox{Tr}\left(\sum_i \sum_j \frac{1}{\lambda_i} \boldsymbol{u}_i \boldsymbol{u}_i^T \frac{\delta}{\delta x} \left( \lambda_j \boldsymbol{u}_j \boldsymbol{u}_j^T \right) \right)$
$= \mbox{Tr}\left(\sum_i \sum_j \frac{1}{\lambda_i} \boldsymbol{u}_i \boldsymbol{u}_i^T \left( \frac{\delta \lambda_j}{\delta x} \boldsymbol{u}_j \boldsymbol{u}_j^T + \lambda_j \frac{\delta \boldsymbol{u}_j \boldsymbol{u}_j^T}{\delta x} \right) \right)$ (via the product rule)
$= \mbox{Tr}\left( \left( \sum_i \sum_j \frac{1}{\lambda_i} \frac{\delta \lambda_j}{\delta x} \boldsymbol{u}_i \boldsymbol{u}_i^T \boldsymbol{u}_j \boldsymbol{u}_j^T \right) + \left( \sum_i \sum_j \frac{\lambda_j}{\lambda_i} \boldsymbol{u}_i \boldsymbol{u}_i^T \frac{\delta \boldsymbol{u}_j \boldsymbol{u}_j^T}{\delta x} \right) \right)$
$= \mbox{Tr}\left( \left( \sum_i \frac{1}{\lambda_i} \frac{\delta \lambda_i}{\delta x} \boldsymbol{u}_i \boldsymbol{u}_i^T \right) + \left( \sum_i \sum_j \frac{\lambda_j}{\lambda_i} \boldsymbol{u}_i \boldsymbol{u}_i^T \frac{\delta \boldsymbol{u}_j \boldsymbol{u}_j^T}{\delta x} \right) \right)$ (via C.33)
Now, I can see the end game: if only I could throw away that double sum on the right-hand side ($\sum_i \sum_j \frac{\lambda_j}{\lambda_i} \boldsymbol{u}_i \boldsymbol{u}_i^T \frac{\delta \boldsymbol{u}_j \boldsymbol{u}_j^T}{\delta x}$), then taking the trace of what remains via C.33 (which gives $\mbox{Tr}(\boldsymbol{u}_i \boldsymbol{u}_i^T) = \boldsymbol{u}_i^T \boldsymbol{u}_i = 1$) would match the left-hand side and finish the verification. However, I can't see a way to eliminate it, and attempts to prove it equal to zero haven't borne fruit.
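One data point: numerically, the trace of that double sum does seem to vanish. The outer products $P_j = \boldsymbol{u}_j \boldsymbol{u}_j^T$ are insensitive to the sign ambiguity of the eigenvectors, so they can be finite-differenced safely (as long as the eigenvalues stay distinct, which a random matrix ensures). A sketch, using a hypothetical family $A(x) = B + xC$ of symmetric matrices:

```python
import numpy as np

# Hypothetical family A(x) = B + x*C, symmetric with well-separated eigenvalues.
rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n)); B = B @ B.T + 2 * n * np.eye(n)
C = rng.standard_normal((n, n)); C = C + C.T

def eig_outer(x):
    # eigh sorts eigenvalues ascending, giving a consistent ordering across x
    lam, U = np.linalg.eigh(B + x * C)
    return lam, [np.outer(U[:, j], U[:, j]) for j in range(n)]

x, h = 1.0, 1e-5
lam, P = eig_outer(x)
_, Pp = eig_outer(x + h)
_, Pm = eig_outer(x - h)
dP = [(Pp[j] - Pm[j]) / (2 * h) for j in range(n)]   # d(u_j u_j^T)/dx

double_sum = sum((lam[j] / lam[i]) * P[i] @ dP[j]
                 for i in range(n) for j in range(n))
print(np.trace(double_sum))                          # ≈ 0
```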
Help/pointers/frustratingly vague haikus crafted for mathematical enlightenment would be appreciated. Help classifying/titling the question would also be nice.
You've done the left hand side. I'll do the right hand side.
Let $x \mapsto A_x$ be a differentiable map from an interval $(a,b)$ into $\mathbb{R}^{n \times n}$ such that each $A_x$ is self-adjoint.
By the spectral theorem, we may find matrix-valued functions $Q_x, \Lambda_x$ where
$Q_x$ and $\Lambda_x$ are differentiable (such a choice is possible when, for instance, the eigenvalues of $A_x$ remain distinct)
$Q_x$ is orthogonal
$\Lambda_x$ is diagonal
$A_x = Q_x \Lambda_x Q_x^T$
Luckily the product rule for derivatives still holds under matrix multiplication: $A_x^{ \prime} = Q_x^{\prime} \Lambda_x Q_x^T + Q_x \Lambda_x^{\prime} Q_x^T + Q_x \Lambda_x Q_x^{T\prime}$
It is not hard to show that $A_x^{-1}= Q_x \Lambda_x^{-1} Q_x^T$ where the inversion is matrix inversion as opposed to function inversion.
Putting it together, we get $A_x^{-1} A_x^{ \prime} = Q_x \Lambda_x^{-1} Q_x^T( Q_x^{\prime} \Lambda_x Q_x^T + Q_x \Lambda_x^{\prime} Q_x^T + Q_x \Lambda_x Q_x^{T\prime})$ which is $Q_x \Lambda_x^{-1} Q_x^T Q_x^{\prime} \Lambda_x Q_x^T + Q_x \Lambda_x^{-1} \Lambda_x^{\prime} Q_x^T + Q_x Q_x^{T\prime}$
At this point it seems we are stuck, but applying the trace operator, and remembering that trace is invariant under cyclic permutations and similarity, we get $$\text{Tr}(Q_x \Lambda_x^{-1} Q_x^T Q_x^{\prime} \Lambda_x Q_x^T) + \text{Tr}(Q_x \Lambda_x^{-1} \Lambda_x^{\prime} Q_x^T) + \text{Tr}(Q_x Q_x^{T\prime}) = $$
$$ \text{Tr}( Q_x^T Q_x^{\prime} ) + \text{Tr}(\Lambda_x^{-1} \Lambda_x^{\prime}) + \text{Tr}(Q_x^{T\prime} Q_x ) = $$
$$ \text{Tr}( Q_x^T Q_x^{\prime} + Q_x^{T\prime}Q_x) + \text{Tr}(\Lambda_x^{-1} \Lambda_x^{\prime}) =$$ $$\text{Tr}( (Q_x^T Q_x)^{\prime}) + \text{Tr}(\Lambda_x^{-1} \Lambda_x^{\prime}) =$$
$$\text{Tr}( I^{\prime}) + \text{Tr}(\Lambda_x^{-1} \Lambda_x^{\prime}) =$$ $$\text{Tr}( 0) + \text{Tr}(\Lambda_x^{-1} \Lambda_x^{\prime}) =$$ $$\text{Tr}(\Lambda_x^{-1} \Lambda_x^{\prime}) = \sum_i \frac{1}{\lambda_i} \frac{\delta \lambda_i}{\delta x},$$
which is exactly the expression you derived for the left-hand side.