Let $A=[a_{ij}(x)]$ be a nonsingular matrix-valued function with inverse $A^{-1}=B=[b_{ij}(x)]$.
I am trying to use the chain rule to justify $\dfrac{\partial}{\partial x^i} (\log|\det A|)=\dfrac{(\operatorname{cof}A)_{rs}}{\det A} \dfrac{\partial a_{rs}}{\partial x^i}=b_{sr} \dfrac{\partial a_{rs}}{\partial x^i}$ (summation over the repeated indices $r,s$).
The solutions just say that the proof follows by noting the row expansion of the determinant, $$\det A=\sum^n _{r=1} a_{ir} (\operatorname{cof} A)_{ir}$$ for any fixed $1 \leq i \leq n$, and then using the chain rule.
The proof given in Schaum's Outline of Tensor Calculus (p. 106) is as follows.
By the chain rule, $$\frac{\partial}{\partial x^i} (\log |\det A|)=\frac{1}{\det A} \frac{\partial}{\partial x^i} (\det A)=\frac{1}{\det A} \frac{\partial (\det A)}{\partial a_{rs}} \frac{\partial a_{rs}}{\partial x^i}=\frac{(\operatorname{cof}A)_{rs}}{\det A} \frac{\partial a_{rs}}{\partial x^i}=b_{sr} \frac{\partial a_{rs}}{\partial x^i}$$
Where does $\det A=\sum^n _{r=1} a_{ir} (\operatorname{cof} A)_{ir}$ come into it?
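My best guess (not spelled out in the book) is that the expansion is what evaluates $\frac{\partial}{\partial a_{rs}}(\det A)$: expanding along row $r$ gives $$\det A=\sum^n_{s=1} a_{rs}\,(\operatorname{cof}A)_{rs},$$ and each cofactor $(\operatorname{cof}A)_{rs}$ is the determinant of a submatrix with row $r$ deleted, so it contains no entry of row $r$ at all; hence $$\frac{\partial}{\partial a_{rs}}(\det A)=(\operatorname{cof}A)_{rs}.$$ Is this the intended step?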
Essentially the same question was asked in Matrix identity involving trace, and the final formula there is $$d_t(\log\det A_t)=\operatorname{trace}\left(A_t^{-1}\;A_t^\prime\right)$$ where $t$ plays the role of your $x^i$. Of course, if you are actually asking about the derivative of the determinant map itself, then see the links therein, e.g. to Jacobi's formula.
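For comparison with the book's index form (a sketch, using the $a_{ij}$, $b_{ij}$ of your question and summing over repeated indices), the trace formula says exactly the same thing: $$\operatorname{trace}\left(A^{-1}\,\frac{\partial A}{\partial x^i}\right)=b_{sr}\,\frac{\partial a_{rs}}{\partial x^i},$$ since the $(s,s)$ entry of $A^{-1}\,\partial_i A$ is $\sum_r b_{sr}\,\partial_i a_{rs}$.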
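If it helps to see the identity hold numerically, here is a quick sanity check (a sketch, not from the thread; the curve $A(t)$ below is arbitrary, chosen only to be nonsingular near the chosen $t_0$):

```python
import numpy as np

# Check d/dt log|det A(t)| = trace(A(t)^{-1} A'(t)) at a point t0,
# comparing a central finite difference against the trace formula.

def A(t):
    # an arbitrary nonsingular matrix-valued curve (hypothetical example)
    return np.array([[2.0 + t, np.sin(t)],
                     [t**2,    3.0 + np.cos(t)]])

def A_prime(t):
    # entrywise derivative of A(t), computed by hand
    return np.array([[1.0,     np.cos(t)],
                     [2.0 * t, -np.sin(t)]])

t0, h = 0.7, 1e-6

# left-hand side: central difference of log|det A|, via slogdet
lhs = (np.linalg.slogdet(A(t0 + h))[1]
       - np.linalg.slogdet(A(t0 - h))[1]) / (2 * h)

# right-hand side: trace(A^{-1} A'), using solve instead of an explicit inverse
rhs = np.trace(np.linalg.solve(A(t0), A_prime(t0)))

print(f"finite difference: {lhs:.8f}   trace formula: {rhs:.8f}")
```

The two printed values agree up to the finite-difference error.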