Let $K$ be a convex body in $\mathbb{R}^n$ and set $f:\textrm{SL}(n)\rightarrow \mathbb{R}$ as $f(T)=\textrm{Vol}_n (TB\cap K)$ where $B$ is the Euclidean unit ball. How can we find extreme points of $f$?
What I'm looking for is some Taylor expansion of $f$, so that for matrices of the form $Q=I_n + \epsilon F$ I may write something along the lines of $$f(Q)=f(I_n)+\epsilon f'(I_n)$$ where $f'$ is a directional derivative of $f$ of some sort (in the direction $F$). I believe this should amount to something like $f'(T)=\textrm{Vol}_{n-1} (\partial TB\cap K)$, but this is pure intuition; I'm not sure how it can be proven.
Let us first formalise the idea of directional derivatives along matrices.
Derivatives of variable matrices are usually expressed as Lie derivatives. The basic object is a Lie group, i.e., a differentiable manifold that has a group structure such that the group operations are differentiable. In our case this is the $(n^2-1)$-dimensional manifold $SL(n)$ consisting of all real $n\times n$ matrices with determinant $1.$ They are also precisely the linear transformations of $\mathbb R^n$ that preserve volume and orientation.
Lie theory considers $1$-parameter subgroups: differentiable homomorphisms from the simplest possible Lie group $(\mathbb R,+)$ to the Lie group under study:
$$T:\mathbb R\to SL(n):t\mapsto T_t,\hskip1cm T_{s+t}=T_sT_t.$$
The derivatives at $0$ of all such possible subgroups form the tangent space of the differentiable manifold at the unit element $T_0=I$, which in this context is called the Lie algebra. Our Lie algebra is denoted ${\mathfrak{sl}(n)}$ and it consists of all $n\times n$ matrices with trace $0.$
The one-parameter group generated by a matrix $A\in\mathfrak{sl}(n)$ is given by the exponential mapping
$$\exp:\mathfrak{sl}(n)\to SL(n):A\mapsto\exp(A)=\sum_{i=0}^\infty\frac{A^i}{i!}$$
which answers your question about a power series expansion: to first order, $\exp(\epsilon A)=I_n+\epsilon A.$
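To make the exponential map concrete, here is a small numerical sketch (the helper name `expm_series` and the sample matrix are my own choices, not part of the answer): a truncated power series for $\exp$, applied to a traceless $A$, lands in $SL(2)$ because $\det(\exp(A))=e^{\operatorname{tr}A}$, and $t\mapsto\exp(tA)$ is a one-parameter subgroup.

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via the truncated power series sum of A^i / i!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for i in range(1, terms):
        term = term @ A / i
        result = result + term
    return result

# A traceless matrix generates a one-parameter subgroup of SL(n):
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # trace 0; generates rotations
for t in [0.1, 0.5, 2.0]:
    print(np.linalg.det(expm_series(t * A)))   # each ≈ 1.0

# Homomorphism property T_{s+t} = T_s T_t along the subgroup:
print(np.allclose(expm_series(0.3 * A) @ expm_series(0.7 * A),
                  expm_series(1.0 * A)))       # True
```

In practice one would use `scipy.linalg.expm` instead of a hand-rolled series; the point here is only that trace $0$ forces determinant $1$.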
The derivative of $\textrm{Vol}_n (T_tB\cap K)$ along the one-parameter group $T_t=\exp(tA)$ generated by $A\in\mathfrak{sl}(n)$ is more easily evaluated if we replace the indicator functions of the compact sets $B$ and $K$ with differentiable functions $\phi$ and $\psi$ that approximate them. Since $x\in T_tB$ exactly when $T_t^{-1}x\in B$, we are looking at the quantity
$$V_t=\int_{\mathbb R^n}\phi(T_t^{-1}x)\psi(x)\,dx$$
Let us evaluate the derivative of $V_t$ at $t=0.$
$$\eqalign{ \frac{dV_t}{dt}(t=0)&=\frac{d}{dt}\Big|_{t=0}\int_{\mathbb R^n}\phi(T_t^{-1}x)\psi(x)\,dx\\ &=\int_{\mathbb R^n}\frac{d}{dt}\Big|_{t=0}\phi(T_t^{-1}x)\,\psi(x)\,dx\\ &=\int_{\mathbb R^n}\nabla\phi(x)\cdot (-Ax)\,\psi(x)\,dx\\ }$$
since $T_t^{-1}=\exp(-tA)$ and hence $\frac{d}{dt}T_t^{-1}x\big|_{t=0}=-Ax.$
As $\phi$ approaches the indicator of $B$, its gradient converges to a distribution that is concentrated on $\partial B$ and models the inward normal $-n$ of $B$ (since $B$ is convex, its boundary has a normal almost everywhere). The two minus signs cancel, and we have
$$\eqalign{ \frac{dV_t}{dt}(t=0)&=\int_{\partial B\cap K}Ax\cdot n\ dS\\ }$$
where $n$ is the outward normal of $B.$
Alternatively, notice that $Ax$ is a divergence-free vector field (because the trace of $A$ is $0$), so its total flux through $\partial(B\cap K)$ vanishes. Since $\partial(B\cap K)$ consists, up to a null set, of $\partial B\cap K$ and $\partial K\cap B$, the integral is also equal to
$$\eqalign{ \frac{dV_t}{dt}(t=0)&=-\int_{\partial K\cap B}Ax\cdot n\ dS\\ }$$
where now $n$ is the outward normal of $K$ (the two integrals have opposite signs, as one expects from partial integration, because 'outward' means out of $B$ in the first and out of $K$ in the second).
The first integral is not quite your intuitive idea $\textrm{Vol}_{n-1}(\partial TB\cap K)$, but there is a close resemblance: it is an integral over exactly that surface, with weight $Ax\cdot n$ instead of the constant $1.$
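One can sanity-check the boundary integral over $\partial B\cap K$ numerically in dimension $2$. Everything in this snippet is an illustrative choice of mine, not part of the answer: with $B$ the unit disc, $K=\{x_1\le 0.3\}$ a half-plane and $A=\mathrm{diag}(1,-1)\in\mathfrak{sl}(2)$, the volume $V_t$ has a closed form (a circular segment), so a finite difference can be compared with the surface integral.

```python
import numpy as np

# Illustrative 2D check (assumed setup): B = unit disc, K = {x1 <= c},
# A = diag(1, -1) in sl(2), so T_t = exp(tA) = diag(e^t, e^-t).
c = 0.3
A = np.array([[1.0, 0.0], [0.0, -1.0]])

def V(t):
    """Vol(T_t B ∩ K).  Substituting x = T_t u (det T_t = 1) turns this
    into the area of the unit disc cut by {e^t u1 <= c}: a circular
    segment at distance d = c e^{-t} from the centre."""
    d = c * np.exp(-t)
    return np.pi - (np.arccos(d) - d * np.sqrt(1.0 - d * d))

# Left side: central finite difference of V_t at t = 0.
h = 1e-6
lhs = (V(h) - V(-h)) / (2.0 * h)

# Right side: the boundary integral over ∂B ∩ K; on the unit circle
# the outward normal is n = x and dS = dθ.
n_pts = 400000
theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
x = np.stack([np.cos(theta), np.sin(theta)])
integrand = np.einsum('ij,ij->j', A @ x, x) * (x[0] <= c)
rhs = integrand.sum() * (2.0 * np.pi / n_pts)

print(lhs, rhs)  # both ≈ -0.5724
```

The agreement of the two numbers is exactly the identity derived above, in a case where both sides are easy to compute.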
Higher derivatives of $V_t$ are not guaranteed to exist without additional conditions on the shape of $K.$ This can be understood intuitively by noticing that the first derivative is an integral where not only the integrand but also the domain of integration depends on $t.$ In fact, the first derivative need not even be a continuous function, as can be seen in $2$ dimensions by taking $B$ the disc with centre $(5,0)$ and radius $1,$ $K$ the upper half of $B,$ and $A=\left(\begin{matrix}0&1\\-1&0\end{matrix}\right)$ (a generator of the rotations around the origin).
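To see the discontinuity concretely, here is a numerical sketch of that configuration (the grid resolution, bounding box, and names are my own choices): the one-sided difference quotients of $V_t$ at $t=0$ come out very different.

```python
import numpy as np

# B = disc of radius 1 centred (5,0), K = upper half of B,
# A = [[0,1],[-1,0]]; T_t = exp(tA) rotates the plane and moves the
# centre of B to (5 cos t, -5 sin t).
step = 0.002
xs = np.arange(3.7, 6.3, step)
ys = np.arange(-1.3, 1.3, step)
X, Y = np.meshgrid(xs, ys)
in_K = ((X - 5.0) ** 2 + Y ** 2 <= 1.0) & (Y >= 0.0)

def V(t):
    """Area of T_t B ∩ K, using x ∈ T_t B ⟺ T_t^{-1} x ∈ B,
    with T_t^{-1} = exp(-tA) = [[cos t, -sin t], [sin t, cos t]]."""
    c, s = np.cos(t), np.sin(t)
    U1 = c * X - s * Y
    U2 = s * X + c * Y
    in_TtB = (U1 - 5.0) ** 2 + U2 ** 2 <= 1.0
    return np.count_nonzero(in_TtB & in_K) * step * step

# One-sided difference quotients at t = 0: for t > 0 the disc sweeps
# downward out of K, losing area at rate about 2*5 = 10; for t < 0 it
# sweeps upward and the loss is only of higher order in t.
h = 0.05
right = (V(h) - V(0.0)) / h    # clearly negative (tends to -10 as h -> 0)
left = (V(0.0) - V(-h)) / h    # close to 0
print(left, right)
```

The jump matches the formula above: at $t=0$ the arc $\partial B\cap K$ is the whole upper semicircle, where $Ax\cdot n=-5\sin\theta$ integrates to $-10$, but an arbitrarily small rotation in the other direction pushes that arc out of the integral.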