Consider fixed, linearly independent, positive definite matrices $A_1, \dots, A_k$ and a fixed, nonzero vector $x$.
For $\lambda \in \mathbb{R}^k$, define $$A(\lambda) = \left(\sum_{i=1}^{k} e^{\lambda_i} A_i\right)^{-1}$$
and $$f(\lambda) = x^T A(\lambda) x + \ln \det A(\lambda)^{-1}.$$
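For concreteness, $f$ can be evaluated numerically. A minimal sketch in Python/NumPy (the random $A_i$, $x$, and dimensions below are illustrative stand-ins, not part of the problem):

```python
import numpy as np

def random_spd(n, rng):
    # Random symmetric positive definite matrix (illustrative stand-in for A_i)
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def f(lam, As, x):
    # f(lambda) = x^T A(lambda) x + ln det A(lambda)^{-1},
    # with A(lambda) = (sum_i e^{lambda_i} A_i)^{-1} =: S^{-1}.
    # Note ln det A(lambda)^{-1} = ln det S, computed stably via slogdet.
    S = sum(np.exp(l) * Ai for l, Ai in zip(lam, As))
    return x @ np.linalg.inv(S) @ x + np.linalg.slogdet(S)[1]

rng = np.random.default_rng(0)
n, k = 4, 3
As = [random_spd(n, rng) for _ in range(k)]
x = rng.standard_normal(n)
print(f(np.zeros(k), As, x))
```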
I am trying to find out whether $f$ is convex. The motivation is that I have seen this as a way to parameterize a covariance matrix in terms of a sum of fixed, known matrices, and I was wondering whether the resulting optimization problem is convex.
My approach so far: I want to consider the Hessian of $f$ and check it for positive semidefiniteness. We have that $$\frac{\partial^2}{\partial \lambda_i \partial \lambda_j} f(\lambda) = x^T \frac{\partial^2}{\partial \lambda_i \partial \lambda_j} A(\lambda) x + \frac{\partial^2}{\partial \lambda_i \partial \lambda_j} \ln \det A(\lambda)^{-1}$$
and thus we want to check whether for all $y$,
$$ \sum_{i,j} y_i \left(x^T \frac{\partial^2}{\partial \lambda_i \partial \lambda_j} A(\lambda)x + \frac{\partial^2}{\partial \lambda_i \partial \lambda_j} \ln \det A(\lambda)^{-1}\right) y_j \ge 0.$$
I get that
$$ \frac{\partial^2}{\partial \lambda_i \partial \lambda_j} A(\lambda) = \\ A(\lambda) \left(e^{\lambda_i} A_i A(\lambda) e^{\lambda_j} A_j + e^{\lambda_j} A_j A(\lambda) e^{\lambda_i} A_i - \delta_{i,j} e^{\lambda_i} A_i \right)A(\lambda) ,$$
and $$ \frac{\partial^2}{\partial \lambda_i \partial \lambda_j} \ln \det A(\lambda)^{-1} = \frac{\partial}{\partial \lambda_i}\text{tr}\left[ A(\lambda) \frac{\partial}{\partial \lambda_j} A(\lambda)^{-1}\right] \\ = \text{tr}\left[\frac{\partial}{\partial \lambda_i} A(\lambda) \frac{\partial}{\partial \lambda_j} A(\lambda)^{-1}\right] + \text{tr}\left[ A(\lambda) \frac{\partial^2}{\partial \lambda_i \partial \lambda_j} A(\lambda)^{-1}\right] \\ = -\text{tr}\left[A(\lambda) e^{\lambda_i}A_i A(\lambda) e^{\lambda_j} A_j\right] + \delta_{i,j}\,\text{tr}\left[ A(\lambda) e^{\lambda_i}A_i\right]. $$
One can also summarize this, using the cyclic property of the trace and symmetry, as $$\frac{\partial^2}{\partial \lambda_i \partial \lambda_j} f(\lambda) = \text{tr}\left[A(\lambda) \left(e^{\lambda_i} A_i A(\lambda) e^{\lambda_j} A_j\right)A(\lambda) (2 xx^T - A(\lambda)^{-1})\right] + \text{tr}\left[A(\lambda)\delta_{i,j} e^{\lambda_i}A_i A(\lambda) (A(\lambda)^{-1} - xx^T)\right] $$
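As a sanity check on this closed form, the entries can be compared against a finite-difference Hessian of $f$; a sketch under the same setup (random symmetric positive definite $A_i$ and a random $x$ as placeholder data):

```python
import numpy as np

def f(lam, As, x):
    # f(lambda) = x^T A(lambda) x + ln det A(lambda)^{-1}, with A = S^{-1}
    S = sum(np.exp(l) * Ai for l, Ai in zip(lam, As))
    return x @ np.linalg.inv(S) @ x + np.linalg.slogdet(S)[1]

def hess_entry(lam, As, x, i, j):
    # Closed form: tr[A Bi A Bj A (2 x x^T - A^{-1})]
    #            + delta_ij tr[A Bi A (A^{-1} - x x^T)], with Bi = e^{lam_i} A_i
    S = sum(np.exp(l) * Ai for l, Ai in zip(lam, As))
    A = np.linalg.inv(S)
    Bi, Bj = np.exp(lam[i]) * As[i], np.exp(lam[j]) * As[j]
    xxT = np.outer(x, x)
    h = np.trace(A @ Bi @ A @ Bj @ A @ (2 * xxT - S))
    if i == j:
        h += np.trace(A @ Bi @ A @ (S - xxT))
    return h

def fd_hess_entry(lam, As, x, i, j, h=1e-4):
    # Central finite difference in the i-th and j-th coordinates
    ei = np.zeros(len(lam)); ei[i] = h
    ej = np.zeros(len(lam)); ej[j] = h
    return (f(lam + ei + ej, As, x) - f(lam + ei - ej, As, x)
            - f(lam - ei + ej, As, x) + f(lam - ei - ej, As, x)) / (4 * h * h)
```

At a few random $\lambda$, `hess_entry` and `fd_hess_entry` should agree up to finite-difference error.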
but I do not know how to proceed from here.
Any ideas? Is this even the right approach, or is there a simpler way to see it?
I ran some simulations, and $f$ does not appear to be convex. Here is the code (Python):
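A sketch of such a simulation, testing the midpoint inequality $f\!\left(\tfrac{a+b}{2}\right) \le \tfrac{1}{2}(f(a)+f(b))$ at random pairs of points; the random $A_i$, $x$, dimensions, and seed are illustrative choices:

```python
import numpy as np

def f(lam, As, x):
    # f(lambda) = x^T A(lambda) x + ln det A(lambda)^{-1}
    S = sum(np.exp(l) * Ai for l, Ai in zip(lam, As))
    return x @ np.linalg.inv(S) @ x + np.linalg.slogdet(S)[1]

def max_midpoint_violation(As, x, trials=2000, scale=3.0, seed=0):
    # Convexity requires f((a+b)/2) <= (f(a) + f(b))/2 for all a, b.
    # Returns the largest observed violation of that inequality;
    # any strictly positive return value is a counterexample to convexity.
    rng = np.random.default_rng(seed)
    k = len(As)
    worst = -np.inf
    for _ in range(trials):
        a = scale * rng.standard_normal(k)
        b = scale * rng.standard_normal(k)
        gap = f((a + b) / 2, As, x) - (f(a, As, x) + f(b, As, x)) / 2
        worst = max(worst, gap)
    return worst

rng = np.random.default_rng(42)
n, k = 4, 3
As = []
for _ in range(k):
    M = rng.standard_normal((n, n))
    As.append(M @ M.T + np.eye(n))
x = rng.standard_normal(n)
print(max_midpoint_violation(As, x))
```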