I need to minimize a function with respect to a matrix that must be positive definite. To enforce positive definiteness, I see two options:
1. Write a wrapper function that takes the Cholesky factor as input, forms the resulting positive definite matrix, and feeds it into the original function. Then do unconstrained minimization of the wrapper function.
2. Do constrained minimization of the original function, e.g. by requiring the eigenvalues to be positive.
Which of the two options is better?
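To make option (1) concrete, here is a minimal sketch of the wrapper approach, assuming a hypothetical least-squares objective (fitting a target matrix `T`) and `scipy.optimize.minimize`; the names `objective`, `unpack`, and `wrapper` are mine, not from any library:

```python
import numpy as np
from scipy.optimize import minimize

n = 2
T = np.array([[2.0, 0.5], [0.5, 1.0]])  # hypothetical p.d. target matrix

def objective(A):
    # hypothetical objective: least-squares fit of A to T
    return np.sum((A - T) ** 2)

def unpack(x):
    # rebuild the lower-triangular factor L from the flat parameter vector;
    # A = L L^T is then p.s.d. by construction (p.d. whenever diag(L) != 0)
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = x
    return L @ L.T

def wrapper(x):
    return objective(unpack(x))

x0 = np.eye(n)[np.tril_indices(n)]       # start from A = I
res = minimize(wrapper, x0, method="BFGS")  # unconstrained minimization
A_opt = unpack(res.x)
```

Note that the parameterization is redundant (L and -L give the same A), which is one source of the non-convexity mentioned in the answers.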
Before you reinvent the wheel, do note that optimization over semidefinite matrices (semidefinite programming) is a very well-established field with a large body of theory, many applications, and readily available solvers.
Essentially all methods to solve these problems work in what you could call your category (2), typically using primal-dual interior-point solvers. Practical examples include SeDuMi, SDPA, SDPT3, CSDP, Mosek, DSDP, etc.
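For a generic nonlinear objective (outside the reach of the specialized SDP solvers above), a small sketch of the constrained approach (2) with SciPy: constrain the smallest eigenvalue of the iterate to stay positive. The objective and target matrix `T` are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

n = 2
T = np.array([[2.0, 0.5], [0.5, 1.0]])  # hypothetical p.d. target matrix
iu = np.triu_indices(n)

def unpack(x):
    # symmetric matrix from its upper-triangular entries
    A = np.zeros((n, n))
    A[iu] = x
    return A + np.triu(A, 1).T

def f(x):
    # hypothetical objective: least-squares fit of A to T
    return np.sum((unpack(x) - T) ** 2)

# require the smallest eigenvalue to stay above a small margin,
# so every feasible iterate is positive definite
min_eig = NonlinearConstraint(
    lambda x: np.linalg.eigvalsh(unpack(x))[0], 1e-6, np.inf)

res = minimize(f, np.eye(n)[iu], method="trust-constr",
               constraints=[min_eig])
A_opt = unpack(res.x)
```

One caveat: the smallest eigenvalue is not differentiable where eigenvalues coincide, which can slow down gradient-based solvers near such points.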
There is one notable exception, though: a method proposed by Burer and Monteiro (implemented in the solver SDPLR). They parameterize the matrix using a factorized representation, as you describe in (1). This leads to a non-convex problem (in contrast to the underlying problem, which is convex), but surprisingly it performs well. It has recently been analyzed further, and it has been proven that, despite the non-convexity, it does not suffer from spurious local minima.