I am minimizing a nonconvex function $f$ defined over the positive semidefinite cone $S_+^n$ through linearization methods (this goes under the name of sequential linear/convex approximation methods), and I would like to constrain the problem in such a way that at every iteration, the new matrix at step $k+1$, i.e. $S^{(k+1)}$, is "not too far" from the matrix $S^{(k)}$ at the previous iteration.
Formally:
$$\min_S f(S) \\ \text{s.t. } S \in S_+^n \\ \qquad \operatorname{dist}(S,S^{(k)}) < \epsilon$$
where $dist(S,S^{(k)})$ encodes a notion of trust-region.
What is a good trust region that ensures the two matrices are similar? In other words, which metric between the two matrices can be chosen so that the cost function evaluated at the two matrices does not differ too much?
What about $\|S-S^{(k)}\|_{F}^2< \epsilon$? Which other metrics are suitable for matrices?
The Frobenius norm is an obvious one to consider. As appropriate, you could consider a weighted Frobenius norm, where elements are weighted according to their importance or perhaps (if known) the sensitivity of the objective function to the particular elements. A rationale for use of the Frobenius norm is that it is the length of the matrix step corresponding to the inner-product distance of the matrix step (the difference of the matrices), i.e., $\sqrt{\operatorname{trace}\big((S^{(k)}-S)^T(S^{(k)}-S)\big)} = \|S^{(k)}-S\|_F$.
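As a quick sanity check of that identity, here is a minimal numpy sketch (the matrices `S` and `Sk` are made-up stand-ins for two iterates) verifying that the trace formula reproduces the Frobenius norm of the step:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = A @ A.T             # a PSD iterate (stand-in for S at step k+1)
B = rng.standard_normal((4, 4))
Sk = B @ B.T            # a PSD previous iterate S^(k)

D = Sk - S
# Frobenius norm as the inner-product length of the matrix step:
fro_via_trace = np.sqrt(np.trace(D.T @ D))
assert np.isclose(fro_via_trace, np.linalg.norm(D, "fro"))
```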
An alternative, still easy to deal with (even easier than the Frobenius norm), is to impose an infinity-norm constraint on the individual elements, i.e., each element must be within some specified amount between the two matrices. So this is just adding element-wise lower and upper bound constraints. You can view this as a constraint on the infinity norm of the difference of the vectorized matrices. (Of course, the Frobenius norm constraint is just a constraint on the vector two-norm of the difference of the vectorized matrices.)
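The equivalence between the infinity-norm constraint and element-wise box bounds can be illustrated with a small numpy sketch (the matrices and the tolerance `eps` are made-up for illustration):

```python
import numpy as np

eps = 0.5
Sk = np.eye(3)               # previous iterate
S = Sk + 0.3 * np.eye(3)     # candidate step; largest elementwise change is 0.3

# Infinity norm of the vectorized difference ...
inf_norm = np.max(np.abs(S - Sk))
# ... matches checking the elementwise box  Sk - eps <= S <= Sk + eps:
in_box = np.all(S >= Sk - eps) and np.all(S <= Sk + eps)
assert (inf_norm <= eps) == in_box
```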
The above approaches are essentially vector-level metrics which don't take into account matrix-level properties (well, Frobenius norm sort of does based on inner product). The approaches below are based on matrix-level metrics.
Getting to the more exotic and adding significantly to the computational burden of solving the individual trust-region problems, you could impose a constraint on the Kullback–Leibler divergence (as applied to the covariance matrices of equal-mean multivariate Normal distributions), which brings in an exponential cone constraint to deal with logdet. However, this constraint is asymmetric between the two matrices (order matters), and only one "order" will be convex.
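For equal-mean Gaussians $\mathcal{N}(0, \Sigma_0)$ and $\mathcal{N}(0, \Sigma_1)$, the KL divergence reduces to $\tfrac{1}{2}\big(\operatorname{trace}(\Sigma_1^{-1}\Sigma_0) - n + \ln\det\Sigma_1 - \ln\det\Sigma_0\big)$. A minimal numpy sketch (the helper name `gauss_kl` and the example matrices are made-up) shows the formula and its asymmetry:

```python
import numpy as np

def gauss_kl(S0, S1):
    """KL divergence between N(0, S0) and N(0, S1), both positive definite."""
    n = S0.shape[0]
    _, logdet1 = np.linalg.slogdet(S1)
    _, logdet0 = np.linalg.slogdet(S0)
    return 0.5 * (np.trace(np.linalg.solve(S1, S0)) - n + logdet1 - logdet0)

S = np.diag([1.0, 2.0])
Sk = np.diag([1.5, 1.5])
# Order matters: the divergence is asymmetric ...
assert not np.isclose(gauss_kl(S, Sk), gauss_kl(Sk, S))
# ... and it vanishes when the two matrices coincide.
assert np.isclose(gauss_kl(S, S), 0.0)
```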
Even more computationally demanding, but perhaps "aesthetically" pleasing, is Quantum Relative Entropy https://github.com/hfawzi/cvxquad . Although it is asymmetric between the two matrices, it is convex for either order, and hence (at the expense of doubling the number of constraints) can be symmetrized by adding both orders. Unfortunately, current approaches for dealing with Quantum Relative Entropy require incorporating large-dimensional SDP (LMI) constraints (see the table on p.3 of https://arxiv.org/pdf/1705.00812.pdf), so you will be left with additional, larger SDP constraints than your original problem had.
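For reference, the quantum relative entropy is $D(A\|B) = \operatorname{trace}\big(A(\log A - \log B)\big)$. Evaluating it (as opposed to imposing it as a convex constraint, which is what cvxquad handles) only needs a matrix logarithm; a minimal numpy sketch with made-up helper names `logm_psd` and `quantum_rel_entropy` and made-up trace-one example matrices:

```python
import numpy as np

def logm_psd(A):
    """Matrix logarithm of a positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.T

def quantum_rel_entropy(A, B):
    """D(A||B) = trace(A (log A - log B)) for positive definite A, B."""
    return np.trace(A @ (logm_psd(A) - logm_psd(B)))

S = np.diag([0.7, 0.3])
Sk = np.diag([0.5, 0.5])
# Asymmetric: the two orders give different values ...
assert not np.isclose(quantum_rel_entropy(S, Sk), quantum_rel_entropy(Sk, S))
# ... but (for equal-trace matrices, by Klein's inequality) both are nonnegative,
# so adding both orders gives a symmetric, nonnegative dissimilarity.
assert quantum_rel_entropy(S, Sk) >= 0 and quantum_rel_entropy(Sk, S) >= 0
```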