I am currently dealing with an optimization problem where my approach is to apply a majorization-minimization (MM) algorithm. Here, I want to replace my objective function with the following inequality, shown as (25) in the photo I added.

As we can see, the third term of this approximation requires the largest eigenvalue of the Hessian matrix of the objective function. Computing this explicitly can be expensive, so I am wondering whether anyone here knows of a way to find an upper bound on the largest eigenvalue of the Hessian without having to explicitly calculate the Hessian matrix.
If no such upper bound is possible without explicitly forming the Hessian, is there some other low-complexity method? So far, my strategy is to calculate the Hessian and then take the largest absolute row sum or column sum, since this will certainly work as an upper bound.
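That row/column-sum strategy is indeed valid: for a Hermitian matrix, the induced $\infty$-norm (the largest absolute row sum) bounds every eigenvalue in magnitude, e.g. by Gershgorin's circle theorem. A quick numerical sketch, using a random Hermitian matrix as a stand-in for the Hessian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian stand-in for the Hessian.
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = A + A.conj().T

# Largest absolute row sum = induced infinity-norm; for a Hermitian
# matrix this equals the largest absolute column sum as well.
row_sum_bound = np.linalg.norm(H, ord=np.inf)

# True largest eigenvalue magnitude (Hermitian => real eigenvalues).
lam_max = np.abs(np.linalg.eigvalsh(H)).max()

assert lam_max <= row_sum_bound + 1e-12
```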
Some helpful information: the objective function is $\operatorname{tr}\big(P(HH^*)^{-1}\big)$, where $(\cdot)^*$ denotes the complex conjugate transpose, $P$ is a diagonal "power allocation" matrix with non-negative entries, and $H$ is a standard channel matrix (specifically, a MIMO wireless communication scenario).
If there is any extra information I should provide, please let me know. I really appreciate any help.
EDIT: Extra Information
All matrices and variables are complex-valued. The matrix $H$ is a composite channel matrix built in the following manner: $H = H_1\Phi H_2 + H_d$, where $H_1, H_2$ and $H_d$ are standard channel matrices, and $\Phi$ is a diagonal matrix of unit-modulus phase terms $e^{j\theta_i}$.
This is the standard set up for a wireless communication scenario using an Intelligent Reflecting Surface. In one path, we simply connect 2 stations using the direct channel $H_d$ . In the other path, we connect 2 stations via an intelligent reflecting surface, where the channels which connect station 1 to the mirror, and the mirror to station 2 are described by $H_1$, $H_2$.
So the optimization problem is:
minimize $\operatorname{tr}\big(P(HH^*)^{-1}\big)$ by varying $\theta$, under the constraint that every diagonal entry of $\Phi$ has unit modulus: $|\Phi_{ii}| = |e^{j\theta_i}| = 1$ for all $i$ from $1$ to $N$.
The power allocation matrix $P$ is fixed and does not change.
So we want to vary the phase of every diagonal entry such that this objective function is minimized.
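For concreteness, here is a minimal sketch of evaluating this objective for a given phase vector $\theta$ (all matrices here are random stand-ins, and the dimensions are chosen so that $HH^*$ is generically invertible):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 3, 5, 4  # station 1 -> IRS -> station 2, plus direct path

H1 = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
H2 = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
Hd = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
P = np.diag(rng.uniform(0.0, 5.0, size=m))  # fixed power allocation

def objective(theta):
    """tr(P (H H*)^-1) with H = H1 diag(exp(j theta)) H2 + Hd."""
    H = H1 @ np.diag(np.exp(1j * theta)) @ H2 + Hd
    # The trace is real because (H H*)^-1 is Hermitian and P is real diagonal.
    return np.trace(P @ np.linalg.inv(H @ H.conj().T)).real

theta = rng.uniform(0.0, 2 * np.pi, size=n)
val = objective(theta)
assert np.isfinite(val) and val > 0
```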
If we recast the objective function as a function of the "vectorized" channel matrix, then we can get the derivative of your function pretty straightforwardly. Let $h$ denote the (column-major) vectorization $h = \operatorname{vec}(H)$. We can rewrite your function as $$ \begin{align} f(h) &= \operatorname{tr}(PHH^*) = \operatorname{tr}([PH]H^*) \\ & = \operatorname{vec}(H)^*\operatorname{vec}(PH) \\ & = \operatorname{vec}(H)^*(I \otimes P)\operatorname{vec}(H) = h^*(I \otimes P)h. \end{align} $$ That is, $f$ is a "quadratic" function with Hessian matrix $2(I \otimes P)$. The eigenvalues of $I \otimes P$ are equal to the eigenvalues of $P$ (but with greater multiplicity), so the maximal eigenvalue of the Hessian is simply $2$ times the maximal eigenvalue of $P$, which (because $P$ is a diagonal matrix with non-negative entries) is simply $2$ times the maximal entry of $P$.
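A quick numerical check of this identity and the resulting eigenvalue claim (the dimensions and the random $H$, $P$ below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3

H = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
P = np.diag(rng.uniform(0.0, 5.0, size=m))  # diagonal, non-negative

# f(H) = tr(P H H*) equals h* (I kron P) h with h = vec(H) (column-major).
h = H.reshape(-1, order="F")
M = np.kron(np.eye(n), P)

f_trace = np.trace(P @ H @ H.conj().T).real
f_quad = (h.conj() @ M @ h).real
assert np.isclose(f_trace, f_quad)

# The Hessian of this quadratic is 2 (I kron P): its largest eigenvalue
# is twice the largest diagonal entry of P.
lam_max_hessian = 2 * np.linalg.eigvalsh(M).max()
assert np.isclose(lam_max_hessian, 2 * np.diag(P).max())
```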
With the additional information, we are given that $H$ (or equivalently $h$) depends on a parameter $\theta = (\theta_1,\dots,\theta_n)$, with $$ H = H_1\operatorname{diag}(\exp(j\theta))H_2 + H_d. $$ Let $x_1,\dots,x_n$ denote the columns of $H_1$ and let $y_1,\dots,y_n$ denote the columns of $H_2^\top$. We can write $$ H = H_d + \sum_{i=1}^n \exp(j\theta_i)x_iy_i^\top, $$ which under vectorization tells us that $$ h = h_d + \sum_{i=1}^n \exp(j\theta_i)\,y_i \otimes x_i, $$ where $h_d = \operatorname{vec}(H_d)$ and $\otimes$ is a Kronecker product. Plugging this into $f$ gives us the objective as a function of the $\theta_i$. In particular, we have $$ f(\theta) = [\text{const.}] + 2 \operatorname{Re}\left[h_d^*(I \otimes P)\sum_{i=1}^n \exp(j\theta_i)\,y_i \otimes x_i \right] \\+ \sum_{p,q = 1}^n \exp[j (\theta_q - \theta_p)](y_p^*y_q)(x_p^*Px_q). $$ Perhaps you will find this explicit form easier to work with. I believe that the second term will only affect the diagonal elements of the Hessian.
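A numerical check of both the rank-one decomposition of $h$ and the expanded three-term objective (random stand-in channel matrices, arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 4, 5, 3  # H1: m x n, H2: n x k, Hd: m x k

H1 = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
H2 = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
Hd = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
theta = rng.uniform(0.0, 2 * np.pi, size=n)

H = H1 @ np.diag(np.exp(1j * theta)) @ H2 + Hd

# h = h_d + sum_i exp(j theta_i) y_i kron x_i, where x_i is column i of H1
# and y_i is column i of H2^T (i.e. row i of H2); vec(x y^T) = y kron x.
h = Hd.reshape(-1, order="F") + sum(
    np.exp(1j * theta[i]) * np.kron(H2[i, :], H1[:, i]) for i in range(n)
)
assert np.allclose(h, H.reshape(-1, order="F"))

# Check the expanded objective against the quadratic form h* (I kron P) h.
P = np.diag(rng.uniform(0.0, 5.0, size=m))
M = np.kron(np.eye(k), P)
hd = Hd.reshape(-1, order="F")

f_direct = (h.conj() @ M @ h).real
const = (hd.conj() @ M @ hd).real
cross = 2 * np.real(
    hd.conj() @ M
    @ sum(np.exp(1j * theta[i]) * np.kron(H2[i, :], H1[:, i]) for i in range(n))
)
quad = sum(
    np.exp(1j * (theta[q] - theta[p]))
    * (H2[p, :].conj() @ H2[q, :])        # y_p^* y_q
    * (H1[:, p].conj() @ P @ H1[:, q])    # x_p^* P x_q
    for p in range(n) for q in range(n)
).real
assert np.isclose(f_direct, const + cross + quad)
```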
With that, I suspect you can get a reasonably nice expression for the entries of the Hessian, and as an upper bound on the largest eigenvalue you can use the Frobenius norm of that Hessian.
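The Frobenius-norm bound works because for a Hermitian matrix $\|A\|_F^2 = \sum_i \lambda_i^2 \ge \lambda_{\max}^2$. A minimal sketch, again using a random Hermitian matrix as a stand-in for the Hessian:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
A = A + A.conj().T  # Hermitian stand-in for the Hessian

# ||A||_F needs only the entries of A, no eigendecomposition.
fro_bound = np.linalg.norm(A, "fro")
lam_max = np.abs(np.linalg.eigvalsh(A)).max()
assert lam_max <= fro_bound + 1e-12
```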