I have an optimization problem
$$
\begin{align}
\mathbf{w}^* = \arg\max_{\mathbf{w}} \quad & \sum_{d=1}^{D}\log\left(\frac{|\hat{\mathbf{f}}_{d}^{H}\mathbf{w}|^{2}+A_d}{|\hat{\mathbf{f}}_{d}^{H}\mathbf{w}|^{2}+B_d}\right)\\
\text{subject to} \quad & |\hat{\mathbf{h}}_{k}^{H}\mathbf{w}|^{2} \ge \tilde{\gamma}_k, \quad \forall k\in\{1,\dots,K\}\\
& |\mathbf{w}^{H}\mathbf{w}| = 1
\end{align}
$$
where $\mathbf{w}$ is the column vector to be optimized, $\mathbf{\hat{f}}_{d}$, $\hat{\mathbf{h}}_{k}$ are all column vectors, and $A_d, B_d$, and $\tilde{\gamma}_k$ are all scalar constants.
I am required to use the KKT conditions to solve the problem. I derived the conditions, but I am not able to solve them for $\mathbf{w}^*$ and the values of the KKT multipliers.
Any help is appreciated.
Since $\max \: [\ln(a)+\ln(b)] = \max \: \ln(ab)$ is equivalent to $\min \: \ln\left((ab)^{-1}\right)$, and since $\ln x$ is monotonic, I'd suggest reformulating the problem as
$$ \begin{align} & \min_{\mathbf{x}} \frac{\sum\limits_{i=1}^{2D} \sum\limits_j b_{ij}x_j^i}{\sum\limits_{i=1}^{2D} \sum\limits_j a_{ij}x_j^i} \\ \text{s.t.} \quad & \mathbf{x}^T Q_i \mathbf{x} + \mathbf{q}_i^T \mathbf{x} + c_i \ge 0 \qquad i=1...K \\ & \mathbf{x}^T Q_e \mathbf{x} + c_{K+1} = 0, \end{align} $$
where the newly introduced decision variables $\mathbf{x}$ comprise the real and imaginary parts of $\mathbf{w}$, and $a_{ij}$, $b_{ij}$, $c_i$, $\mathbf{q}_i$, $Q_i$ and $Q_e$ collect the coefficients arising from $\mathbf{f}$, $\mathbf{h}$, $A$, $B$ and $\gamma$. The numerator and denominator of the logarithm's argument are interchanged because the maximization has been turned into a minimization.
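As a quick sanity check on this change of variables (a numerical sketch of my own, not part of the question): with $\mathbf{x} = (\operatorname{Re}\mathbf{w}, \operatorname{Im}\mathbf{w})$, each term $|\hat{\mathbf{f}}^H\mathbf{w}|^2$ becomes a quadratic form $\mathbf{x}^T Q \mathbf{x}$ with a real symmetric $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # stands in for f_hat_d (or h_hat_k)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x = np.concatenate([w.real, w.imag])   # real decision vector x = (Re w, Im w)
a = np.concatenate([f.real, f.imag])   # Re(f^H w) = a^T x
b = np.concatenate([-f.imag, f.real])  # Im(f^H w) = b^T x
Q = np.outer(a, a) + np.outer(b, b)    # |f^H w|^2 = (a^T x)^2 + (b^T x)^2 = x^T Q x

assert np.isclose(np.abs(np.vdot(f, w)) ** 2, x @ Q @ x)
```

The same construction gives the $Q_i$ of the inequality constraints; the equality constraint uses $Q_e = I$, since $\mathbf{w}^H\mathbf{w} = \mathbf{x}^T\mathbf{x}$.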
Does that help?
Example. Assume $D = K = 2$ and $\mathbf{w} \in \mathbb{C}^2$ (i.e., $n = 2$). Then
$$ \begin{array}{l} \max\limits_{\mathbf{w}} \sum\limits_{d=1}^2 \log \underbrace{\left( \frac{|\hat{\mathbf{f}}_d^H \mathbf{w}|^2+A_d}{|\hat{\mathbf{f}}_d^H \mathbf{w}|^2+B_d} \right)}_{=:\mathcal{A}_d} = \max\limits_{\mathbf{w}} \: \log \mathcal{A}_1 + \log \mathcal{A}_2 = \max\limits_{\mathbf{w}} \log \mathcal{A}_1\mathcal{A}_2 \\ \max\limits_{\mathbf{w}} \log \mathcal{A}_1\mathcal{A}_2 = \min\limits_{\mathbf{w}} -\log \mathcal{A}_1\mathcal{A}_2 = \min\limits_{\mathbf{w}} \log \left(\mathcal{A}_1\mathcal{A}_2\right)^{-1} \end{array} $$ Since $\log x$ monotonically increases with $x$, finding the minimum of $\log x$ is equivalent to finding the smallest value for $x$ that meets the requirements of the constraints. Hence,
$$ \begin{align} & \min\limits_{\mathbf{w}} \log \left(\mathcal{A}_1\mathcal{A}_2\right)^{-1} = \min\limits_{\mathbf{w}} \left(\mathcal{A}_1\mathcal{A}_2\right)^{-1} \\ \text{s.t.} \quad & |\hat{\mathbf{h}}_k^H \mathbf{w}|^2 - \tilde{\gamma}_k \ge 0 \quad k = 1,2 \\ & |\mathbf{w}^H \mathbf{w}|^2 - 1 = 0. \end{align} $$
Introducing
$$ \mathbf{w} = \left( \begin{array}{c} w_1 \\ w_2 \end{array} \right) = \left( \begin{array}{c} w_{1r} + iw_{1i} \\ w_{2r} + iw_{2i} \end{array} \right), $$
$$ \mathbf{x} = \left( \begin{array}{c} w_{1r} \\ w_{1i} \\ w_{2r} \\ w_{2i} \end{array} \right), $$
and
$$ \hat{\mathbf{f}}_1 = \left( \begin{array}{c} f_{11r} + if_{11i} \\ f_{12r} + if_{12i} \end{array} \right), $$
$\hat{\mathbf{f}}_2, \hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2$ accordingly. Inserting yields
$$ \begin{align} & \min_{\mathbf{x}} \left( \frac{\left((\hat{f}_{11r}w_{1r}+\hat{f}_{11i}w_{1i}+\hat{f}_{12r}w_{2r}+\hat{f}_{12i}w_{2i})^2+(...)^2+B_1\right)\left((...)^2+(...)^2+B_2\right)}{\left((...)^2+(...)^2+A_1\right)\left((...)^2+(...)^2+A_2\right)} \right) \\ \text{s.t.} \quad & (\hat{h}_{k1r}w_{1r}+\hat{h}_{k1i}w_{1i}+\hat{h}_{k2r}w_{2r}+\hat{h}_{k2i}w_{2i})^2 - \tilde{\gamma}_k \ge 0 \quad k=1,2 \\ & |\mathbf{w}^H \mathbf{w}|^2 - 1 = 0. \end{align} $$
The placeholders $(...)$ stand for linear forms in $\mathbf{x}$, analogous to the ones written out (their squares are quadratic polynomials), and are used for the sake of readability. I won't give an explicit mapping from $(\hat{\mathbf{f}}_1, \hat{\mathbf{f}}_2, \hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2, \tilde{\gamma}_1, \tilde{\gamma}_2)$ to the coefficients I have used. Computer algebra systems can do the job.
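To illustrate that last remark, here is a small SymPy sketch (my own variable names, matching the example above) that expands $|\hat{\mathbf{f}}_1^H\mathbf{w}|^2$ into a real polynomial in the components of $\mathbf{x}$:

```python
import sympy as sp

# Real and imaginary parts of f_hat_1 and w, named as in the example above.
f11r, f11i, f12r, f12i = sp.symbols('f11r f11i f12r f12i', real=True)
w1r, w1i, w2r, w2i = sp.symbols('w1r w1i w2r w2i', real=True)

f1 = sp.Matrix([f11r + sp.I * f11i, f12r + sp.I * f12i])
w = sp.Matrix([w1r + sp.I * w1i, w2r + sp.I * w2i])

inner = (f1.H * w)[0]                              # f_hat_1^H w  (.H = conjugate transpose)
sq = sp.expand(sp.re(inner)**2 + sp.im(inner)**2)  # |f_hat_1^H w|^2, a quadratic in x

# Spot check against a direct evaluation with f_1 = (1+2i, 3+4i), w = (5+6i, 7+8i):
num = sq.subs({f11r: 1, f11i: 2, f12r: 3, f12i: 4, w1r: 5, w1i: 6, w2r: 7, w2i: 8})
assert num == 4964  # |(1-2i)(5+6i) + (3-4i)(7+8i)|^2 = |70 - 8i|^2
```

Reading the coefficients of `sq` (e.g. with `sp.Poly(sq, w1r, w1i, w2r, w2i)`) gives the explicit mapping to the $a_{ij}$, $b_{ij}$.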
The KKT $1^{\text{st}}$-order necessary conditions for $\mathbf{x}^*$ to be a local solution to the optimization problem are
$$ \begin{array}{l} \nabla_x \left( \sum\limits_{i=1}^3 \lambda_i c_i - J \right) = \mathbf{0} \\ \lambda_i c_i = 0 \quad \forall i = 1,...,3, \end{array} $$
where $J$ is the cost function, $c_1, c_2$ are the inequality constraints, and $c_3$ is the equality constraint. Solving this system of nonlinear equations, together with primal feasibility ($c_1, c_2 \ge 0$, $c_3 = 0$), returns candidates for the local solutions of the optimization problem. Note that the candidates also need to satisfy $\lambda_i \ge 0$ for the indices of the inequality constraints.
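In practice one would therefore solve the system numerically. A minimal sketch with SciPy's SLSQP solver (a random instance with illustrative values of my own for $\hat{\mathbf{f}}_d$, $\hat{\mathbf{h}}_k$, $A_d$, $B_d$, $\tilde{\gamma}_k$; SLSQP enforces the KKT conditions internally):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, D, K = 2, 2, 2

# Illustrative problem data (not from the question): row d of F is f_hat_d, row k of H is h_hat_k.
F = rng.standard_normal((D, n)) + 1j * rng.standard_normal((D, n))
H = rng.standard_normal((K, n)) + 1j * rng.standard_normal((K, n))
A, B = np.array([2.0, 3.0]), np.array([0.5, 1.0])
gamma = np.array([0.01, 0.01])   # small thresholds, so the instance stays feasible

def to_w(x):                     # x = (Re w, Im w)  ->  complex w
    return x[:n] + 1j * x[n:]

def cost(x):                     # (A_1 ... A_D)^{-1}, the objective of the minimization
    p = np.abs(F.conj() @ to_w(x)) ** 2
    return float(np.prod((p + B) / (p + A)))

cons = [{'type': 'ineq', 'fun': lambda x, k=k: np.abs(H[k].conj() @ to_w(x)) ** 2 - gamma[k]}
        for k in range(K)]
cons.append({'type': 'eq', 'fun': lambda x: x @ x - 1.0})  # w^H w = x^T x = 1

x0 = np.ones(2 * n) / 2          # unit-norm starting point
res = minimize(cost, x0, method='SLSQP', constraints=cons)
w_star = to_w(res.x)
```

`res.x` is the candidate $\mathbf{x}^*$; the multipliers $\lambda_i$ can then be recovered from the stationarity condition by solving a small linear least-squares problem in the constraint gradients.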
The existence of a closed-form solution would require closed-form roots of polynomials of arbitrary degree, which, to my humble knowledge, is not available (by the Abel–Ruffini theorem, no general solution in radicals exists for degree five and above).