I recently encountered the following optimization problem: $$ \max_{z_i} \frac{\Big|\sum_{i=1}^n a_i z_i \Big|^2}{e + \Big|\sum_{i=1}^n b_i z_i\Big|^2}, \qquad \text{such that } |z_i| \leq 1, \quad \forall 1 \leq i \leq n. $$
where $z_i \in \mathbb C$ are the variables; $a_i, b_i \in \mathbb C$ and $e \in \mathbb R$, $e > 0$, are given constants.
If it helps, we may assume $|a_i| = a \neq b = |b_i|$ for all $i$. Is there any way to solve this problem, or find bounds? I believe the problem is not convex.
I've tried substituting $z_k = |z_k| e^{i\theta_k}$ and expanding, but that seems to complicate the problem even more.
It doesn't look like one can find an explicit formula, so I presume that you will be content with a decently fast numerical algorithm for computing an approximate solution. The approach below is actually based on some duality considerations in linear programming but I'll make a few shortcuts and present it in the simplest form omitting the explanations of how one can derive it from general principles.
Since you are willing to assume that $|a_i|=a$, $|b_i|=b$ for all $i$, let's rescale by dividing the numerator by $a^2$ and the denominator by $b^2$ to get $|a_i|=|b_i|=1$ for all $i$ (with $e$ replaced by $e/b^2$, of course). Furthermore, changing the variables to $b_iz_i$ (and the coefficients $a_i$ to $a_i\bar b_i$), we reduce the problem to finding the maximum of $$ \frac{\left|\sum_i a_iz_i\right|^2}{e+\left|\sum_i z_i\right|^2}\,. $$ Note that these reductions are not really needed: we could treat the general case in the same way, but all the formulas would get more complicated.
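As a quick numerical sanity check (a sketch; the variable names are mine), the reduction only multiplies the objective by the constant factor $b^2/a^2$ at every feasible point, so it does not move the maximizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a_mod, b_mod, e = 5, 2.0, 3.0, 0.7        # |a_i| = a, |b_i| = b, e > 0

a = a_mod * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
b = b_mod * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
z = rng.uniform(0, 1, n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))

# original objective at a random feasible point
orig = abs(np.sum(a * z)) ** 2 / (e + abs(np.sum(b * z)) ** 2)

# reduced problem: unit-modulus coefficients a_i*conj(b_i)/(a*b),
# new variables w_i = b_i*z_i/b, and e replaced by e/b^2
a_red = a * np.conj(b) / (a_mod * b_mod)
w = b * z / b_mod
red = abs(np.sum(a_red * w)) ** 2 / (e / b_mod ** 2 + abs(np.sum(w)) ** 2)

print(red / orig)                             # the constant b^2/a^2
```

Note that $|w_i| = |z_i|$, so the feasible set is unchanged by the change of variables.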
Now let us ask an auxiliary question. Suppose we know that $\left|\sum_i z_i\right|=s$. What is the maximum $F(s)$ of $\left|\sum_i a_iz_i\right|$? If we can answer this question, our optimization problem reduces to maximizing $\frac{F(s)^2}{e+s^2}$, a one-dimensional problem.
Note that the values of $s$ we are interested in run from $0$ to $s_0=\left|\sum_i\bar a_i\right|$: we can always take $z_i=\bar a_i$, which maximizes the numerator and gives exactly $s_0$ in the denominator, so considering values $s>s_0$ makes no sense. Thus, let us assume below that $s<s_0$ (clearly, $F(s_0)=n$).
Let $\zeta\in\mathbb C$. Since $|z_i|\le 1$, we always have the trivial bound $$ \left|\sum_i a_iz_i\right|\le |\zeta|\left|\sum_i z_i\right|+\sum_i|a_i-\zeta||z_i|\le s|\zeta|+\sum_i|a_i-\zeta|=G_s(\zeta)\,. $$ Thus, $F(s)\le\min_{\zeta}G_s(\zeta)$.
$G_s$ is a convex function on $\mathbb C$ tending to $+\infty$ at infinity, so it attains a minimum. Finding it numerically is not a big hassle (you can try (conjugate) gradient descent, differential evolution with 5–6 points, or your other favorite method for convex minimization on the plane), though it still requires some care because $G_s$ is not differentiable at the points $a_i$. Now let $\zeta$ be the point of minimum.
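For concreteness, here is one way to carry out that minimization numerically (a sketch using SciPy's Nelder–Mead, which tolerates the kinks at $\zeta=a_i$; the helper name is mine):

```python
import numpy as np
from scipy.optimize import minimize

def minimize_G(a, s, zeta0=0.1 + 0.1j):
    """Minimize G_s(zeta) = s*|zeta| + sum_i |a_i - zeta| over zeta in C.

    We search over R^2 via zeta = x + i*y; Nelder-Mead copes with the
    non-smooth points zeta = a_i.  Returns (zeta_min, G_s(zeta_min)).
    """
    a = np.asarray(a, dtype=complex)

    def G(xy):
        zeta = xy[0] + 1j * xy[1]
        return s * abs(zeta) + np.abs(a - zeta).sum()

    res = minimize(G, [zeta0.real, zeta0.imag], method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12})
    return res.x[0] + 1j * res.x[1], res.fun
```

A convenient test case: with all $a_i=1$ and $s<n$, the minimum is clearly attained at $\zeta=1$ with value $s$.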
Notice that $\zeta\ne 0$: the gradient at $0$ of the sum part has absolute value $\left|\sum_i\bar a_i\right|=s_0>s$, so deviating from $0$ in the direction opposite to that gradient is beneficial. Assume that $\zeta$ is not one of the $a_i$. Then, at the minimum, $$ \nabla G_s(\zeta)=s\frac{\bar\zeta}{|\zeta|}+\sum_i\frac{\overline{\zeta-a_i}}{|\zeta-a_i|}=0\,, $$ so, if we put $z_i=-\frac{\overline{\zeta-a_i}}{|\zeta-a_i|}$ (note that $|z_i|=1$), we shall have $$ \sum_i z_i=s\frac{\bar\zeta}{|\zeta|} $$ and $$ \sum_i a_iz_i=\zeta\sum_i z_i+\sum_i (a_i-\zeta)z_i=s|\zeta|+\sum_i|a_i-\zeta|=G_s(\zeta)\,, $$ so this example shows that $F(s)= G_s(\zeta)$.
If the minimum is attained at some $a_j$, then the rest of $G_s$ is still differentiable there and we must have $\left|s\frac{\bar\zeta}{|\zeta|}+\sum_{i:i\ne j}\frac{\overline{\zeta-a_i}}{|\zeta-a_i|}\right|\le 1$ (otherwise it would pay to move away from $a_j$), so we can take $z_j$ to be that sum, keep the same rule for the other $z_i$, and get the same result. This is the funny case in which one variable may end up strictly inside the unit disk.
The outcome is that we always have $F(s)=\min_\zeta G_s(\zeta)$, that minimum can be computed in decent time (minimizing a weighted sum of distances to given points is the well-studied Fermat–Weber problem, for which Weiszfeld's algorithm and its refinements are available), and once we find the minimizer $\zeta$, we can recover the corresponding $z_i$ by explicit formulae.
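A sketch of that recovery step (hypothetical helper name; the generic case $\zeta\ne a_j$ uses the explicit formula, and the coincidence case is handled as in the previous paragraph):

```python
import numpy as np

def recover_z(a, s, zeta, tol=1e-9):
    """Given the minimizer zeta of G_s, build z_i with |sum z_i| = s and
    sum a_i*z_i = G_s(zeta), following the construction in the text."""
    a = np.asarray(a, dtype=complex)
    d = zeta - a
    z = np.empty_like(a)
    generic = np.abs(d) > tol
    z[generic] = -np.conj(d[generic]) / np.abs(d[generic])
    if not generic.all():
        # zeta coincides with some a_j: that z_j absorbs the residual so
        # that sum z_i = s*conj(zeta)/|zeta|; |z_j| <= 1 by optimality.
        idx = np.flatnonzero(~generic)
        z[idx] = 0
        z[idx[0]] = s * np.conj(zeta) / abs(zeta) - z[generic].sum()
    return z
```

For $a_{1,2}=e^{\pm i\theta}$ the minimizer is real by symmetry and can be computed in closed form ($\zeta=\cos\theta-\frac{(s/2)\sin\theta}{\sqrt{1-s^2/4}}$), which gives an easy way to check that the recovered $z_i$ satisfy $\left|\sum_i z_i\right|=s$ and $\sum_i a_iz_i=G_s(\zeta)$.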
Note also that $G_s(\zeta)$ is affine in $s$, so $F(s)$ is concave (the minimum of a convex combination of two functions is at least the convex combination of the individual minima). On the other hand, $s\mapsto\sqrt{e+s^2}$ is convex and positive. Thus the ratio $\frac{F(s)}{\sqrt{e+s^2}}$ and, thereby, the ratio $\frac{F(s)^2}{e+s^2}$ is unimodal on $[0,s_0]$ (I wonder if this simple fact is still taught to calculus or numerical analysis students). Thus, we can find the maximum of this ratio on $[0,s_0]$ by standard golden-section (ternary) search for unimodal functions (or, if you are in the mood for it, you can try differential evolution again, though I don't think it will give you much advantage here).
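Putting the pieces together (a sketch with names of my choosing: `F` recomputes $\min_\zeta G_s(\zeta)$ by Nelder–Mead, and the outer loop is plain golden-section search, which is legitimate precisely because the ratio is unimodal on $[0,s_0]$):

```python
import numpy as np
from scipy.optimize import minimize

def F(a, s):
    """F(s) = min over zeta of [ s*|zeta| + sum_i |a_i - zeta| ]."""
    G = lambda xy: (s * np.hypot(xy[0], xy[1])
                    + np.abs(a - (xy[0] + 1j * xy[1])).sum())
    return minimize(G, [0.3, 0.3], method="Nelder-Mead",
                    options={"xatol": 1e-12, "fatol": 1e-12}).fun

def maximize_ratio(a, e, iters=60):
    """Golden-section search for the max of F(s)^2/(e + s^2) on [0, s_0]."""
    a = np.asarray(a, dtype=complex)
    r = lambda s: F(a, s) ** 2 / (e + s ** 2)
    phi = (np.sqrt(5) - 1) / 2
    lo, hi = 0.0, abs(np.conj(a).sum())          # the interval [0, s_0]
    x1, x2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    f1, f2 = r(x1), r(x2)
    for _ in range(iters):
        if f1 < f2:                              # maximum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + phi * (hi - lo)
            f2 = r(x2)
        else:                                    # maximum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - phi * (hi - lo)
            f1 = r(x1)
    s = (lo + hi) / 2
    return s, r(s)
```

For instance, with all $a_i=1$ we have $F(s)=s$ on $[0,3]$, so the ratio $s^2/(e+s^2)$ is increasing and the search should converge to the endpoint $s_0=n$.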
The moral is that the problem is solvable in a decent time even for large $n$ unless you want it superfast, in which case one may want to think more.
I hope this helps somewhat. Feel free to ask questions if anything is unclear.