Suppose that I want to maximize a quantity \begin{equation*}\frac{f_1+f_2}{f_3+f_4}\end{equation*} where $f_1,f_2,f_3,f_4\in\mathbb{R}_{\geq 0}$ have to satisfy some given conditions.
Suppose that I know $f'_1,f'_2,f'_3,f'_4\in\mathbb{R}_{\geq 0}$ that satisfy my conditions and such that:
- $\frac{f'_1}{f'_3}$ is the maximum value of $\frac{f_1}{f_3}$ under my conditions;
- $\frac{f'_2}{f'_4}$ is the maximum value of $\frac{f_2}{f_4}$ under my conditions.
Can I say that \begin{equation*}\frac{f'_1+f'_2}{f'_3+f'_4}\end{equation*} is the maximum value of \begin{equation*}\frac{f_1+f_2}{f_3+f_4}\end{equation*} under my conditions? If yes, why?
My other answer gives a counterexample; in this answer I give an alternative approach for such maximizations. This is related to a set of "renewal optimization" problems I studied in this paper (my answer here is related to Lemma 5):
http://ee.usc.edu/stochastic-nets/docs/renewal-optimization.pdf
Suppose we have the general problem: \begin{align} \mbox{Maximize:} \quad & \frac{f_1+f_2}{f_3+f_4}\\ \mbox{Subject to:} \quad & (f_1, f_3) \in A \\ & (f_2, f_4) \in B \end{align} where $A$ and $B$ are given subsets of $\mathbb{R}^2$. For simplicity, assume $A$ and $B$ are compact and that all vectors in $B$ have strictly positive components (this ensures a maximum exists and avoids division by zero). Let $(f_1^*, f_3^*)\in A, (f_2^*, f_4^*)\in B$ be a (possibly non-unique) optimum for the above problem, and define $\theta^*$ as the maximum objective value: $$ \theta^* = \frac{f_1^*+f_2^*}{f_3^*+f_4^*}$$ For each $\theta \in \mathbb{R}$, define the function: $$ h(\theta) = \max_{(f_1, f_3) \in A, (f_2,f_4)\in B}[(f_1+f_2)-\theta(f_3+f_4)]$$ The useful observation is that for each given $\theta \in \mathbb{R}$, the function $h(\theta)$ is easy to compute by separately maximizing $(f_1-\theta f_3)$ over all $(f_1,f_3) \in A$ and $(f_2-\theta f_4)$ over all $(f_2,f_4)\in B$.
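To make the separable structure concrete, here is a minimal sketch in Python. The sets `A` and `B` below are hypothetical finite example sets (not from the paper); for finite sets the two inner maximizations are just two independent `max` calls:

```python
# Hypothetical finite example sets of (f1, f3) and (f2, f4) pairs.
# Any representation supporting maximization of f - theta*g would do.
A = [(1.0, 2.0), (3.0, 5.0), (2.0, 2.5)]  # candidate (f1, f3) pairs
B = [(2.0, 1.0), (4.0, 3.0), (1.0, 0.5)]  # candidate (f2, f4) pairs

def h(theta):
    """Compute h(theta) via two separate maximizations:
    max over A of (f1 - theta*f3) plus max over B of (f2 - theta*f4)."""
    best_A = max(f1 - theta * f3 for (f1, f3) in A)
    best_B = max(f2 - theta * f4 for (f2, f4) in B)
    return best_A + best_B
```

The point is that the coupled maximization over $A \times B$ decomposes: the objective $(f_1+f_2)-\theta(f_3+f_4)$ is a sum of a term depending only on $(f_1,f_3)$ and a term depending only on $(f_2,f_4)$.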
Claim 1:
- $h(\theta)>0$ if and only if $\theta < \theta^*$;
- $h(\theta) < 0$ if and only if $\theta > \theta^*$;
- $h(\theta) = 0$ if and only if $\theta = \theta^*$.
This claim means that if we can bracket the optimal value $\theta^*$ between known upper and lower bounds, then we can perform a bisection search: testing different $\theta$ values yields new upper and lower bounds on $\theta^*$, zeroing in on the optimal $\theta^*$ exponentially fast. Each step of the search requires only a simple separable maximization.
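The bisection search can be sketched as follows. The sets `A` and `B` are the same hypothetical example sets as before; by Claim 1, the sign of $h(\theta)$ tells us on which side of $\theta^*$ a test value lies:

```python
# Hypothetical finite example sets (same as the earlier sketch).
A = [(1.0, 2.0), (3.0, 5.0), (2.0, 2.5)]
B = [(2.0, 1.0), (4.0, 3.0), (1.0, 0.5)]

def h(theta):
    # Two separate maximizations, as in the definition of h(theta).
    return (max(f1 - theta * f3 for (f1, f3) in A)
            + max(f2 - theta * f4 for (f2, f4) in B))

def bisect_theta(lo, hi, tol=1e-9):
    """Bisection on theta. Caller must supply a bracket with
    h(lo) >= 0 >= h(hi), so that lo <= theta* <= hi throughout."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid   # h(mid) > 0 means mid < theta* (Claim 1)
        else:
            hi = mid   # h(mid) <= 0 means mid >= theta*
    return 0.5 * (lo + hi)
```

For these example sets, exhaustive enumeration shows $\theta^* = 4/3.5 = 8/7$, and `bisect_theta(0.0, 2.0)` converges to that value; each iteration halves the bracket, which is the "exponentially fast" convergence mentioned above.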
Now let $T_{min}$ be the minimum value of $f_3+f_4$ over all $(f_1,f_3)\in A$ and all $(f_2,f_4)\in B$, and note by assumption that $T_{min}>0$.
Claim 2: Fix $\theta \in \mathbb{R}$. Let $(\hat{f}_1(\theta), \hat{f}_3(\theta))\in A, (\hat{f}_2(\theta),\hat{f}_4(\theta))\in B$ be maximizers corresponding to the definition of $h(\theta)$. Then: $$ \frac{\hat{f}_1(\theta)+\hat{f}_2(\theta)}{\hat{f}_3(\theta) + \hat{f}_4(\theta)} \leq \theta^* \leq \frac{\hat{f}_1(\theta)+\hat{f}_2(\theta)}{\hat{f}_3(\theta) + \hat{f}_4(\theta)} + \frac{|h(\theta)|}{T_{min}} $$ This means that if we find a value $\theta$ such that $h(\theta) \approx 0$, then the greedy maximizations to find $h(\theta)$ result in a solution that is a close approximation to the solution of the original problem.
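Claim 2 can also be checked numerically. The sketch below (again using the hypothetical finite example sets) extracts the maximizers defining $h(\theta)$ for a given $\theta$ and forms the ratio and error bound from the claim:

```python
# Hypothetical finite example sets (same as the earlier sketches).
A = [(1.0, 2.0), (3.0, 5.0), (2.0, 2.5)]
B = [(2.0, 1.0), (4.0, 3.0), (1.0, 0.5)]

# T_min = min of f3+f4; for a product set this separates into two mins.
T_min = min(f3 for _, f3 in A) + min(f4 for _, f4 in B)

def argmax_h(theta):
    """Return the ratio achieved by the h(theta) maximizers and h(theta)."""
    f1h, f3h = max(A, key=lambda p: p[0] - theta * p[1])  # maximizer in A
    f2h, f4h = max(B, key=lambda p: p[0] - theta * p[1])  # maximizer in B
    h_val = (f1h - theta * f3h) + (f2h - theta * f4h)
    ratio = (f1h + f2h) / (f3h + f4h)
    return ratio, h_val

theta = 1.1  # a guess reasonably close to theta*
ratio, h_val = argmax_h(theta)
# Claim 2 asserts: ratio <= theta* <= ratio + |h_val| / T_min
```

Here $\theta^* = 8/7 \approx 1.1429$ for the example sets, and the interval $[\,$`ratio`$,\ $`ratio` $+ |h(\theta)|/T_{min}]$ indeed contains it, illustrating that a $\theta$ with $h(\theta) \approx 0$ yields a near-optimal solution.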
The proofs of Claims 1 and 2 are similar and are related to the proofs I gave in the above paper. Here is a quick proof of Claim 2:
Proof (Claim 2): The first inequality holds by definition of $\theta^*$ as the maximum of the desired ratio. To prove the second inequality, define \begin{align} n &= \hat{f}_1(\theta) + \hat{f}_2(\theta)\\ d &= \hat{f}_3(\theta) + \hat{f}_4(\theta) \\ n^* &= f_1^* + f_2^*\\ d^* &= f_3^* + f_4^* \end{align} Notice that $d\geq T_{min}>0, d^*\geq T_{min}>0$, and by definitions: $$ n - \theta d = h(\theta) \geq n^* - \theta d^* $$ So $$ \theta^* = \frac{n^*}{d^*} \leq \theta + \frac{h(\theta)}{d^*} = \frac{n-h(\theta)}{d} + \frac{h(\theta)}{d^*} = \frac{n}{d} - \frac{h(\theta)}{d} + \frac{h(\theta)}{d^*}$$ Since $d \geq T_{min}$ and $d^* \geq T_{min}$, the term $-\frac{h(\theta)}{d} + \frac{h(\theta)}{d^*}$ is at most $|h(\theta)|/T_{min}$ in both cases $h(\theta)\geq 0$ and $h(\theta)<0$, so the right-hand side is at most $n/d + |h(\theta)|/T_{min}$. $\Box$