Approximating minimum of $f(x)+g(x)$ by solving for $f(x)=g(x)$


I'm trying to make sense of the following argument from a paper I'm reading.

The author has the following upper bound: $$R(T) \le N + O\left(T\sqrt{\frac{\log T}{N}}\right)$$

They then state the following:

"Recall that we can select any value for $N$ so long as it is known to the algorithm before the first round. So we can choose $N$ so as to (approximately) minimize the right-hand side. Noting that the two summands are, resp., monotonically increasing and monotonically decreasing in $N$, we set $N$ so that they are (approximately) equal."

So, the general argument seems to be: given two functions $f(x)$ and $g(x)$, where $f(x)$ is monotonically increasing and $g(x)$ is monotonically decreasing, if we can find a solution $y$ to $f(y)=g(y)$, then $\min_x\{f(x)+g(x)\} \approx f(y)+g(y)$.

But I see no reason why this should be true. In fact, if $g$ decreases faster than $f$ increases then $\min_x \{f(x) + g(x)\} \rightarrow - \infty$, right? There has to be something missing here, or something I'm not seeing.

2 Answers

Best Answer

If you restrict attention to nonnegative monotone functions, taking $f,g:\mathbb R\to\mathbb R_{\geq0}$ with $f$ nondecreasing and $g$ nonincreasing, suppose we find some $y$ such that $f(y)=g(y)$. Set $M := f(y)+g(y)$ as our "approximate" minimum of the sum.

Since $f(y)=g(y)$, we have $f(y)=g(y)=\frac M2$. By the nonnegativity assumption, for $x>y$ we get $f(x)+g(x)\geq f(x)\geq f(y) = \frac M2$, and likewise for $x<y$ using $g$. So $\min(f+g)\geq\frac M2$. I'm not sure what the context of this problem is, but this shows that using a solution of $f(y)=g(y)$ will always provide a $2$-approximation to the minimum of $f+g$. Since they only speak of an "approximate" minimum, maybe this is sufficient for their purposes?
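A quick numerical sketch of this $2$-approximation bound; the particular functions below are hypothetical examples chosen for illustration, not from the paper:

```python
# Sketch: f nondecreasing, g nonincreasing, both nonnegative on the
# search interval; compare the value at the crossing f(y) = g(y)
# against the true grid minimum of f + g.
def f(x):
    return x * x        # nondecreasing for x >= 0

def g(x):
    return 5.0 / x      # nonincreasing and nonnegative for x > 0

xs = [0.1 + i * (10.0 - 0.1) / 99_999 for i in range(100_000)]

true_min = min(f(x) + g(x) for x in xs)
y = min(xs, key=lambda x: abs(f(x) - g(x)))   # approximate crossing point
approx = f(y) + g(y)

# The crossing-point value is within a factor of 2 of the true minimum.
assert true_min <= approx <= 2 * true_min
```

Here the crossing is at $x^3=5$, while the true minimizer solves $2x = 5/x^2$; the two values differ by only a few percent, well within the guaranteed factor of $2$.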

In fact, a $2$-approximation is the best you can do this way: take $f(x)=c$ and the nonincreasing step function $g$ with $g(x)=c$ for $x\leq0$ and $g(x)=0$ for $x>0$. Then $\min(f+g)=c$, but any crossing point $y\leq0$ gives $f(y)+g(y)=2c$.

Moreover, as you mentioned, dropping the nonnegativity constraint makes things even worse. For a concrete example, take $f=0$ and $g(x)=-x$: this approximation outputs $0$, while the true infimum is $-\infty$.

Answer

You are right that the general argument you are inferring is not true in general. However, I think I can see where the author's comment comes from.

For a function of the form $R(x) = ax + b/x$, with $a$ and $b$ positive constants, the extremum occurs where $a - b/x^2 = 0$, i.e. where $ax = b/x$. In other words, for this family of functions, the extremum occurs exactly when $f(x) = g(x)$, with $f(x) = ax$ and $g(x) = b/x$.
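Concretely, solving $a - b/x^2 = 0$ gives $x^* = \sqrt{b/a}$, and at that point

$$f(x^*) = a\sqrt{b/a} = \sqrt{ab} = \frac{b}{\sqrt{b/a}} = g(x^*), \qquad R(x^*) = 2\sqrt{ab},$$

so for this family, balancing the two terms locates the extremum exactly, not just approximately.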

If we generalize this to powers of $x$, $f(x) = ax^n$ and $g(x) = b/x^m$, the extremum is located at the solution of $n f(x) = m g(x)$. So the argument cannot hold exactly in general: it holds only for $m=n$. For more general functions, as you said, an extremum need not even exist, as the trivial example $f(x) = x$, $g(x) = -2x$ shows.
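For completeness, the balancing condition for the power-law case follows by differentiating:

$$\frac{d}{dx}\left(ax^{n} + bx^{-m}\right) = nax^{n-1} - mbx^{-m-1} = 0 \;\Longleftrightarrow\; nax^{n} = mbx^{-m},$$

i.e. $n f(x) = m g(x)$ after multiplying through by $x$.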

Your concrete functions correspond to the $n=1$, $m=1/2$ case, with the extremum at $f(x) = g(x)/2$. However, since your actual expression involves a big $O$, you might as well forget about constant prefactors, and up to constants the extremum does occur at $f(x) = g(x)$ for these kinds of functions. I would hence interpret the author's comment as a rule of thumb for an approximate extremum: it is simple, and it does about as well as you can hope for with the given information. (To conclude that the extremum is a minimum, you need more information than is currently given.)
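For what it's worth, carrying out the balancing for the bound in your question (ignoring the constants hidden in the big $O$): setting the two summands equal,

$$N = T\sqrt{\frac{\log T}{N}} \;\Longrightarrow\; N^{3/2} = T\sqrt{\log T} \;\Longrightarrow\; N = T^{2/3}(\log T)^{1/3},$$

at which point both terms equal $T^{2/3}(\log T)^{1/3}$, so the bound becomes $R(T) \le O\!\left(T^{2/3}(\log T)^{1/3}\right)$, which is presumably what the author goes on to derive.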