Understanding polynomial-growth condition


I'm reading Tom Leighton's 1996 paper: "Notes on Better Master Theorems for Divide-and-Conquer Recurrences". It contains a definition like this (I simplified slightly):

We say that $g(x)$ satisfies the polynomial-growth condition if for all $b \in (0,1)$ there exist positive constants $c_1, c_2$ such that for all $x \geq 1$ and all $u \in [bx,x]$, $$c_1g(x) \leq g(u) \leq c_2g(x)$$
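To get a feel for the definition: any monomial $g(x) = x^k$ satisfies it with $c_1 = b^k$ and $c_2 = 1$, since $g(u)/g(x) = (u/x)^k \in [b^k, 1]$ for $u \in [bx, x]$, whereas $g(x) = 2^x$ fails, since $g(bx)/g(x) = 2^{-(1-b)x} \to 0$, so no positive constant $c_1$ works. A rough numeric sanity check (my own sketch, not from the paper; `ratio_range` is a hypothetical helper) illustrating both cases:

```python
import math

def ratio_range(g, b, xs, samples=50):
    """For each x, sample u in [b*x, x] and track min/max of g(u)/g(x).

    The polynomial-growth condition holds (for this b) only if these
    ratios stay inside some fixed interval [c1, c2] with c1 > 0.
    """
    lo, hi = math.inf, -math.inf
    for x in xs:
        for i in range(samples + 1):
            u = b * x + (x - b * x) * i / samples
            r = g(u) / g(x)
            lo, hi = min(lo, r), max(hi, r)
    return lo, hi

xs = [1, 10, 100, 1000]
b = 0.5

# g(x) = x^2: ratios stay within [b^2, 1] = [0.25, 1] for every x.
lo_poly, hi_poly = ratio_range(lambda x: x * x, b, xs)

# g(x) = 2^x: the minimum ratio 2^{-(1-b)x} collapses toward 0
# as x grows, so no positive lower constant c1 can exist.
lo_exp, hi_exp = ratio_range(lambda x: 2.0 ** x, b, xs)
```

Here `lo_poly` stays at $b^2 = 0.25$ no matter how large the sampled $x$ are, while `lo_exp` shrinks without bound as larger $x$ are added.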

My questions:

  1. For non-negative $g$, suppose the condition $u \in [bx,x]$ is replaced by $u \in [\max(bx,m),x]$ for some constant $m > 0$ independent of $x$. Is this modified polynomial-growth condition on $g$ equivalent to $g \in \Omega(1) \cap O(P)$ for some polynomial $P$?
  2. Does the polynomial-growth condition (original or modified), or the condition $g \in \Omega(1) \cap O(P)$ from 1., imply that for all $r \in \mathbb R$ and all $x \geq 1$ the integral $\int_1^x \frac{g(u)}{u^r}\,du$ exists and is finite for non-negative $g$?
  3. Is the polynomial-growth condition, modified as in 1. (which weakens it), still sufficient to prove Theorems 1 and 2 of the paper?