Growth rate of a function vs rate of change of a function


In a lot of the literature, authors refer to the growth rate of a function, but nobody says what it really is.

Currently I have to study math in English, which is not my native language. That's why it is almost impossible for me to find a clear definition and explanation of growth rate.

At first glance I thought that rate of change was the same thing as growth rate.

But lately I figured out that the rate of change is the derivative of a function. For example, if we have the function

$$f(n) = n^2 + 2n$$

then the rate of change of $f(n)$ will be:

$$f'(n) = 2n + 2$$
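As a quick sanity check (my own sketch, not part of any textbook definition), the derivative above can be verified numerically with a central difference quotient:

```python
# Numerically check that f'(n) = 2n + 2 for f(n) = n^2 + 2n.
def f(n):
    return n**2 + 2*n

def derivative(func, x, h=1e-6):
    # central difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

for x in [0.0, 1.0, 5.0]:
    approx = derivative(f, x)
    exact = 2*x + 2
    assert abs(approx - exact) < 1e-4
```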

I have read tons of articles, and now growth rate means something like this to me:

If we want to compare two functions $T_1(n)$ and $T_2(n)$ as $n$ tends toward infinity (for example using L'Hospital's rule), the function that approaches infinity faster has the higher growth rate.

So growth rate is not a number or a term in a function. It is just a relation between functions that shows which function approaches infinity faster than the others.
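To illustrate this comparison idea concretely (with example functions I picked myself, not from any source), one can watch the ratio $T_1(n)/T_2(n)$ for growing $n$; if the ratio grows without bound, $T_1$ has the higher growth rate:

```python
import math

# Hypothetical example functions: T1 grows faster than T2.
def T1(n):
    return n**2

def T2(n):
    return n * math.log(n)

ratios = [T1(n) / T2(n) for n in (10, 100, 1000, 10000)]
# the ratio keeps increasing, mirroring lim T1(n)/T2(n) = infinity
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```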

My question is: Are my suggestions about growth rate and rate of change correct?

Best answer:

Yes, this sounds roughly correct. I am not sure whether "functions have the same growth rate" has a precise, universally accepted meaning -- rather, it means that they are in the same equivalence class of some relation, where the relation depends on the application.

For example, in computer science, when talking about time/memory usage as a function of the size of the input, we usually abstract from factors bounded by a constant, so $f$ and $g$ have the same growth rate (denoted as $f = \Theta(g)$) if $cg \leq f \leq Cg$ for some constants $c,C>0$ and parameters large enough. See: https://en.wikipedia.org/wiki/Big_O_notation This lets us abstract any degree-$k$ polynomial as $\Theta(n^k)$. Such constant factors are hard to reason about, and depend on the hardware anyway, so there is no point in performing 10x as much work to compute them; on the other hand, abstracting from such minor differences leads to a beautiful theory, and the use of asymptotics lets us abstract from irrelevant details and thus makes many mathematical proofs much easier.
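The $\Theta$ definition above can be checked directly for a small example (the functions and the constants $c, C$ below are my own choices, not from the answer):

```python
# Sketch of the Theta definition: f = Theta(g) if
# c*g(n) <= f(n) <= C*g(n) for all large enough n.
# Here f(n) = 3n^2 + 5n and g(n) = n^2, with c = 3 and C = 4.
def f(n):
    return 3*n**2 + 5*n

def g(n):
    return n**2

c, C = 3, 4
# the bounds hold for all n >= 5, since 5n <= n^2 when n >= 5
assert all(c * g(n) <= f(n) <= C * g(n) for n in range(5, 1000))
```

Note that the witnesses $c, C$ only need to work from some threshold onward; for $n < 5$ the upper bound here fails, which is exactly why the definition says "for parameters large enough".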

In some situations we would use finer or coarser equivalence classes -- for example, we could abstract from logarithmic factors (the $\tilde{O}$ notation), consider $2^n$ and $3^n$ to be the same ($3^n = 2^{\Theta(n)}$), or only abstract from summands bounded by a constant ($f(n) = O(1) + g(n)$).
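For instance, the identity behind $3^n = 2^{\Theta(n)}$ is just a change of base:

$$3^n = \left(2^{\log_2 3}\right)^n = 2^{n \log_2 3}, \qquad \text{so } 3^n = 2^{\Theta(n)} \text{ since } 1 \le \log_2 3 \le 2.$$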