I don't think I'm understanding Big O notation. I looked up its definition:
T(n) is O(f(n)) if there exist constants c, n0 > 0 such that T(n) <= c * f(n) for all n > n0.
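For instance (a toy example I made up to check the definition, not from the review sheet): T(n) = 3n + 5 is O(n), since choosing c = 4 and n0 = 5 gives 3n + 5 <= 4n for all n > 5.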
So for these review problems, I have to show f(n) <= c * g(n) for some choice of constants.
I'm just not sure how to apply this to an actual problem.
For example, take this problem, where log means log base 2 and (as with the rest of the sheet) we're given that f(n) is O(g(n)):
log f(n) is O(log g(n))
I have to either prove the statement or give a counterexample that disproves it.
Based on the definition above, I started by taking the log of both sides of the premise f(n) <= c * g(n):
log f(n) <= log(c * g(n))
Assuming I'm placing the constant in the correct location (I wasn't sure about that), I then do this:
log f(n) <= log(c) + log(g(n))
However, I don't know where to go from here or if this is the correct line of reasoning.
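Since I couldn't see where to go algebraically, I wrote a quick Python script to probe candidate pairs numerically (this is just my own experiment, and the functions f and g below are ones I picked myself, not from the assignment). The idea: if log f(n) is O(log g(n)), then the ratio log f(n) / log g(n) should stay below some constant as n grows; a pair that satisfies the premise but makes the ratio blow up would be a counterexample.

    import math

    def log_ratio(f, g, ns):
        """Sample log2(f(n)) / log2(g(n)) to see whether it appears bounded."""
        return [(n, math.log2(f(n)) / math.log2(g(n))) for n in ns]

    # A candidate pair that satisfies the premise f(n) <= c * g(n) (take c = 5):
    f = lambda n: 5 * n
    g = lambda n: n

    for n, r in log_ratio(f, g, [2 ** k for k in range(2, 21, 3)]):
        print(f"n = {n:>8}: log2 f(n) / log2 g(n) = {r:.4f}")

This can't prove the statement (it only checks specific functions at specific values of n), but it at least gives me a way to hunt for counterexamples.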
Can anyone help explain Big O using this problem? I've found a solution online, but it doesn't explain the process clearly enough.
EDIT: This problem was solved below, but it's not really conducive to my learning.
Here's another problem from the same review sheet:
If f(n) is O(g(n)), then f(n^2) is O(g(n^2))
Like before, I apply the definition and write down what I'd need to show:
f(n^2) <= c * g(n^2)
However, just like before, I can't tell whether there are functions that make this fail.
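I tried the same numeric probing here (again just my own sketch, with a function pair I made up): if f(n) <= c * g(n) for all large n, does f(n^2) / g(n^2) stay bounded by a constant?

    def ratio(f, g, ns):
        """Sample f(n^2) / g(n^2) to see whether it appears bounded by a constant."""
        return [(n, f(n * n) / g(n * n)) for n in ns]

    # A candidate pair that satisfies the premise f(n) <= c * g(n) (take c = 4, n0 = 7):
    f = lambda n: 3 * n + 7
    g = lambda n: n

    for n, r in ratio(f, g, [10 ** k for k in range(1, 7)]):
        print(f"n = {n:>8}: f(n^2) / g(n^2) = {r:.4f}")

For this pair the ratio settles near 3, which is consistent with the statement being true, but I still don't know how to turn the definition into an actual proof or find a pair where it fails.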