In my CS course, we were taught that, when proving Big-Oh, you can choose any positive integers for $C$ and $k$, following the definition.
Based on that, we were taught two different ways of proving a Big-Oh claim. For example:
Let $f(n) = 10n+5$, and $g(n) = n$. Prove that $f(n)$ is $O(g(n))$
Method 1:
Try $C = 15$.
Now show that $$10n+5 \leq 15n$$
$$2n + 1 \leq 3n$$ $$1 \leq n$$
So we can take $k = 1$; then $10n+5 \leq 15n$ holds for all $n \geq 1$, hence $f(n)$ is $O(g(n))$ with $C = 15$ and $k = 1$.
Method 2:
Try $k = 1$.
Assume $n \geq k$.
Now show that $$(10n+5)/n \leq C$$
$$(10n+5n)/n \leq C$$ $$15n/n \leq C$$ so we can take $C = 15$.
Hence $f(n)$ is $O(g(n))$ with $C = 15$ and $k = 1$.
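Both methods arrive at the same witness pair $(C, k) = (15, 1)$. As a quick numeric sanity check (not a proof, since a proof must cover *all* $n \geq k$, while this only scans a finite range), the inequality can be tested directly; the function names below are just for illustration:

```python
# Numeric sanity check (not a proof): 10n + 5 <= 15n for n >= 1.
C, k = 15, 1

def f(n):
    return 10 * n + 5

def g(n):
    return n

# Check the witness pair (C, k) = (15, 1) on a finite range.
assert all(f(n) <= C * g(n) for n in range(k, 10**5))

# The bound genuinely needs n >= k: at n = 0 it fails (5 > 0).
assert f(0) > C * g(0)

print("f(n) <= 15*g(n) holds for all tested n >= 1")
```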
My issues
Is there a mathematical reason why we can choose $C$ and $k$ freely in these methods? Why does any value of $C$ and $k$ work?
Why, in Method 2, do we simply multiply the 5 by $n$? How does this work? What is the reasoning behind this step?
It all comes down to the definition of $f(n)=O(g(n))$.
Definition A non-negative function $f(n)$ is Big-Oh of $g(n)$, written $f(n)=O(g(n))$, if there exist constants $N$ and $C$ so that for all $n\geq N$ it follows that $f(n)\leq Cg(n)$.
You are free to choose $N$ and $C$ (here $N$ plays the role of your $k$) because you only have to show that the inequality works for *one* choice: if the inequality holds for all $n\geq N$ (the $N$ you chose, with the $C$ you chose), then the definition is satisfied. This does not mean every pair works. For $f(n)=10n+5$ and $g(n)=n$, the choice $C=10$ fails for every $N$, since $10n+5\leq 10n$ never holds. The freedom lies in which valid pair you exhibit, not in the values themselves.
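The "one witness pair suffices" point can be sketched with a finite check. The helper name `holds` is my own, and a finite scan is evidence rather than a proof, but it shows that several pairs satisfy the inequality while others fail:

```python
# Sketch: Big-Oh requires ONE witness pair (C, N) with
# f(n) <= C*g(n) for all n >= N. A finite scan is a sanity
# check, not a proof, since it cannot cover all n.

def holds(f, g, C, N, limit=10**5):
    """Check f(n) <= C*g(n) for N <= n < limit."""
    return all(f(n) <= C * g(n) for n in range(N, limit))

f = lambda n: 10 * n + 5
g = lambda n: n

print(holds(f, g, C=15, N=1))   # True: the pair from the proof above
print(holds(f, g, C=11, N=5))   # True: a different valid pair
print(holds(f, g, C=10, N=1))   # False: 10n + 5 <= 10n never holds
```

Only one passing pair is needed to satisfy the definition; the failing pair shows the constants are not arbitrary, only that *some* valid pair must exist.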