Given the definition of big-O notation, which is:
if $|f(n)| \le M g(n)$ for all $n \ge n_0$, for some $M \in \mathbb{R}$ and some $n_0$, then $f(n)$ is $O(g(n))$
I am confused because I was reading about recursive functions that are big-O of some function only when we restrict the domain of values. For example, $f(n) = af(n/b) + c$ is $O(a^{\log_b n})$ whenever $n$ is a power of $b$. When $n$ is not a power of $b$, the function becomes undefined, because the base case $f(1)$ is never reached. One can prove that $f(n) \le Mg(n)$ for all $n > n_0$ such that $n$ is a power of $b$, but does this not clash with the definition of big-O? We have not shown that every $n \ge n_0$ satisfies the inequality, only those $n$ in the subset of $n \ge n_0$ where $n$ is a power of $b$. I need help understanding how this fits the formal definition of big-O. Any help is greatly appreciated.
Writing $f(n)=af(n/b)+c$ is just informal sloppiness. If you want to be precise and pedantic, then your recursion actually is $$f(n)=af(\lfloor n/b \rfloor)+c,$$ or the same with ceiling instead of floor, or some combination of both, so your function is well-defined for all $n$.
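To see concretely that the floored recursion is total, here is a minimal sketch; the constants $a=2$, $b=2$, $c=1$ and the base value $f(n)=c$ for $n<b$ are my own illustrative choices, not fixed by the question:

```python
def f(n, a=2, b=2, c=1):
    """Floored version of f(n) = a*f(n/b) + c.

    Since n // b strictly decreases for n >= b, the recursion always
    bottoms out, so f is defined for every integer n >= 1 -- not just
    for powers of b.
    """
    if n < b:  # assumed base case f(n) = c for n < b (illustrative choice)
        return c
    return a * f(n // b) + c
```

With these constants, $f(2^k) = 2^{k+1}-1$, which grows linearly in $n$ at powers of $b$; and the point is that $f(7)$, $f(10)$, etc. are now perfectly well-defined too.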
The reason we allow ourselves the sloppiness is that, under mild assumptions, the asymptotics follow from the behavior at $n$ that are powers of $b$, so it doesn't really matter how you round the argument of $f$.
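One such mild assumption is monotonicity: if $f$ is nondecreasing, every $n$ is sandwiched between the two surrounding powers of $b$, so a bound proved at powers of $b$ transfers to all $n$ at the cost of a constant factor. A quick numerical check, again under illustrative constants $a=2$, $b=2$, $c=1$ and base case $f(n)=c$ for $n<b$:

```python
def f(n, a=2, b=2, c=1):
    # Floored recursion f(n) = a*f(n // b) + c, assumed base case f(n) = c for n < b.
    return c if n < b else a * f(n // b) + c

def sandwiched(n, b=2):
    # With k = floor(log_b n), check f(b^k) <= f(n) <= f(b^(k+1)).
    k = n.bit_length() - 1  # floor(log2 n); valid because b == 2 here
    return f(b**k) <= f(n) <= f(b**(k + 1))

# f is nondecreasing (n // b is nondecreasing in n), so the sandwich holds:
assert all(sandwiched(n) for n in range(1, 2000))
```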