Confusion regarding how Big O notation is used in practice


I'm trying to work through a pretty basic regular perturbation expansion problem in my textbook, but I'm getting tripped up by the use of Big O notation.

I understand the formal definition of Big O notation, but here it seems to be used to denote terms on the order of the Big O's argument, and I don't see how that relates to the formal definition. Most examples of Big O problems I can find talk about "finding" the Big O notation for some function by proving something like, for example, $f(x) = \mathcal{O}(x^{4})$ using the formal definition. This seems to me to imply that a function has only one Big O notation.

However, in this example in the book, as in most examples I remember encountering in the past, they take several different Big O's of the equation: they compare the $\mathcal{O}(1)$, $\mathcal{O}(\epsilon)$, and $\mathcal{O}(\epsilon^{2})$ terms on either side of the equation to make sure they cancel, so that the expansion can be solved. At first this seemed to make sense to me: the $\mathcal{O}(1)$ terms were all the ones without any factor of $\epsilon$, the $\mathcal{O}(\epsilon)$ terms were all the ones with a factor of $\epsilon$, the $\mathcal{O}(\epsilon^{2})$ terms were all the ones with a factor of $\epsilon^{2}$, and so on. But once they got to $\mathcal{O}(\epsilon^{2})$ they started getting results I wouldn't expect if my assumptions were correct, so I went back to the formal definition to try to figure out what was going on, and now I'm even more confused.
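For concreteness, here is the kind of manipulation I mean, on a made-up example of my own (not the one from the textbook). Expanding a root of $x^{2} + \epsilon x - 1 = 0$ as $x = x_{0} + \epsilon x_{1} + \epsilon^{2} x_{2} + \mathcal{O}(\epsilon^{3})$, substituting, and collecting the terms at each order gives

$$\begin{align}
\mathcal{O}(1):\quad & x_{0}^{2} - 1 = 0 &&\Rightarrow\; x_{0} = \pm 1,\\
\mathcal{O}(\epsilon):\quad & 2 x_{0} x_{1} + x_{0} = 0 &&\Rightarrow\; x_{1} = -\tfrac{1}{2},\\
\mathcal{O}(\epsilon^{2}):\quad & 2 x_{0} x_{2} + x_{1}^{2} + x_{1} = 0 &&\Rightarrow\; x_{2} = \tfrac{1}{8 x_{0}},
\end{align}$$

which does agree with the Taylor expansion of the exact roots $x = \tfrac{1}{2}\bigl(-\epsilon \pm \sqrt{\epsilon^{2} + 4}\,\bigr)$. What I can't see is how "collect the coefficient of each power of $\epsilon$ and set it to zero separately" is justified by the formal definition of $\mathcal{O}$.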

Can someone give me, or at least point me to, some explanation of how Big O notation is applied in this context, and possibly even of how the formal definition makes this kind of application possible?

EDIT: Although I'm still not 100% solid on this application of Big O notation, I've just confirmed that my primary source of confusion in this instance was caused by a typo in the textbook.