When we talk about the computational cost of point doubling and point addition, we usually give the result in terms of field multiplications M and field squarings S. For example, here we can read that point addition costs about 9M + 1S, and doubling about 3M + 4S.
Why is the standard Big-O notation not used? Is it because multiplication over $\mathbb{R}$ has a different cost than multiplication over $\mathbb{F}_q$, for example?
However, Koblitz, in his book "A Course in Number Theory and Cryptography", says (p. 178, Prop. VI.2.1):
Note that there are fewer than 20 computations in $\mathbb{F}_q$ (multiplications, divisions, additions, or subtractions) involved in computing the coordinates of a sum of two points by means of equations (4)-(5) (these formulas describe point addition on an elliptic curve in short Weierstrass form). Thus, by Proposition II.1.9, each such addition (or doubling) of points takes time $O(\log^3 q)$.
Here Koblitz describes the cost using Big-O notation. Why doesn't he use the M, S notation?
Moreover, I know that using different coordinates one can obtain different costs for point addition (and doubling): in one coordinate system it may be 9M + 1S, in another 10M + 1S. However, over $\mathbb{F}_q$ the cost should always be $O(\log^3 q)$, as Koblitz suggests. So why do we emphasize the different costs using M and S if they are always $O(\log^3 q)$ over $\mathbb{F}_q$?
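To make the M/S bookkeeping concrete, here is a minimal Python sketch that counts multiplications and squarings during one point doubling in Jacobian coordinates. The prime, curve, and base point are toy values chosen for illustration, and the general doubling formula used here happens to cost 3M + 6S; the exact count depends on the formula and curve shape (e.g. special tricks for $a = -3$), which is precisely why the literature reports per-formula M/S counts.

```python
class CountingField:
    """F_p element that counts field multiplications (M) vs squarings (S)."""
    M = 0
    S = 0

    def __init__(self, v, p):
        self.v = v % p
        self.p = p

    def __add__(self, other):
        return CountingField(self.v + other.v, self.p)

    def __sub__(self, other):
        return CountingField(self.v - other.v, self.p)

    def __mul__(self, other):
        if self is other:          # both operands are the same element: a squaring
            CountingField.S += 1
        else:
            CountingField.M += 1
        return CountingField(self.v * other.v, self.p)

    def times(self, k):
        # Multiplication by a small constant; by convention not counted as M.
        return CountingField(k * self.v, self.p)


def jacobian_double(X1, Y1, Z1, a):
    """General Jacobian doubling on y^2 = x^3 + a*x + b: 3M + 6S here."""
    XX = X1 * X1                                  # S
    YY = Y1 * Y1                                  # S
    ZZ = Z1 * Z1                                  # S
    S4 = (X1 * YY).times(4)                      # M
    W = XX.times(3) + (ZZ * ZZ).times(a)         # S
    X3 = W * W - S4.times(2)                     # S
    Y3 = W * (S4 - X3) - (YY * YY).times(8)      # M + S
    Z3 = (Y1 * Z1).times(2)                      # M
    return X3, Y3, Z3


p, a = 97, 2                       # toy curve y^2 = x^3 + 2x + 3 over F_97
X3, Y3, Z3 = jacobian_double(CountingField(3, p), CountingField(6, p),
                             CountingField(1, p), a)
print(CountingField.M, CountingField.S)   # -> 3 6
```

Converting $(X_3 : Y_3 : Z_3)$ back to affine coordinates via $x = X_3/Z_3^2$, $y = Y_3/Z_3^3$ agrees with the affine doubling formula, so the counter wrapper changes nothing but the bookkeeping.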
Maybe I'm wrong, but if I perform 10 multiplications, each costing $O(\log^3 q)$, the total cost is $O(10 \cdot \log^3 q) = O(\log^3 q)$, right? And I obtain the same total cost using 11 multiplications.
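The point about constant factors can be made concrete with a toy cost model (an assumption for illustration: schoolbook multiplication at roughly $(\log_2 q)^2$ bit operations per field multiplication). Ten versus eleven multiplications is indeed the same Big-O class, yet the ratio of the two totals is a fixed 10% at every field size:

```python
def total_bit_ops(num_mults, bits):
    # Assumed schoolbook model: one multiplication in F_q costs about
    # (log2 q)^2 bit operations, so a formula costs num_mults * bits**2.
    return num_mults * bits ** 2

for bits in (256, 384, 521):               # common curve sizes
    ratio = total_bit_ops(11, bits) / total_bit_ops(10, bits)
    print(bits, ratio)                     # always 1.1: a fixed 10% overhead
```

Big-O absorbs that 1.1 factor, but an implementation running billions of point operations does not.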
I know $O(\cdot)$ only gives an upper bound; maybe in elliptic curve cryptography we want such precision that a single field multiplication can make the difference?
The way I see it, Big-O notation is a crutch. We use it in complexity theory when we are unable to pin down the runtime of an algorithm precisely, or when we consider it too cumbersome to give a precise number. The fact that you see it so often is a mirror of how hard this business is. So what I am trying to say is: I would not call an estimate like $10 \cdot \log_2(q)$ one of "extreme" precision, with $\mathcal{O}(\log(q))$ being the right amount of composure; on the contrary, I would always prefer the former to the latter.
That said, Big-O notation can be a convenient way to give an impression, a ballpark estimate of where the runtime of a certain algorithm lies. I have not read Koblitz's book, but it could be that he is not trying to do more than that. It may also be worth noting that Koblitz's statement is very likely not going to change, even if we find better algorithms for point addition: I don't have a lower-bound proof, but I would be very surprised if you could do it in significantly less than $\log(q)$.
Now, for elliptic curves and cryptographic primitives in general, I will also note that there has been significant interest in how these algorithms perform on embedded devices with limited computational resources, such as smart cards. In such environments it may be very relevant whether a formula needs $10$ or $8$ field operations. Keep in mind that in practice you agree on a $q$ pretty early on, and once $q$ is fixed, it all comes down to constants. There may even be dedicated hardware to make the base $B$ in $\log_B(q)$ as big as possible.
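Once $q$ is fixed, formulas are typically compared by weighted operation counts rather than asymptotics. A common rule of thumb (an assumption here, not something from the question) is that a squaring costs roughly $0.8$ of a multiplication, so a sketch of such a comparison might look like:

```python
def m_equivalents(mults, squarings, s_weight=0.8):
    # s_weight = 0.8 encodes the common S ~ 0.8*M rule of thumb; the real
    # ratio depends on the field implementation and the hardware.
    return mults + s_weight * squarings

cost_a = m_equivalents(9, 1)    # a 9M + 1S addition formula
cost_b = m_equivalents(10, 1)   # a 10M + 1S addition formula
print(cost_a, cost_b)           # same Big-O class, different constants
```

Both formulas are $O(\log^3 q)$, but on a smart card executing millions of additions, the roughly 10% gap between the two constants is exactly what the M/S notation is designed to expose.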