When two ordinals $\alpha$ and $\beta$ are multiplied, $\beta$ acts as the more significant factor and $\alpha$ as the less significant one in the product $\alpha \cdot \beta$. Thus if we take two well-ordered sets representing these ordinals, the elements of their Cartesian product are compared by the second component first and by the first component only afterwards, i.e. in reverse lexicographical (colexicographical) order. This looks unnatural for the following reasons:
It's more natural to say that $2 \cdot \omega$ is two infinite sequences placed one after another, i.e. $\omega + \omega$, and not an infinite collection of pairs, i.e. $2 + 2 + 2 + \dots = \omega$, which is what the current convention assigns to $2 \cdot \omega$.
When we compare two natural numbers given their representations in decimal or some other positional numeral system, we compare their digits from left to right, provided they have an equal number of digits (which can always be arranged by padding with leading zeros, if needed). So why change this rule for ordinals?
When the ordinary Cartesian product $A \times B$ (of unordered sets) is defined, the elements of $A$ and $B$ are written in pairs in the same order as the multiplicands appear in the product, with no reversal. Moreover, the total order on the Cartesian product of totally ordered sets is usually introduced as the plain lexicographical order. Why would one need to reverse it for well-ordered sets?
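To make the contrast concrete, here is how the two orders sort the same set $2 \times \omega$ (a standard computation, written out here for illustration):

```latex
% Colexicographical order (second component first), as in the standard
% definition of the ordinal product 2 \cdot \omega:
(0,0) < (1,0) < (0,1) < (1,1) < (0,2) < (1,2) < \dots
% order type: 2 + 2 + 2 + \dots = \omega

% Plain lexicographical order (first component first):
(0,0) < (0,1) < (0,2) < \dots < (1,0) < (1,1) < (1,2) < \dots
% order type: \omega + \omega
```

So the choice between lexicographical and colexicographical comparison is exactly the choice between reading $2 \cdot \omega$ as $\omega + \omega$ and reading it as $\omega$.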
One may say that there are other examples of such reversed notation in mathematics. Take, for example, $f \circ g$ for the composition of functions, the second of which is applied first. However, there is a reasonable explanation for this: since $f \circ g (x) = f(g(x))$, the reversed order of application in $f \circ g$ was chosen to match the ordinary parenthesized notation $f(g(x))$. What was the reason to reverse it in a product of ordinals?
Reason 1: Because it coincides with the inductive definition, which is very natural.
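For reference, this is the standard inductive definition, by transfinite recursion on the right factor:

```latex
\alpha \cdot 0 = 0
\alpha \cdot (\beta + 1) = \alpha \cdot \beta + \alpha
\alpha \cdot \lambda = \sup_{\beta < \lambda} \alpha \cdot \beta
    \quad \text{for a limit ordinal } \lambda
```

Recursing on the right factor is what makes $\beta$ the "number of repetitions" and $\alpha$ the thing repeated, mirroring the recursive definitions of addition and exponentiation.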
Reason 2: Because it is just as natural that $2\cdot\omega$ means "something of type $2$ repeated $\omega$ times", which turns out to just be equivalent to $\omega$ itself.
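Unwinding the inductive definition for the two products mentioned above makes this explicit (a routine calculation, spelled out for illustration):

```latex
2 \cdot \omega = \sup_{n < \omega} (2 \cdot n)
             = \sup \{0, 2, 4, 6, \dots\} = \omega,

\omega \cdot 2 = \omega \cdot (1 + 1)
             = \omega \cdot 1 + \omega = \omega + \omega.
```

So under the standard convention, ordinal multiplication is not commutative, and the "repeat $\alpha$, $\beta$ times" reading is the one baked into the recursion.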
Let me just point out that there is no deep reason behind any notational convention. It's all based on historical, aesthetic, and other traditions. These can change if we collectively agree to change them. (For example, in set theory, the modern-day $L(\Bbb R)$ used to be written $L[\Bbb R]$, which today means something else.)
No one is saying that there is no way to do that. Sure there is. You just did, by saying "Hey, let's define it the other way around". The question is how well this definition plays with the rest of the notation.
That is debatable, and since there is real value in having a uniform convention when defining increasingly complicated operations, from the successor to addition, to multiplication, to exponentiation, and so on, changing the way we define multiplication is probably not going to catch on any time soon.