In "real life," you certainly can't multiply geometric vectors of different dimensions (setting aside the dot and cross products): a 2D vector times a 3D vector has no meaning, since the dimensions don't match up. Yet you most definitely can multiply two different polynomials, like x and x^2: just add the exponents to get x^3.
But by the laws of linear algebra, multiplying two vectors together is undefined for all vectors, whether they are geometric vectors or polynomials. Vectors can only be added together or multiplied by scalars. You can't multiply two columns of a matrix, for example; you can only combine them in different proportions.
Why the inconsistency? Is this just some artificial restriction imposed on polynomials to analyze them in a specific way, or is it a property of polynomials that I'm misunderstanding?
Either way, can you please explain why? Thanks.
You can map polynomials to vectors, e.g. $$ \phi : P_n[x] \to \mathbb{F}^{n+1} \\ \phi \left( \sum_{k=0}^n a_k x^k \right) = (a_0, \dotsc, a_n) $$ and do linear algebra with those vectors. (For simplicity, I identify each polynomial's coordinate vector with the vector itself.)
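As a minimal sketch (in Python, with hypothetical function names), the map $\phi$ just reads off coefficients, and the vector-space operations on those coefficient tuples mirror polynomial addition and scalar multiplication:

```python
def phi(coeffs):
    """Map a polynomial a_0 + a_1 x + ... + a_n x^n to its
    coefficient vector (a_0, ..., a_n) in F^(n+1)."""
    return list(coeffs)

def vec_add(u, v):
    # Adding coefficient vectors componentwise corresponds to
    # adding the underlying polynomials.
    return [a + b for a, b in zip(u, v)]

def vec_scale(c, u):
    # Scaling every coefficient corresponds to scalar multiplication
    # of the polynomial.
    return [c * a for a in u]

# (2 + 3x) + (1 + x) = 3 + 4x
print(vec_add(phi([2, 3]), phi([1, 1])))  # [3, 4]
```

Only addition and scalar multiplication appear here: that is precisely the structure the abstraction keeps.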
On the other hand, you can do the same with quite different objects, e.g. with linear cost functions $$ \Psi: (\mathbb{F}^{n+1})^* \to \mathbb{F}^{n+1} \\ \Psi(c^\top x) = \Psi \left( \sum_{k=1}^{n+1} c_k x_k \right) = (c_1, \dotsc, c_{n+1}) $$ This is abstraction at work: we leave out the details and focus only on the vector-space aspect.
This does not mean, however, that the original objects have no features beyond the vector-space structure.
So a multiplication that takes two polynomials to a polynomial is exactly such an additional feature: polynomials possess it, but not all objects that can be mapped to vectors share it, e.g. the cost functions above or geometric vectors. This is not a bad thing.
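To make the extra structure concrete, here is a sketch (in Python, function name hypothetical) of what polynomial multiplication looks like on coefficient vectors. It is the convolution of the coefficients, an operation that plain vector spaces simply do not come equipped with:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists [a_0, a_1, ...].
    The coefficient of x^k in the product is sum over i+j=k of a_i * b_j."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# x * x^2 = x^3: coefficient vectors [0, 1] and [0, 0, 1]
print(poly_mul([0, 1], [0, 0, 1]))  # [0, 0, 0, 1]

# (1 + x)^2 = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))  # [1, 2, 1]
```

Note that the product of a degree-m and a degree-n polynomial has degree m+n, so this multiplication does not even stay inside a single $\mathbb{F}^{n+1}$; it is genuinely extra structure (making the space of all polynomials an algebra, not just a vector space).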