My question is not very long, but I'd like to explain where it comes from. Consider the classical definition of vector spaces:
$E$ is said to be a vector space over a field $F$ when:
- A) $(E,+)$ is a commutative group.
- B) There is a scalar multiplication satisfying $1.x=x$ and $(\alpha\beta).x=\alpha.(\beta.x)$
- C)
- $\alpha.(x+y)= \alpha.x + \alpha.y$ (distributivity over vector addition)
- $(\alpha+\beta).x= \alpha.x + \beta.x$ (distributivity over scalar addition)
This is the definition found on Wikipedia, and in Halmos' *Finite-Dimensional Vector Spaces*. While reading the latter, I was rather surprised by the following disclaimer:
"These axioms are not claimed to be logically independent; they are merely a convenient characterization of the objects we wish to study."
Some rather trivial examples (such as setting $\alpha.x= \alpha^{2}x$) show that:
- A) and B) are logically independent of each other
- Neither C)1. nor C)2. is implied by A) together with B)
- A), B), and C)1. together do not imply C)2.
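To spell out the last point (a quick check of the example already mentioned, with $E=F=\mathbb{R}$ chosen for concreteness): the twisted action $\alpha.x := \alpha^{2}x$ satisfies A), B) and C)1. but fails C)2.:

```latex
% B) holds: 1.x = 1^2 x = x, and
%   (\alpha\beta).x = (\alpha\beta)^2 x = \alpha^2(\beta^2 x) = \alpha.(\beta.x).
% C)1. holds: \alpha.(x+y) = \alpha^2(x+y) = \alpha^2 x + \alpha^2 y.
% C)2. fails: (\alpha+\beta).x = (\alpha+\beta)^2 x
%   = \alpha^2 x + 2\alpha\beta x + \beta^2 x \neq \alpha.x + \beta.x
% whenever \alpha\beta x \neq 0, e.g. \alpha = \beta = x = 1:
\begin{equation*}
(1+1).1 = 2^{2} = 4 \neq 2 = 1.1 + 1.1 .
\end{equation*}
```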
Now the tricky question is: do A), B), and C)2. together imply C)1.?
If $F=\mathbb{Q}$ the answer is yes. In a nutshell, this is because C)2. and $1.x=x$ give $n.x = x+x+\cdots+x$ ($n$ terms), so that indeed $n.(x+y)=n.x +n.y$ by commutativity of addition, and \begin{equation*} x+y=m.(m^{-1}.x+m^{-1}.y) \end{equation*} so that $m^{-1}.(x+y)=m^{-1}.x +m^{-1}.y$. Putting both together with scalar associativity gives C)1.
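Spelled out, the chain of equalities for a positive integer $n$ is:

```latex
% n.x = x + \cdots + x (n terms) follows from C)2. and 1.x = x:
%   n.x = (1 + \cdots + 1).x = 1.x + \cdots + 1.x = x + \cdots + x.
% Regrouping the terms (commutativity of + from axiom A)):
\begin{align*}
n.(x+y) &= (x+y) + \cdots + (x+y) \\
        &= (x + \cdots + x) + (y + \cdots + y) = n.x + n.y.
\end{align*}
% For \alpha = n/m \in \mathbb{Q}, scalar associativity B) then gives
\begin{equation*}
\alpha.(x+y) = n.\bigl(m^{-1}.(x+y)\bigr)
             = n.(m^{-1}.x + m^{-1}.y)
             = \alpha.x + \alpha.y.
\end{equation*}
```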
What happens for a general field $F$? I have not managed to find a counterexample, and I have no idea how to extend the proof above.
Thanks for your help.
The axioms are not logically independent, in the sense that you do not need to assume that addition on $E$ is commutative; commutativity can be proved from the other axioms.
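One way to see this (the standard argument, sketched here) is to expand $(1+1).(x+y)$ in two different ways and cancel:

```latex
\begin{align*}
(1+1).(x+y) &= 1.(x+y) + 1.(x+y) = x + y + x + y
  && \text{by C)2. and } 1.x = x,\\
(1+1).(x+y) &= (1+1).x + (1+1).y = x + x + y + y
  && \text{by C)1., then C)2.}
\end{align*}
% Equating the two right-hand sides and cancelling x on the left
% and y on the right (valid, since E is a group) gives
\begin{equation*}
y + x = x + y .
\end{equation*}
```

Note that this uses both distributivity laws, so it says nothing about the independence question for C)1. and C)2. themselves.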