Uniqueness of additive identity element of vector space (or group or monoid)

16.4k views

Please rate and comment. I want to improve; constructive criticism is highly appreciated. Please take style into account as well.

Proof of uniqueness of identity element of addition of vector space

This proof is based solely on the vector space axioms. Axiom names are italicised; they are as defined in the Wikipedia article on vector spaces.

Let $V$ be a vector space. We prove the uniqueness of an identity element of addition (IEOA). By Identity element of addition, there exists an IEOA. Let this element be denoted by $0$. For the sake of contradiction, we assume that the IEOA is not unique. That is, there exists an IEOA $0'$ such that $0' \ne 0$. Obviously, $V \ne \emptyset$. Let $v \in V$. By Identity element of addition, $v + 0 = v$ and $v + 0' = v$. Hence, $$v + 0 = v + 0'.$$ By Commutativity of addition, $$0 + v = 0' + v.$$ By Inverse elements of addition, there exists an additive inverse of $v$. Let $-v$ denote the additive inverse of $v$. Due to the foregoing equality, $$(0 + v) + (-v) = (0' + v) + (-v).$$ By Associativity of addition, $$0 + (v + (-v)) = 0' + (v + (-v)).$$ By Inverse elements of addition, $$0 + 0 = 0' + 0.$$ By Identity element of addition, $$0 = 0'.$$ The foregoing equality contradicts our assumption. Thus, our assumption is false and its negation is true. That is, the IEOA is unique. QED
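For readers who like to see such an argument machine-checked, here is one possible Lean 4 sketch of the same proof (written directly, without contradiction), with the axioms taken as explicit hypotheses. The hypothesis names (`comm`, `assoc`, `idz`, `idz'`, `inv`) are our own labels, not standard library names, and `v` plays the role of the arbitrary element chosen in the proof.

```lean
-- Mirrors the proof step by step: v + 0 = v + 0', then commute, add -v,
-- reassociate, cancel, and apply the identity law on both sides.
theorem zero_unique {G : Type} [Add G] [Neg G] (z z' v : G)
    (comm  : ∀ a b : G, a + b = b + a)
    (assoc : ∀ a b c : G, (a + b) + c = a + (b + c))
    (idz   : ∀ a : G, a + z  = a)        -- z  is an additive identity
    (idz'  : ∀ a : G, a + z' = a)        -- z' is an additive identity
    (inv   : ∀ a : G, a + (-a) = z)      -- additive inverses
    : z = z' := by
  have h1 : v + z = v + z' := (idz v).trans (idz' v).symm
  have h2 : z + v = z' + v := by rw [comm z v, comm z' v]; exact h1
  have h3 : (z + v) + (-v) = (z' + v) + (-v) := by rw [h2]
  have h4 : z + (v + (-v)) = z' + (v + (-v)) := by
    rw [← assoc, ← assoc]; exact h3
  have h5 : z + z = z' + z := by rw [inv v] at h4; exact h4
  rw [idz z, idz z'] at h5
  exact h5
```

Note that the Lean version never needs the assumption $z \ne z'$, which foreshadows the point made in the answers below: the argument is really a direct proof.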

P.S.: I wrote another version of the proof, which is less roundabout.

There are 5 answers below.

Answer (0 votes)

This is "too thorough", to put it nicely. One wonders why you didn't just start with $0' = 0' + 0$ and conclude $0' = 0$, using commutativity if you felt it necessary.

As another general piece of advice: at the end of each proof by contradiction, look back and see whether you can write it as a straightforward direct proof that doesn't use contradiction. This particular proof is a good candidate for that. Using contradiction unnecessarily can lead to proofs that are more complex than they need to be.

That said, proofs by contradiction are sometimes the way to go, or else lead you to a method for a direct proof, so they have their uses.

Answer (6 votes)

The only vague part is "Let $v \in V$." This sounds like you're going to prove something is true for all elements of the vector space. But for your purposes, any element $v$ will lead to the contradiction and be enough for your proof. So I might write instead "Choose any $v \in V$."

[But there's a simpler proof that doesn't need that extra variable $v$ at all:

Suppose $0$ and $0'$ are two different additive identities of $V$.

Since $0$ is an additive identity,

$$ 0 + 0' = 0' $$

Since $0'$ is an additive identity,

$$ 0 + 0' = 0 $$

Therefore

$$ 0 = 0' $$

Contradiction; there cannot be more than one additive identity.]

Answer (3 votes)

I have only one serious criticism, which is that this isn't really a proof by contradiction - if you delete the assumption that $0\ne0'$ (which you never use) and the comment that you obtain a contradiction, then your proof is still correct. It's usually clearer not to use contradiction when you don't have to.

As rschwieb points out, this is really longer than it needs to be. When you start out, it's not unusual to list every axiom you use (you may even be taking a class that is asking you to do this), but you should stop quickly, because this just makes your proof difficult to read, and makes simple results look complicated.

Very minor things: "previous" is probably a more standard word than "foregoing", although this is of course a matter of taste. I have a small objection to "obviously $V\ne\varnothing$", particularly when the rest of the proof is so careful - $V$ is non-empty by the existence of the identity element! (At this point in your proof, you already have us considering two distinct elements of $V$, so it's past the time for remarking that $V$ is non-empty!). As the other answers point out, you can get round this entirely by never choosing an arbitrary element of $V$.

Answer (0 votes)

That argument is a bit roundabout. Below I explain a general way to discover a simpler proof (vs. pulling it out of a hat like magic). The key idea is very simple: we can discover consequences of identities (axioms) by "overlapping" them, i.e. looking for a "unified" term that they both apply to. Let's try that here. Suppose $\,0\,$ and $\,0'$ are both additive identities. This means that for all $\,x,y$

$$\begin{eqnarray} x = && \color{#0a0}0 + \color{#c00}x\\ && \color{#0a0}y + \color{#c00}{0'} = y_{\phantom{|_|}}\\ \hline \!\!\Rightarrow\ \ 0' = && 0 + 0' = 0^{\phantom{|^|}}\end{eqnarray}\!\!\!\! $$

We chose the values of the specialization $\,\color{#0a0}{y=0},\ \color{#c00}{x = 0'}\,$ in order to unify $\,0+x\,$ with $\,y+0', \,$ yielding a "unified" term $\,0+0'\,$ that both axioms apply to, so we can rewrite it in two ways: specializing $\,\color{#c00}{x\!=\!0'}$ in the first axiom $\Rightarrow \color{#c00}{0'} = 0+\color{#c00}{0'},\,$ and $\,\color{#0a0}0+0' = \color{#0a0}0\,$ by $\,\color{#0a0}{y\!=\!0}\,$ in the second.

Remark $\ $ This is a very widely applicable method of deriving consequences of axioms: by "unifying" or "overlapping" terms so that both axioms apply, one obtains a rewriting of the term in two different ways. In fact, in some cases it can be used to algorithmically derive all of the consequences of the axioms, thus yielding algorithms for deciding equality; see, e.g., the Knuth–Bendix equational completion algorithm, the Gröbner basis algorithm, and George Bergman's classic paper The Diamond Lemma in Ring Theory.

Answer (1 vote)

Why do you prove it in the particular case of a vector space? You only need $(V,+)$ to be a group, $0$ being by definition its identity.

Let $G$ be a group with identity $e$. Suppose there exists $f$ such that $af = fa = a$ for all $a \in G$ ¹. Then, since $ef = e$ (from the definition of $f$) and $ef = f$ (from the definition of $e$), we get $e = f$.

So you don't need a vector space structure, or even the commutativity of $+$ ².


1. So, another identity
2. And the hypotheses can obviously be made even weaker
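As a concrete illustration of footnote 2, the argument needs even less than a two-sided identity on both elements: it suffices that $e$ is a left identity and $f$ a right identity for the same operation. A minimal Lean 4 sketch of this weakest version (the theorem and hypothesis names are our own choices):

```lean
-- No associativity, commutativity, or inverses are used:
-- e * f equals f (e is a left identity) and equals e (f is a right
-- identity), so e = f.
theorem left_right_identity_eq {M : Type} [Mul M] (e f : M)
    (he : ∀ a : M, e * a = a)   -- e is a left identity
    (hf : ∀ a : M, a * f = a)   -- f is a right identity
    : e = f :=
  (hf e).symm.trans (he f)      -- e = e * f = f
```

This is exactly the two-line proof given in the earlier answers, stripped down to the hypotheses it actually uses.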