A question on Goldrei Theorem 3.13 : prove that if $n>m$ and $a>0$ then $a\cdot n>a\cdot m$


This question is about the proof of Theorem 3.13 b) in Goldrei's "Classic Set Theory":
For all natural numbers $n,m,a$, if $a>0$ and $n>m$ then $a\cdot n>a\cdot m \quad(1.)$

It has previously been established that if $m<n$ then $m+a<n+a \quad (2.)$.

$n^+$ denotes the "successor" $S(n)$ of any natural number $n$ (n+1 if you will).

Goldrei writes that for all $n \leq m$ the statement $(1.)$ holds vacuously, so "...the smallest $n$ for which there is anything significant to prove, namely because $m<n$, is $m^+$". He then proceeds to prove $(1.)$ for $n=m^+$.
This is where my first question comes from, because I don't think proving this "base case" is necessary, precisely because $(1.)$ holds vacuously for all $n\leq m$. Even if these cases are true vacuously they are still true, right? So we could just use $n=0$ as the base case.

Then he aims to prove $(1.)$ by induction, with the induction hypothesis that $(1.)$ holds for some $n > m$, as follows: $$a\cdot n^+ = (a\cdot n) + a > (a\cdot n) + 0 = a\cdot n > a\cdot m,$$ where the first inequality uses $(2.)$ and the last uses the induction hypothesis.

My issue here is that the last step, which uses the induction hypothesis, assumes that $n > m$, but the only thing we know from $n^+ > m$ is that $n\geq m$. Of course this is not a big problem, since in the case $a\cdot n = a \cdot m$ we still get $a\cdot n^+ > a \cdot n = a \cdot m$.
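As a quick numeric sanity check of $(1.)$ (not part of Goldrei's proof, just an illustration over a small finite range):

```python
# Sanity check of (1.): if a > 0 and n > m then a*n > a*m.
# A finite spot-check, not a proof.
for a in range(1, 10):              # a > 0
    for m in range(10):
        for n in range(m + 1, 10):  # n > m
            assert a * n > a * m
print("check passed")
```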

So the big question for me is: was the base case with $n=m^+$ even necessary?

On BEST ANSWER

There is a version of the principle of induction that is more commonly seen in set theory than in algebra or analysis:

Let $P(x)$ be a statement "about $x$". That is, there are no free variables occurring in $P(x)$ except possibly $x.$ Let $<$ be a well-order on a set $A.$ Suppose that $$(*)\quad [\;\forall y<x\;(P(y))\;]\implies P(x) \quad \text{ holds for every } x\in A.$$ Then $\forall x\in A\;(P(x)).$

Proof: Suppose not. Then there exists a $<$-least $x_0\in A$ such that $\neg P(x_0).$ Then we have $\forall y<x_0 \;(P(y)),$ which by $(*)$, implies $P(x_0),$ a contradiction.
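The shape of a strong-induction argument can be mirrored by recursion: to handle a case, one may assume all strictly smaller cases are already handled. A standard illustration (my own, not from the answer above) is prime factorisation of $n\geq 2$:

```python
# Strong induction mirrored as recursion: to establish the claim for n,
# assume it for every smaller value. Here: every n >= 2 is a product of primes.
def prime_factors(n: int) -> list[int]:
    for d in range(2, n):
        if n % d == 0:
            # d and n // d are both smaller than n, so the
            # "induction hypothesis" applies to each of them.
            return prime_factors(d) + prime_factors(n // d)
    # No proper divisor: n is prime. Note there is no separate base
    # case clause -- it falls out of the general step, as in (*).
    return [n]

print(prime_factors(60))  # -> [2, 2, 3, 5]
```

Note the parallel with the proof above: a hypothetical minimal counterexample $n$ would have to have a proper divisor, and both factors, being smaller, already satisfy the claim.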

This is called transfinite induction. When $A=\Bbb N$ with the usual order $<$, it is called strong induction.

An example: Let $P(x)$ be the statement that if $x$ is a prime number and $x\not \equiv 3 \mod 4$ then there exist $u,v\in \Bbb Z^+$ with $x=u^2+v^2.$ There is a proof of $\forall x\in \Bbb Z^+\;(P(x))$ in which the "theme" is proving that $$(**)\quad \forall x\in \Bbb Z^+ \; (\;[\forall y\in \Bbb Z^+(y<x\implies P(y))] \implies P(x)\;)$$ and at no point in the whole proof is the "base case" $2=1^2+1^2$ (nor the "odd base case" $5=1^2+2^2$) ever explicitly mentioned. (Of course in $(**)$ the only real work is on the case where $x$ is prime and $x\not\equiv 3 \mod 4.)$

On the other hand, for induction on $\Bbb N,$ having $\forall x\in \Bbb N \;(P(x)\implies P(x+1))$ is not enough to conclude that $\forall x\in \Bbb N\;(P(x))$ without also having $P(0).$ For example, take $P(x)$ to be $x=x+1$.

For induction on $\Bbb N$ these two versions are logically equivalent, because if $A\ne \emptyset,$ then $(*)$ implies that $P(\min_< A)$ holds. However the second ("weaker") version is often easier to work with, especially in algebra when $P(x)$ is a "formula". E.g. if $P(x)$ is $x^2+x=\sum_{j=0}^x 2j.$
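That last identity is easy to spot-check numerically (an illustration, of course, not a substitute for the inductive proof):

```python
# Checking x**2 + x == sum of 2*j for j = 0..x over a small range.
for x in range(50):
    assert x**2 + x == sum(2 * j for j in range(x + 1))
print("identity holds for x = 0..49")
```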

Pierre de Fermat (early 1600s) used induction; he called it "infinite descent".