Donald Knuth's "The Art of Computer Programming", Section 1.2.1, has Exercise 2, whose solution I don't understand. Exercise:
There must be something wrong with this proof. What is it? Theorem. Let $a$ be any positive number. For all positive integers $n$ we have $a^{n-1}=1$. Proof. If $n = 1, a^{n-1}=a^{1-1}=a^0=1$. And by induction, assuming that the theorem is true for $1, 2, ..., n$, we have $a^{(n+1)-1}=a^n=\frac{a^{n-1}\times a^{n-1}}{a^{(n-1)-1}}=\frac{1\times1}{1}=1$; so the theorem is true for $n+1$ as well.
As far as I understand proof by induction, the solution should be approximately the following:
$a^{1-1}=1$
Assume that $a^{n-1}=1$;
Therefore $a^{(n+1)-1}=1\times a$;
So $a^{(n+1)-1}$ does not equal $a^{n-1}$ and does not equal $1$, unless $a=1$
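This observation is easy to spot-check numerically; here is a small Python sketch of my own (not from Knuth's text) confirming that $a^{(n+1)-1} = a \times a^{n-1}$, so the powers can only all be $1$ when $a = 1$:

```python
# Sanity check: a^((n+1)-1) equals a * a^(n-1), so the successive
# powers of a can all equal 1 only when a == 1.
def check(a, max_n=5):
    for n in range(1, max_n + 1):
        assert a ** ((n + 1) - 1) == a * a ** (n - 1)

check(2)
check(3)

# For a = 2 the "theorem" already fails at n = 2: 2^(2-1) = 2, not 1.
assert 2 ** (2 - 1) != 1
assert 1 ** (2 - 1) == 1  # ...while a = 1 works for every n.
```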
The solution to the exercise reads:
The theorem has not been proved for $n=2$; in the second part of the proof, take $n=1$; we assume there that $a^{-1}=1$. If this condition is true (so that $a=1$), the theorem is indeed valid.
I want to compare it to another, correct, proof by induction in the example from here:
Adding up Odd Numbers $1+3+5+...+(2n−1)=n^2$
- Show it is true for $n=1$:
  $1=1^2$ is True
- Assume it is true for $n=k$:
  $1+3+5+...+(2k−1) = k^2$ is True (An assumption!)
- Now, prove it is true for $(k+1)$:
  $1 + 3 + 5 + ... + (2k−1) + (2(k+1)−1) = (k+1)^2$
We know that $1 + 3 + 5 + ... + (2k−1) = k^2$ (the assumption above),
so we can do a replacement for all but the last term:
$k^2 + (2(k+1)−1) = (k+1)^2$
Now expand all terms:
$k^2 + 2k + 2 − 1 = k^2 + 2k+1$
And simplify:
$k^2 + 2k + 1 = k^2 + 2k + 1$
They are the same! So it is true.
So: $1 + 3 + 5 + ... + (2(k+1)−1) = (k+1)^2$ is True
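The identity itself is easy to verify numerically; here is a short Python sketch (my own addition, not part of the quoted proof):

```python
# Verify 1 + 3 + 5 + ... + (2n - 1) == n^2 for the first hundred n.
for n in range(1, 101):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2
```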
Returning to the first example, I didn't quite understand the meaning of the phrase
The theorem has not been proved for $n=2$;
In the second (correct) proof there is no step proving that the statement holds for $n=2$, even though it uses the same technique. And Knuth himself proves the theorem $1+3+5+...+(2k−1) = k^2$ similarly to the proof above, where he too doesn't prove it separately for $k=2$. So what did he mean by this phrase? Does he mean that the stated rule already fails for some particular $n$, for example $n=2$, and therefore cannot hold in general? But plugging arbitrary numbers into the equation in place of $n$ doesn't seem to be an effective and systematic way to debunk such proofs. How would you explain what is wrong with the exercise?
To understand what's happening in the first example, it suffices to substitute actual values for $n$. We clearly have no argument with $a^0 = 1$, which corresponds to the case $n = 1$. But when $n = 2$, the inductive reasoning step looks like this:
$$a^{2-1} = \frac{a^{1-1} a^{1-1}}{a^{(1-1)-1}} = \frac{a^0 a^0}{a^{-1}}.$$ So the inductive step fails immediately: it relies on $a^{-1} = 1$, but the base case begins at $n = 1$, not $n = 0$, so nothing has been established about $a^{-1}$; indeed $a^{-1} \ne 1$ unless $a = 1$.
The second example does not suffer from this defect because its inductive step relies only on cases actually covered by the induction hypothesis: all of the algebraic steps are valid and supported by the assumption for $n = k$. In the first example this is not so, because the very first inductive step (from $n = 1$ to $n = 2$) already assumes that $a^{-1} = 1$.
Indeed, if $a = 1$, thereby satisfying this assumption, then it is true that $a^{n-1} = 1$ for all $n$. The way in which the inductive step fails actually illustrates precisely when the theorem is true.
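To see concretely where the induction breaks, here is a hedged Python sketch (my own illustration): the algebraic recurrence $a^n = a^{n-1} \cdot a^{n-1} / a^{n-2}$ always holds, but the quantity it divides by in the step from $n=1$ to $n=2$, namely $a^{(1-1)-1} = a^{-1}$, equals $1$ only when $a = 1$.

```python
# The recurrence used in the bogus inductive step is algebraically fine:
#   a^n = (a^(n-1) * a^(n-1)) / a^(n-2)
a = 2.0  # powers of 2 are exact in floating point
for n in range(2, 6):
    assert a ** n == (a ** (n - 1) * a ** (n - 1)) / a ** (n - 2)

# What fails is the hidden assumption a^((1-1)-1) = a^(-1) = 1
# when stepping from n = 1 to n = 2:
assert a ** -1 == 0.5      # not 1, so the induction collapses for a = 2
assert 1.0 ** -1 == 1.0    # only a = 1 satisfies the hidden assumption
```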