Classical examples of mathematical induction


What are some interesting, standard, classical or surprising proofs using induction?

Here is what I got so far:

  • There are some very standard sums, e.g., $\sum_{k=1}^n k^2$, $\sum_{k=1}^n (2k-1)$ and so on.
  • Fibonacci properties (there are several classical ones).
  • The Tower of Hanoi puzzle can be solved in $2^n-1$ steps.
  • A $2^n \times 2^n$-grid with one square missing can be covered with $L$-triominos.
  • Cayley's formula for labeled trees.
  • Every square can be subdivided into $n$ subsquares for any $n\geq 6$.
  • The Art gallery problem.
  • Number of regions determined by $n$ lines in general position.
  • Euler's formula $V-E+R=2$ for connected planar graphs.
  • Planar graphs are 5-colourable.
  • Properties of binomial coefficients (I do not count these as classical, since the proofs are very mundane and not really fun - IMHO, such identities should be proved by a bijective argument / combinatorial interpretation).

In other words, what would you expect to see in a book titled "Induction in Mathematics", aimed at freshmen/undergraduate students?

I really like the tilings with $L$-triominos problem - it is easy to state and has a neat proof that requires some creativity.

Neat examples from calculus and probability would be appreciated.


---

Every integer greater than $1$ is either prime or a product of primes. (Strong Induction)
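The strong-induction proof is itself a recursive algorithm. A minimal Python sketch (the `factor` name is my own, not part of the answer):

```python
def factor(n):
    """Return a list of primes whose product is n (for n > 1).

    This mirrors the strong-induction proof: if n has a proper
    divisor d, then d and n // d are both smaller than n, so by the
    strong inductive hypothesis each is a product of primes.
    """
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return factor(d) + factor(n // d)
    return [n]                 # no proper divisor: n itself is prime

assert factor(97) == [97]                      # a prime
assert sorted(factor(360)) == [2, 2, 2, 3, 3, 5]
```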

---

Euler Path Theorem (also simply called "Euler's Theorem", but the path version covers the existence of both Euler paths and Euler circuits). A connected graph has an Euler circuit if and only if it has no vertex of odd degree. It can be proven with strong induction, as explained here: http://mathonline.wikidot.com/euler-s-theorem

---

Calculating the $k$-th reduced homology of spheres, i.e. if $\mathbb{S}^n$, $n \in \omega$, denotes the $n$-sphere, then $$\begin{align*}\tilde{H}_k(\mathbb{S}^n) \cong \begin{cases} \mathbb{Z} & k = n,\\ 0 & k \neq n, \end{cases} \end{align*}$$ for all $k \in \omega$. The proof makes use of the Mayer-Vietoris sequence and induction on $n \in \omega$.

---

If you want to break the chocolate bar below into its $28$ individual pieces, what is the minimum number of breaks needed? You can only break one chunk of chocolate into two pieces at a time.

[image: a chocolate bar made up of 28 squares]

The answer is that it takes exactly $27$ breaks, no matter how you do it. This often surprises people: many think that breaking a chunk into roughly equal halves, or breaking along the longest break line, would somehow get things done faster. However, a quick and simple proof by (strong) induction shows that it has to be $n-1$ breaks for $n$ pieces.

Also, you can continue this problem with: Take the same chocolate bar as above, and once again you want to break it into its $28$ individual pieces. This time, however, you can score points every time you make a break: If you break a chunk of chocolate of size $n$ into two pieces of size $k$ and $n-k$, then you receive $k \cdot (n-k)$ points. Now: what is the maximum number of points you can get?

And again, you can prove by strong induction that no matter how you break up the bar, your total score in the end will be $\frac{n(n-1)}{2}$. Here is a proof by picture, knowing that $\frac{n(n-1)}{2}$ is the sum of all numbers $1$ through $n-1$ (i.e. triangular number $T_{n-1}$):

[image: a triangular array of $T_{n-1}$ dots split into a $k \times (n-k)$ rectangle and two smaller triangles of $T_{k-1}$ and $T_{n-k-1}$ dots]

This picture shows that $T_{n-1}=k(n-k)+T_{k-1}+T_{n-k-1}$. Using the inductive hypothesis that breaking apart a chunk of size $m<n$ always gives a score of $T_{m-1}$, no matter how it is broken up, we obtain that breaking the chunk with $n$ pieces gives $k(n-k)+T_{k-1}+T_{n-k-1}=T_{n-1}$ points, no matter where you place the first break.
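Both inductive claims can be checked mechanically. Here is a minimal sketch (not from the answer) that models a chunk only by its number of pieces, which is all that the break count and the score depend on; `break_bar` is a hypothetical helper name:

```python
import random

def break_bar(n, rng):
    """Break a chunk of n pieces into single pieces, choosing each
    break point at random.  Returns (number_of_breaks, total_score),
    where breaking a chunk of size n into k and n - k scores k*(n-k)."""
    if n == 1:
        return 0, 0
    k = rng.randint(1, n - 1)                   # arbitrary break point
    b1, s1 = break_bar(k, rng)
    b2, s2 = break_bar(n - k, rng)
    return 1 + b1 + b2, k * (n - k) + s1 + s2

rng = random.Random(0)
for _ in range(100):                            # many random break orders
    breaks, score = break_bar(28, rng)
    assert breaks == 27                         # always n - 1 breaks
    assert score == 28 * 27 // 2                # always T_{n-1} = 378
```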

---

Here are some well-known logic puzzles, for which you use induction to prove the correctness of the answer:

Pirate's Booty

Blue-eyed Islanders

---

A book "aimed for freshmen" should not miss the All horses are the same color example of a simple flaw that can occur in inductive reasoning.

---

If you're talking induction, there's no way around the big classic of Propositional Logic.

Starting with an inductive definition of a formula, your only technique in the new wonderland is structural induction over that definition.
It's also a nice application showing that induction isn't limited to things quantifiable with numbers, but is a technique that can be used on almost arbitrary inductively defined constructs.

Another nice showcase: one of the first theorems gained by structural induction is a way to use regular induction instead (e.g. induction over the number of propositional symbols, the length of the formation sequence, or the number of brackets in a formula). Yet another is definition by recursion, which for propositional logic states that a function only needs a set behavior for each rule by which formulas can be built in order to evaluate any formula: it must be defined for all atomic formulas, and, whenever it is already defined for formulas $p, q$, it must have a set behavior for $$\lnot p \quad,\quad p \land q \quad,\quad p \lor q \quad,\quad p \implies q \quad,\quad p \Leftrightarrow q.$$

---

Derangement - that is, a permutation in which no element is in its original place.

https://en.wikipedia.org/wiki/Derangement
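The classical inductive proof of the derangement count goes through the recurrence $D_n=(n-1)(D_{n-1}+D_{n-2})$. A small sketch comparing that recurrence against a brute-force count (the helper names are my own, not from the answer):

```python
from itertools import permutations

def derangements_recurrence(n):
    """D_n via the recurrence D_n = (n-1)(D_{n-1} + D_{n-2}),
    which is what the inductive argument establishes."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    d_prev2, d_prev1 = 1, 0                    # D_0, D_1
    for m in range(2, n + 1):
        d_prev2, d_prev1 = d_prev1, (m - 1) * (d_prev1 + d_prev2)
    return d_prev1

def derangements_brute(n):
    """Count permutations of range(n) with no fixed point directly."""
    return sum(all(p[i] != i for i in range(n))
               for p in permutations(range(n)))

for n in range(8):
    assert derangements_recurrence(n) == derangements_brute(n)
```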

---

A classic example from calculus is functional equations. For example, take $f:\mathbb{R} \to \mathbb{R}$ continuous such that

$$f(x+y) = f(x)f(y) \quad\forall x,y\in \mathbb{R}.$$

Then, provided $f$ is not identically zero, you can show that $f(x) = f(1)^x$. The way you do this is by first using induction to show it for all $x\in \mathbb{N}$, then for all $x\in\mathbb{Z}$ and finally for $x\in \mathbb{Q}$. At this point the result follows from continuity. I always thought that this is a nice surprise, since one would not expect induction to be of use in a problem on $\mathbb{R}$.

If you want to take this a bit further, there is the Jordan-von Neumann theorem, which tells us that if the parallelogram law $$\|x+y\|^2+\|x-y\|^2 = 2\|x\|^2+2\|y\|^2$$ holds in a (real) Banach space, then there exists an inner product given by the polarization identity $$\langle x,y\rangle := \frac{1}{4} \left(\|x+y\|^2 - \|x-y\|^2\right).$$ The proof is mostly straightforward and can be done with little more than the definitions; however, linearity needs a similar argument, using induction to show that $\langle\lambda x,y\rangle = \lambda \langle x,y\rangle$ for all $\lambda \in \mathbb{Q}$ before invoking the continuity of the norm.

---

Many induction proofs are nice(r) when they use well ordering, so starting from the smallest counterexample (if any). I would expect to see a thorough discussion in any book devoted to induction.

With well ordering you can tell the classic joke: every positive integer is interesting. If not, then the least boring integer is an interesting number. Be sure to distinguish between least (boring integer) and (least boring) integer.

See https://matheducators.stackexchange.com/questions/10021/why-are-induction-proofs-so-challenging-for-students/10057#10057

---

De Moivre's formula for complex exponentiation:

$$z^n=A^n\cos(n\phi)+jA^n\sin(n\phi)$$

where $A=|z|$, $\phi= \arg z$ and $j$ is the imaginary unit. It follows immediately from Euler's formula, but can be proven by induction on $n$ as well.
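A quick numerical spot check using Python's `cmath` (the concrete values $A=1.2$, $\phi=0.7$, $n=5$ are arbitrary choices of mine):

```python
import cmath

# De Moivre: z^n has modulus A^n and argument n*phi, so z^n equals
# A^n * (cos(n*phi) + j*sin(n*phi)).
A, phi, n = 1.2, 0.7, 5
z = A * cmath.exp(1j * phi)
lhs = z ** n
rhs = (A ** n) * (cmath.cos(n * phi) + 1j * cmath.sin(n * phi))
assert abs(lhs - rhs) < 1e-12
```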

---

Some of the most surprising proofs by induction are the ones in which we induct on the integers in an unusual order: not just going $1, 2, 3, \dots$.

The classical example of this is the proof of the AM-GM inequality. We prove $\frac{a+b}{2} \ge \sqrt{ab}$ as the base case, and use it to go from the $n$-variable case to the $2n$-variable case. By setting one variable equal to the mean of the rest, you can also go from the $n$-variable case to the $(n-1)$-variable case, which fills in the gaps.

There's another example which is not nearly as well-known, which is Laplace's proof of the Fundamental Theorem of Algebra. Here, we begin with a base case that all degree $n$ real polynomials have a complex root (actually a real root), when $n$ is odd. This is true by the intermediate value theorem. Then we prove the inductive step $\binom n2 \implies n$. (This works because when $n$ is even, $\binom n2$ is always divisible by one fewer power of $2$.)

The inductive step is as follows: given a real polynomial $f$ of degree $n$, we pass to a splitting field where it has roots $r_1, \dots, r_n$, and for every real $t$ define the polynomial $$g_t(x) = \prod_{i<j} (x - r_i - r_j - t r_i r_j).$$ This is a polynomial of degree $\binom n2$, and since its coefficients are symmetric polynomials in $r_1, \dots, r_n$, they are actually real. So the inductive hypothesis applies to $g_t$, and one of the roots $r_i + r_j + t r_i r_j$ is a complex number. Take more than $\binom n2$ values of $t$; by pigeonhole, some pair $(i,j)$ repeats for two distinct values of $t$, from which we conclude that $r_i + r_j$ and $r_i r_j$ are complex, and therefore $r_i$ and $r_j$ are both complex.

---

This problem has the easiest strong induction proof I have ever seen:

(Hong Kong Advanced Level Examination 1995)

Let $\{a_n\}$ be a sequence of non-negative integers such that

$$n\le\sum_{k=1}^na_k^2\le n+1+(-1)^n$$

for $n=1,2,3,\dots$.

Prove that $a_n=1$ for $n\ge 1$.

As $1\le a_1^2\le1$, $a_1=1$.

If $a_1=a_2=a_3=\cdots=a_m=1$, then

$\displaystyle m+1\le\sum_{k=1}^m 1+a_{m+1}^2\le m+2+(-1)^{m+1}$ $\implies$ $1\le a_{m+1}^2\le 2+(-1)^{m+1}$ $\implies$ $a_{m+1}=1$, since a non-negative integer with $1\le a_{m+1}^2\le 3$ must equal $1$.
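A brute-force search makes the uniqueness claim plausible before proving it. This sketch (the names are my own) checks the constraint over all short sequences of small non-negative integers:

```python
from itertools import product

def valid(seq):
    """Check the 1995 HKALE constraint n <= sum <= n+1+(-1)^n
    for every prefix of the sequence."""
    total = 0
    for n, a in enumerate(seq, start=1):
        total += a * a
        if not (n <= total <= n + 1 + (-1) ** n):
            return False
    return True

# Among all length-6 sequences with entries in {0, 1, 2, 3}, the
# all-ones sequence is the only one satisfying the constraint.
solutions = [s for s in product(range(4), repeat=6) if valid(s)]
assert solutions == [(1, 1, 1, 1, 1, 1)]
```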

---

Every positive rational less than $1$ can be written as a sum of distinct unit fractions.

Sketch of proof. Given $a/b$, divide $b$ by $a$ to get $$b=aq+r\quad\hbox{with}\quad 0\le r<a\ .$$ If $r=0$ then $a/b=1/q$ and we are done. Otherwise $$\frac ab=\frac1{q+1}+\frac{a-r}{b(q+1)}$$ and we do induction on the numerator. Since $0<a-r<a$ we can write $$\frac{a-r}{b(q+1)}=\frac1{q_1}+\frac1{q_2}+\cdots\ ,$$ and it is not hard to check that $q_k>q+1$ for all $k$. The base case is $a=1$ which is trivial.

Comment. The result is also true for rationals greater than $1$ but this requires a bit more messing around.
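The inductive step of the sketch is already an algorithm (the classical Fibonacci-Sylvester greedy method). A minimal sketch, with `egyptian` a hypothetical name of my own:

```python
from fractions import Fraction

def egyptian(a, b):
    """Write a/b (with 0 < a/b < 1) as a sum of distinct unit
    fractions, following the inductive step in the sketch: peel off
    1/(q+1) where b = a*q + r, then recurse on numerator a - r < a."""
    assert 0 < a < b
    denominators = []
    while a != 0:
        q, r = divmod(b, a)
        if r == 0:
            denominators.append(q)      # a/b is already 1/q
            break
        denominators.append(q + 1)      # a/b = 1/(q+1) + (a-r)/(b(q+1))
        a, b = a - r, b * (q + 1)
    return denominators

ds = egyptian(5, 7)
assert sum(Fraction(1, d) for d in ds) == Fraction(5, 7)
assert len(ds) == len(set(ds))          # denominators are distinct
```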

---

The pigeonhole principle.

Given two natural numbers $n>m>0$, there are no injective functions from a set with $n$ elements to a set with $m$ elements.

Or, sometimes interchangeably,

A subset of a finite set is finite.

If $A=\{a_0,\ldots,a_{n-1}\}$, where $n$ is a natural number, then for all $B\subseteq A$, there is some $m\leq n$ such that $B$ can be written as $\{b_0,\ldots,b_{m-1}\}$.


These are often brushed off as "an obvious observation", which they indeed are, but a formal proof requires induction. Back when I was a master's student, I was TA'ing the basic course in logic and set theory, and my advisor, who was one of the professors in the course, once explained to me that the importance of presenting these theorems thoroughly is also to educate the students: everything must be proved, no matter how trivial it looks.

---

See this post describing some example applications of induction, which include:

  1. Proof of Euclidean algorithm by structural induction. I would say that students must be taught structural induction and why it can be obtained from normal induction.

  2. Problems that can be easily solved via the extremal principle, which is equivalent to induction but sometimes much easier to see/use.

  3. Proof of AM-GM by a smoothing procedure. Many optimization theorems can be proven by a suitable smoothing/greedy procedure, and the proof will technically require induction.
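Point 1 can be illustrated concretely. Correctness of the loop below is exactly an induction on the second argument; this is a generic sketch, not taken from the linked post:

```python
def gcd(a, b):
    """Euclid's algorithm.  Correctness is an induction on b:
    the base case is gcd(a, 0) = a, and the inductive step uses the
    fact that the common divisors of (a, b) and (b, a % b) coincide,
    with a % b strictly smaller than b."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21
assert gcd(17, 5) == 1
```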

---

The connection between the number of nodes $n$ and the number of leaves $\ell$ in a full binary tree (one in which every internal node has exactly two children) is given by $n = 2\ell - 1$.

$\texttt{<insert a nice picture of a tree>}$
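The identity can be sanity-checked by generating full binary trees at random. In this sketch (helper names are my own), a tree is encoded as nested tuples, with `None` for a leaf:

```python
import random

def random_full_tree(depth, rng):
    """A random full binary tree: a leaf is None, an internal node
    is a pair of subtrees, so every internal node has exactly two
    children -- the hypothesis the formula needs."""
    if depth == 0 or rng.random() < 0.3:
        return None
    return (random_full_tree(depth - 1, rng),
            random_full_tree(depth - 1, rng))

def count(tree):
    """Return (total_nodes, leaves)."""
    if tree is None:
        return 1, 1                     # a single leaf
    nl, ll = count(tree[0])
    nr, lr = count(tree[1])
    return 1 + nl + nr, ll + lr

rng = random.Random(0)
for _ in range(100):
    n, leaves = count(random_full_tree(6, rng))
    assert n == 2 * leaves - 1          # the inductive claim
```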

---

Unusual base case example. One where the base case isn't $0$ or $1$ is nice to have too, e.g. showing that the interior angles of every $n$-gon sum to $180(n-2)$ degrees. Your base case is a triangle, so $n=3$ is where you start, and it is a good reminder to students to think carefully even about the base case.

---

Addition of natural numbers is commutative. This requires that you have a successor operation $S$, and define addition of natural numbers as follows:

  • First, for any natural number $n$ we define a function $\operatorname{Add}_n$ by the recursive definition $\operatorname{Add}_n(0) = n$, $\operatorname{Add}_n(Sm) = S(\operatorname{Add}_n(m))$.
  • Next, for any two natural numbers $m, n$ we define a binary operation $\operatorname{Add}(m, n) = \operatorname{Add}_m(n)$.

At first glance it's not at all obvious that $\operatorname{Add}_m(n)=\operatorname{Add}_n(m)$. The proof is a double induction on both variables, making it an especially rich example.

Once this one is done, associativity can be tackled next -- that one is a triple induction on three variables, so it's easy to get lost! And of course multiplication can be handled similarly.
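The double induction can be carried out formally. Here is a sketch in Lean 4, using a hand-rolled `N` (rather than the built-in naturals) so that the recursion matches the definition above exactly; lemma names are my own:

```lean
-- Naturals from a successor operation, with addition defined by
-- recursion on the second argument, as in the answer above.
inductive N where
  | zero : N
  | succ : N → N

def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)

-- Lemma 1: zero is also a left identity (induction on n).
theorem zero_add (n : N) : add N.zero n = n := by
  induction n with
  | zero => rfl
  | succ n ih => simp [add, ih]

-- Lemma 2: succ moves past the left argument (induction on m).
theorem succ_add (n m : N) : add (N.succ n) m = N.succ (add n m) := by
  induction m with
  | zero => rfl
  | succ m ih => simp [add, ih]

-- Commutativity, by induction on m using both lemmas: a double
-- induction in total, just as the answer describes.
theorem add_comm (n m : N) : add n m = add m n := by
  induction m with
  | zero => simp [add, zero_add]
  | succ m ih => simp [add, succ_add, ih]
```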