$2=1$ Paradoxes repository


I really like to use paradoxes in my math classes in order to spark my students' interest. Concretely, over the last few weeks I have been proposing paradoxes that lead to the conclusion that 2 = 1. After a week, I explain the solution on the blackboard and propose a new one. For example, I posted the following one some months ago: What is wrong with the sum of these two series? I would like to grow my repertoire of fake proofs, so I would be glad to read your proposals and discuss them! My students are 18 years old, so don't be too cruel :) Here is my own contribution:

\begin{equation} y(x) = \tan x \end{equation}
\begin{equation} y^{\prime} = \frac{1}{\cos^{2} x} \end{equation}
\begin{equation} y^{\prime \prime} = \frac{2 \sin x}{\cos^{3} x} \end{equation}
This can be rewritten as:
\begin{equation} y^{\prime \prime} = \frac{2 \sin x}{\cos^{3} x} = \frac{2 \sin x}{\cos x \cdot \cos^{2} x} = 2 \tan x \cdot \frac{1}{\cos^{2} x} = 2 y y^{\prime} = \left( y^{2} \right)^{\prime} \end{equation}
Integrating both sides of the equation $y^{\prime \prime} = \left( y^{2} \right)^{\prime}$:
\begin{equation} y^{\prime} = y^{2} \end{equation}
And therefore
\begin{equation} \frac{1}{\cos^{2} x} = \tan^{2} x \end{equation}
Now, evaluating this equation at $x = \pi / 4$:
\begin{equation} \frac{1}{(\sqrt{2}/2)^{2}} = 1^{2} \end{equation}
\begin{equation} 2 = 1 \end{equation}
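A quick numerical sketch, for anyone who wants a hint before the blackboard session (the sample points are arbitrary): compare the two sides of the equation obtained after integrating.

```python
import math

# Hint: evaluate 1/cos^2(x) - tan^2(x) at a few points.
for x in (0.3, 0.7, 1.1, math.pi / 4):
    print(1 / math.cos(x) ** 2 - math.tan(x) ** 2)  # always 1.0 (up to rounding)
```

The difference is constant, which points at the step where a constant of integration went missing.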

There are 15 answers below.

Answer:

One of my favorites, and very simple to understand for most algebra students:

$2 = 1+1$

$2 = 1+\sqrt{1}$

$2 = 1+\sqrt{(-1)(-1)}$

$2 =^* 1+\sqrt{-1}\sqrt{-1}$

$2 = 1+i\cdot i$

$2 = 1+i^2$

$2 = 1+(-1)$

$2 = 0$

$^*$ The wrong step

Divide both sides by $2$ and add $1$, and you get $2=1$, as desired.


To be thorough, the mistake occurs in the fourth line where the square root is split. In reality, the rule is:

$\sqrt{ab}=\sqrt{a}\sqrt{b}$ when either $a\geq0$ or $b\geq0$

$\sqrt{ab}=-\sqrt{a}\sqrt{b}$ when $a<0$ and $b<0$

So you would have an equality if you follow that rule, but many students aren't going to catch the error.
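A quick check of the starred step, using Python's cmath purely as an illustration (the library choice is incidental):

```python
import cmath

a = b = -1
print(cmath.sqrt(a * b))              # sqrt((-1)(-1)) = sqrt(1) = (1+0j)
print(cmath.sqrt(a) * cmath.sqrt(b))  # sqrt(-1) * sqrt(-1) = i * i = (-1+0j)
```

Splitting the root flips the sign, exactly as the rule above predicts.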

Answer:

Does this count or is it too obvious where things go wrong, considering there is only one step?

Define $f_n(x) = n \cdot 1_{0 < x \le \frac 1n}$

Clearly for every $x$, $$\lim_{n \to \infty} f_n(x) = 0$$

Therefore

$$\lim_{n \to \infty} \int_0^1 f_n(x)\ dx = \int_0^1 0 \ dx = 0$$

But $\int_0^1 f_n(x) \ dx = 1$ for every $n$; hence it is proved that

$$1 = \lim_{n \to \infty} 1 = 0$$
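A small numerical illustration (a sketch with the indicator written out by hand): for any fixed $x$ the values $f_n(x)$ are eventually $0$, yet every integral over $[0,1]$ equals $1$.

```python
def f(n, x):
    # f_n(x) = n on (0, 1/n], and 0 elsewhere
    return n if 0 < x <= 1 / n else 0

x = 0.01
print([f(n, x) for n in (10, 50, 100, 200, 1000)])  # [10, 50, 100, 0, 0]: tends to 0
print([n * (1 / n) for n in (10, 100, 1000)])       # the integrals n * (1/n): all 1.0
```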

Answer:

$$0 = (1-1)+(1-1)+\cdots = 1 -(1-1)-(1-1)-\cdots = 1 \implies 2=1$$
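For what it's worth, a quick look at the partial sums (a small sketch) shows why the regrouping is not allowed: without the parentheses the series has no value at all.

```python
partial_sums = []
s = 0
for k in range(12):
    s += (-1) ** k            # the terms 1, -1, 1, -1, ...
    partial_sums.append(s)
print(partial_sums)            # [1, 0, 1, 0, ...]: no limit, so the series diverges
```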

Answer:

$(1 - x)(1 + x + x^2 + \cdots) =$

$(1 + x + x^2 + \cdots) + (-x - x^2 - x^3 - \cdots) = 1 + (x - x) + (x^2 - x^2) + \cdots = 1$, so

$1 + x + x^2 + \cdots = \frac{1}{1 - x}$

Let $x = -1$:

$1 - 1 + 1 - 1 + 1 - 1 + \cdots = \frac{1}{1 -(-1)} = \frac 12$

but clearly $1 - 1 + 1 - 1 + \cdots = (1-1) + (1-1) + \cdots = 0$.

So $0 = \frac 12$ (and also $1$, and $-1$).
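A quick check of where the formula actually applies (a small sketch): the identity $1+x+x^2+\cdots=\frac{1}{1-x}$ matches the partial sums only for $|x|<1$; at $x=-1$ the partial sums merely oscillate.

```python
def partial(x, N):
    # partial sum 1 + x + x^2 + ... + x^(N-1)
    return sum(x ** k for k in range(N))

print(partial(0.5, 60), 1 / (1 - 0.5))         # 2.0 vs 2.0: fine for |x| < 1
print([partial(-1, N) for N in range(1, 8)])   # [1, 0, 1, 0, 1, 0, 1]: no limit at x = -1
```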

Answer:

Another enjoyable "paradox": We first denote $$S:=\sum_{n\in\mathbb N}\dfrac{(-1)^{n+1}}{n}$$ The fact that $0\neq S\in\mathbb R$ can be established using elementary tools.
We then write:
$S=\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$
$2S = 2\left(\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\cdots\right)=\frac{2}{1}-\frac{2}{2}+\frac{2}{3}-\frac{2}{4}+\frac{2}{5}-\frac{2}{6}+\frac{2}{7}-\cdots=$
$=\color{red}{\frac{2}{1}}\color{red}{-\frac{2}{2}}\color{green}{+\frac{2}{3}}\color{blue}{-\frac{2}{4}}+\frac{2}{5}\color{green}{-\frac{2}{6}}+\frac{2}{7}-\cdots=\color{red}{\frac{1}{1}}\color{blue}{-\frac{1}{2}}\color{green}{+\frac{1}{3}}-\cdots=S$
And at last: $$2S = S \Longrightarrow 2=1$$
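To see numerically what the colored step does (a sketch, assuming the pairing indicated by the colors: $+\frac{2}{2k-1}$ is combined with $-\frac{2}{4k-2}$, while $-\frac{2}{4k}$ is kept as $-\frac{1}{2k}$): taking the terms of $2S$ in that order is a rearrangement of a conditionally convergent series, and it changes the value from $2\ln 2$ to $\ln 2$.

```python
import math

def two_S(N):
    # the terms of 2S in their original order: 2 - 1 + 2/3 - 1/2 + ...
    return sum(2 * (-1) ** (k + 1) / k for k in range(1, N + 1))

def two_S_recombined(N):
    # the same terms, taken in the order suggested by the colors:
    # +2/(2k-1), then -2/(4k-2), then -2/(4k)
    return sum(2 / (2 * k - 1) - 2 / (4 * k - 2) - 2 / (4 * k) for k in range(1, N + 1))

print(two_S(10**5), 2 * math.log(2))          # ~1.386
print(two_S_recombined(10**5), math.log(2))   # ~0.693: reordering halved the sum
```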

Answer:

Here is a simple one:

$$ x=y\\ x^2=xy\\ 2x^2=x^2+xy\\ 2x^2-2xy=x^2-xy\\ 2(x^2-xy)=1(x^2-xy)\\ 2=1 $$

The error is quite obviously division by zero (from the 5th to 6th step).

Answer:

Here's one I just made up. $\log_{b} b^x = x$. And $\log_{b} 1 = 0$.

Let $b = 1$ and $x = 1$, so that $b^x = 1$. Then $0 = \log_b 1 = \log_b b^x = x = 1$.
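The catch, of course, is that $1$ is not a valid base for a logarithm: $1^x = 1$ for every $x$, so $\log_1$ is not defined. A tiny check, purely as an illustration of how a standard library reacts (the exact exception raised is an implementation detail):

```python
import math

try:
    print(math.log(2, 1))                   # "log base 1 of 2"
except (ValueError, ZeroDivisionError) as e:
    print("not defined:", e)                # the base b = 1 is rejected
```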

Answer:

Why not show all numbers are equal to 1:

For any $z\in\mathbb R$, $$ \sum_{n=-\infty}^{\infty}z^{n}=z\sum_{n=-\infty}^{\infty}z^{n-1}=z\sum_{n=-\infty}^{\infty}z^{n}. $$ So $$ \sum_{n=-\infty}^{\infty}z^{n}=z\sum_{n=-\infty}^{\infty}z^{n}\Rightarrow 1=z. $$
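A brief numerical look at the object being cancelled (a sketch using the symmetric partial sums $\sum_{n=-N}^{N}z^n$ as a stand-in for the two-sided series): the sum diverges for every $z\neq 0$ (and is undefined at $z=0$), since one of the two tails always blows up, so it cannot be divided out of both sides.

```python
def sym_partial(z, N):
    # symmetric partial sum of z^n for n = -N .. N
    return sum(z ** n for n in range(-N, N + 1))

for z in (0.5, 2.0):
    print(z, [round(sym_partial(z, N), 1) for N in (2, 5, 10, 20)])
# For |z| < 1 the negative-power tail explodes, for |z| > 1 the positive-power one does;
# either way the two-sided sum has no finite value to cancel.
```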

Answer:

In the same vein as your example, let's integrate $\frac1x$ by parts.

Let $I = \int\frac1x\ \textrm dx$, and set $u = \frac1x, \textrm dv = \textrm dx$. Then:

$$ \begin{align} I = \int u\ \textrm dv &= uv - \int v\ \textrm du \\ &= \frac1x\cdot x - \int x\left(\frac{-1}{x^2}\right) \textrm dx \\ &= 1 + \int\frac1x\ \textrm dx \\ &= 1 + I \end{align} $$

Therefore $0 = 1$, so clearly $1 = 2$.
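One way to see where the contradiction evaporates (a short check, redoing the computation with definite limits): the boundary term is then evaluated at both ends,

$$\int_a^b \frac1x\,\mathrm dx = \left[\frac1x\cdot x\right]_a^b - \int_a^b x\left(\frac{-1}{x^2}\right)\mathrm dx = (1-1) + \int_a^b \frac1x\,\mathrm dx,$$

which is just $0=0$. With indefinite integrals, $I = 1 + I$ only says that two antiderivatives of $\frac1x$ differ by a constant, and the constant of integration absorbs the $1$.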

Answer:

Here is one of my favorites.

Proof that $1=0$

Let's consider for real $x$ the function $f(x)=xe^{-x^2}$. Note that the following integral representation of $f$ is valid (substitute $u=x^2/y$): \begin{align*} \int_{0}^{1}\frac{x^3}{y^2}e^{-x^2/y}\,dy =\left[xe^{-x^2/y}\right]_0^1 =xe^{-x^2} \end{align*}

We obtain for all $x$ the following relationship

\begin{align*} e^{-x^2}(1-2x^2)&=\frac{d}{dx}\left(xe^{-x^2}\right)\\ &=\frac{d}{dx}\int_0^1\frac{x^3}{y^2}e^{-x^2/y}\,dy\\ &=\int_0^1\frac{\partial}{\partial x}\left(\frac{x^3}{y^2}e^{-x^2/y}\right)\,dy\\ &=\int_0^1e^{-x^2/y}\left(\frac{3x^2}{y^2}-\frac{2x^4}{y^3}\right)\,dy \end{align*}

and observe that at $x=0$ the left-hand side is one while the right-hand side is zero. \begin{align*} \text{LHS: }\qquad e^0(1-0)&=1\\ \text{RHS: }\qquad \int_0^1 0\,dy&=0 \end{align*}

Note: This example can be found in Counterexamples in Analysis by B.R. Gelbaum and J.M.H. Olmsted.
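A numerical cross-check of the two sides (a rough sketch using a midpoint Riemann sum; the sample point $x=0.1$ and the grid size are arbitrary choices): away from $x=0$ the interchange is legitimate and the integral really equals $e^{-x^2}(1-2x^2)$; the failure is confined to $x=0$, where the integrand vanishes identically.

```python
import math

x = 0.1                          # any x != 0 works; the failure is only at x = 0

def integrand(y):
    # the partial derivative under the integral sign, written out
    return math.exp(-x * x / y) * (3 * x**2 / y**2 - 2 * x**4 / y**3)

N = 200_000                      # midpoint rule on (0, 1]
h = 1.0 / N
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))
print(approx)                              # ~0.9702
print(math.exp(-x**2) * (1 - 2 * x**2))    # ~0.9702: both sides agree for x != 0
# At x = 0 the integrand is identically 0, so the right-hand side drops to 0 while
# the left-hand side is 1: the interchange of d/dx and the integral fails only there.
```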

Answer:

Let $x=(x_{ij})$ be the infinite matrix (where omitted entries are $0$), $$x = \begin{pmatrix}1 \\ -1 & 1 \\ &-1 & 1 \\ &&-1 & 1\\ &&&\ddots& \end{pmatrix}$$ i.e. $x_{ij} = \Bbb 1_{i=j} - \Bbb 1_{i=j+1}$. Here $i$ is the row, $j$ is the column. Then $$\sum_{ij} x_{ij}=\sum_j\left(\sum_i x_{ij}\right) = \sum_j 0 = 0,$$ since every column sums to $0$, while also $$\sum_{ij} x_{ij}=\sum_i\left(\sum_j x_{ij}\right) = \sum_i \Bbb 1_{i=1} = 1,$$ since only the first row has a nonzero sum.

So $0=1$, and hence $2=1$.


For the interested, this is a failure of the hypotheses of Fubini's theorem: the double series is not absolutely convergent, so the two iterated sums need not agree.
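A small illustration of the two iterated sums (a sketch; the row and column sums are computed exactly, since each row or column has at most two nonzero entries):

```python
def entry(i, j):
    # x_ij = 1 if i == j, -1 if i == j + 1, 0 otherwise (rows and columns start at 1)
    return (i == j) - (i == j + 1)

SIZE = 200
row_sums = [sum(entry(i, j) for j in range(1, i + 1)) for i in range(1, SIZE + 1)]
col_sums = [sum(entry(i, j) for i in range(j, j + 2)) for j in range(1, SIZE + 1)]

print(sum(row_sums))   # 1: only the first row survives
print(sum(col_sums))   # 0: every column cancels exactly
```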

Answer:

$x = \underbrace{1 + 1 + 1 + \ldots + 1}_{x \textrm{ times}} = \underbrace{\frac{\mathrm{d}}{\mathrm{d}x}\left(x\right) + \frac{\mathrm{d}}{\mathrm{d}x}\left(x\right) + \frac{\mathrm{d}}{\mathrm{d}x}\left(x\right) + \ldots + \frac{\mathrm{d}}{\mathrm{d}x}\left(x\right)}_{x \textrm{ times}} = \frac{\mathrm{d}}{\mathrm{d}x}\underbrace{\left(x + x + x + \ldots + x\right)}_{x \textrm{ times}} = \frac{\mathrm{d}}{\mathrm{d}x}\left(x^2\right) = 2x$
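For what it's worth, one way to phrase the gap (a sketch): the left-hand sum is really $x \cdot x$, with the number of summands depending on $x$ as well, so the product rule gives

$$\frac{\mathrm d}{\mathrm dx}\Big(\underbrace{x + x + \ldots + x}_{x \textrm{ times}}\Big) = \frac{\mathrm d}{\mathrm dx}\left(x \cdot x\right) = 1\cdot x + x\cdot 1 = 2x;$$

the term-by-term differentiation above only accounts for one of the two factors (and the underbrace only makes sense for positive integers $x$ in the first place).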

Answer:

Although it's not quite what you're looking for, the Banach-Tarski paradox shows that, in a certain sense, $1$ does equal $2$:

Given a solid ball in $\mathbb R^3$, there is a way to decompose the ball into $5$ disjoint sets, move them by rigid motions, and obtain two solid balls of the same radius.

The catch is that these are non-measurable sets (and, of course, you need the Axiom of Choice).

Answer:

Let $U_n$ be the probability measure on $[0,1]$ that is uniform on the finite set $\{0,\frac{1}{n},\dots,\frac{n-1}{n},1\}$, i.e. $$ U_n\left( A \right) := \frac{1}{n+1}\left| \left\{ k\in\{0,\dots,n\} : \tfrac{k}{n} \in A\right\}\right|$$

Of course, as you send $n\to\infty$, $U_n$ tends (weakly) to the continuous uniform measure $U_{[0,1]}$ on $[0,1]$: $$U_n\to U_{[0,1]}$$ Note that if $Q=\Bbb Q\cap [0,1]$, then $Q$ is measurable and $U_{[0,1]}$-null, so $$1 = U_n(Q) \to U_{[0,1]}(Q) = 0$$ Hence $2=1$.
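A quick numerical illustration (a sketch; the helper functions are mine): $U_n$ of an interval does tend to its length, but every grid point $k/n$ is rational, so $U_n(Q)$ is stuck at $1$; weak convergence only controls sets whose boundary the limit measure does not charge, and $Q$ is not such a set.

```python
from fractions import Fraction

def U(n, A):
    # discrete uniform probability measure on {0, 1/n, ..., 1}, applied to the set A
    points = [Fraction(k, n) for k in range(n + 1)]
    return Fraction(sum(1 for p in points if A(p)), n + 1)

def rationals(p):          # every grid point k/n is rational
    return True

def left_half(p):          # the interval [0, 1/2]
    return p <= Fraction(1, 2)

for n in (10, 100, 1000):
    print(n, U(n, rationals), float(U(n, left_half)))
# U_n(Q) is always 1, while U_n([0, 1/2]) -> 1/2.
```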

Answer:

I always liked the one that "proves" $1+2+3+4+\cdots=-\frac{1}{12}$


https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF