"It looks straightforward, but actually it isn't"


In a previous topic, I asked about proofs of statements that are simple but incorrect.

Here, I ask about statements which seem, at first glance, straightforward, but whose proofs turn out to be much harder than they looked. So I expect the answers to contain:

  1. the statement;
  2. why it looks easy to prove;
  3. why it actually isn't.


An example is Prokhorov's theorem. Let me recall the context.

We have a metric space $(X,d)$ endowed with its Borel $\sigma$-algebra, and a collection of probability measures, say $\mathcal M$. We say that such a collection is tight if for each $\varepsilon>0$ one can find a compact set $K$ such that for all $\mu\in\mathcal M$, we have $\mu(K)>1-\varepsilon$.

  1. Prokhorov's theorem states that each tight sequence $(\mu_n)$ admits a subsequence $(\mu_{n_k})$ which converges in law to some probability measure $\mu$ (that is, $\limsup_k\mu_{n_k}(F)\leqslant\mu(F)$ for all closed sets $F$).

  2. At first glance, it seems to be just a corollary of the Riesz representation theorem, because we can characterize linear functionals on the space of continuous functions.

  3. But it's not so easy. For example, we have to reduce to the case where $X$ is a countable union of compact sets, and check a consistency property (actually, the Kolmogorov extension theorem is used). Billingsley's book Convergence of Probability Measures gives a complete proof, and Khoshnevisan's Multiparameter Processes asks the reader to fill in the details as an exercise.
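To make the tightness condition concrete, here is a small numerical sketch (the function names are mine): the family consisting of a single standard Gaussian on $\mathbb R$ is tight, while the point masses $\delta_n$ are not, since they eventually escape every compact set $[-R,R]$.

```python
import math

def gaussian_mass(R):
    """Mass that the standard Gaussian N(0,1) puts on [-R, R]."""
    return math.erf(R / math.sqrt(2))

def delta_mass(n, R):
    """Mass that the point mass delta_n (at the point n) puts on [-R, R]."""
    return 1.0 if abs(n) <= R else 0.0

# A single compact set [-5, 5] captures almost all the Gaussian mass:
print(gaussian_mass(5))  # about 0.9999994

# But for the family {delta_n : n >= 1}, no single [-R, R] works --
# whatever R we pick, some member of the family puts zero mass on it:
R = 5
print(min(delta_mass(n, R) for n in range(1, 20)))  # 0.0
```

This is exactly the failure mode that tightness rules out: mass sliding off to infinity along the sequence.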


This is not exactly an answer to your question, but I think it is somewhat close.

  1. "If all the eigenvalues of a $0-1$ matrix $A$ are positive, they are all equal to $1$."
  2. This statement is easy to prove: simply invoke A.M.$\ge$G.M. for the trace and determinant of $A$. Since the $\lambda_i$ are positive, $\det A=\prod\lambda_i$ is a positive integer, hence $\ge 1$; meanwhile $\operatorname{tr} A\le n$ because the diagonal entries are $0$ or $1$. So A.M.$\ge$G.M. holds with equality, forcing all $\lambda_i$ to be equal, and equal to $1$.
  3. However, as far as I know, this is the only known proof. Without using A.M.$\ge$G.M., the problem becomes surprisingly difficult. I think the reason is that, apart from the largest (in modulus) eigenvalue, we know very little about the smaller eigenvalues of an (entrywise) nonnegative matrix, let alone a $0-1$ matrix.
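As a sanity check (not a proof), the statement can be verified exhaustively for small sizes; the function name and tolerances in this sketch are mine.

```python
import itertools
import numpy as np

def claim_holds(n, tol=1e-6):
    """For every n-by-n 0-1 matrix A, check that if all eigenvalues of A
    are (real and) positive, then they all equal 1."""
    for bits in itertools.product([0, 1], repeat=n * n):
        A = np.array(bits, dtype=float).reshape(n, n)
        eigs = np.linalg.eigvals(A)
        all_positive = (np.all(eigs.real > tol)
                        and np.all(np.abs(eigs.imag) < tol))
        # If the hypothesis holds, every eigenvalue must be (close to) 1.
        if all_positive and not np.allclose(eigs, 1.0, atol=1e-4):
            return False
    return True

print(claim_holds(2), claim_holds(3))  # True True
```

The enumeration grows as $2^{n^2}$, so this is only feasible for very small $n$ — which is, of course, part of the point of the answer above.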

Continuing my comment: the Jordan curve theorem is, perhaps, one of the most well-known, easy-to-grasp, yet very hard to prove theorems. We could state it as:

For any closed, non-self-intersecting plane curve (i.e., a continuous and injective map from the circle $\,S^1\,$ to the real plane), its complement in the plane has exactly two connected components: one bounded and the other one unbounded.

Why does it look easy? Because for most curves we can think of satisfying the above conditions, the claim holds "trivially".

Why isn't its proof easy? Because, in the general case, it requires advanced machinery: homotopy groups, Hopf maps, covering maps, lifting properties for maps, etc. (I'm only describing some aspects of one proof; there might be, and almost surely are, others.)


Below I present a proof which is probably overkill; however, whenever I tried to simplify things, I ended up finding it necessary to use some relatively deep results.

If anyone can provide a more elementary proof, I'll either delete this answer or add the proof to the answer, depending on how easy it is.

First, a definition which I didn't explicitly use, but which is useful for whoever tries to give a more elementary proof.

Definition: A set is said to be finite if it is equinumerous with $[k]\;(=\{x\in \omega :x<k\})$, for some $k\in \omega$.

Statement: Let $A,B$ be finite sets. If $|B|\leq |A| \land A\subseteq B$, then $A=B$.

Proof: Suppose that $|B|\leq |A| \land A\subseteq B$ and $A\neq B$. Then $A\subset B$, and since the sets are finite, $|A|<|B|$ (this uses the fact that a set is finite if, and only if, it isn't Dedekind-infinite, which is something way too deep considering how strongly the statement is assessed as intuitively true). And this contradicts $|B|\leq |A|$ by the trichotomy of cardinal numbers (which isn't exactly shallow either).


The following proof was suggested (and typed) by T. Verron. Even though it is elementary, it still serves my purpose: the gap between how easy it is to believe the statement and how hard it is to prove is huge.

A possible elementary proof: Let $m = |A|$ and $n= |B|$. Let $f$ be a bijection from $A$ to $[m]$, and let $g$ be a bijection from $B$ to $[n]$. We define a map $h : B \to [n]$ whose restriction to $A$ is $f$: let $\sigma$ be a permutation of $B$ such that $\sigma(g^{-1}([m])) = f^{-1}([m])=A$, and define $h$ by

$h(x) = \begin{cases}f(x) & \text{if}\;\; x \in A \\ g \circ \sigma^{-1}(x) & \text{else}\end{cases}$

Then $h$ is a bijection from $B$ to $[n]$, and $h \circ f^{-1}$ is an injection from $[m]$ to $[n]$. That shows that $m \leq n$, and thus, by assumption, that $m = n$. Now $f^{-1} \circ h$ is a bijection from $B$ to $A$, and by construction it is the identity on $A$; hence $A=B$.
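The construction of $\sigma$ and $h$ can be made concrete. Here is a small sketch (function and variable names are mine) which, given finite $A\subseteq B$ with bijections $f\colon A\to[m]$ and $g\colon B\to[n]$ represented as dictionaries, builds the bijection $h$ that restricts to $f$ on $A$:

```python
def extend_bijection(A, B, f, g):
    """Given A a subset of B, a bijection f: A -> range(m) and a bijection
    g: B -> range(n) (both as dicts), return h: B -> range(n) whose
    restriction to A is f."""
    m = len(A)
    f_inv = {v: k for k, v in f.items()}
    g_inv = {v: k for k, v in g.items()}
    # sigma is a permutation of B carrying g^{-1}([m]) onto A = f^{-1}([m]):
    pre = [g_inv[i] for i in range(m)]          # g^{-1}([m])
    img = [f_inv[i] for i in range(m)]          # A, in the matching order
    rest_src = [x for x in B if x not in pre]   # the rest of B, paired up
    rest_dst = [x for x in B if x not in img]   # with the rest of B
    sigma = dict(zip(pre + rest_src, img + rest_dst))
    sigma_inv = {v: k for k, v in sigma.items()}
    # h agrees with f on A, and uses g relabelled by sigma elsewhere:
    return {x: f[x] if x in f else g[sigma_inv[x]] for x in B}

A = ['a', 'b']
B = ['a', 'b', 'c']
f = {'a': 0, 'b': 1}
g = {'b': 0, 'c': 1, 'a': 2}
print(extend_bijection(A, B, f, g))  # {'a': 0, 'b': 1, 'c': 2}
```

For $x\notin A$, $\sigma^{-1}(x)$ lies outside $g^{-1}([m])$, so $g(\sigma^{-1}(x))\geq m$; this is why the two branches of $h$ never collide.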


I remember a lecture in measure theory, when we studied the monotone convergence theorem.

One of my colleagues suggested approximating the functions involved by simple ones, so that the result would reduce to showing it for such functions.

We can indeed approximate, in $L^1$, a non-decreasing sequence of integrable functions by a non-decreasing sequence of simple functions. But proving the result for simple functions is not so easy, because the number of characteristic functions needed may vary with $n$. In particular, it seems it's not an immediate consequence of the fact that $\mu(A_n)\to \mu(A)$ when $A_n\uparrow A$.
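For reference, the theorem under discussion, in its standard form: if $0\leqslant f_1\leqslant f_2\leqslant\cdots$ and $f_n\to f$ pointwise, then

$$\int f_n\,\mathrm d\mu \longrightarrow \int f\,\mathrm d\mu.$$

The set statement $\mu(A_n)\to\mu(A)$ for $A_n\uparrow A$ is precisely the special case $f_n=\mathbf 1_{A_n}$, which is why it is tempting, but not sufficient, to reduce to it.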


In this thread, one is asked to show that, given a continuous function $f$ on $[0,+\infty)$ for which $\lim_{x\to +\infty}f(x)=0$, there is a sequence of polynomials $(P_n,n\geqslant 1)$ such that $\sup_{x\geqslant 0}|f(x)-e^{-x}P_n(x)|<n^{-1}$.

The result seems to be a straightforward application of the Stone-Weierstrass theorem (one might think it suffices to reduce to a bounded interval using $f(x)\to 0$ at infinity), but that argument is circular.

We have to transform $[0,+\infty)$ into a compact interval by a continuous bijection, for instance by adjoining the point $+\infty$ and using the homeomorphism $x\mapsto e^{-x}$ from $[0,+\infty]$ onto $[0,1]$.


Property: Let $f : (0,+ \infty) \to \mathbb{R}$ be a continuous function. If $f(nx) \underset{n\to + \infty}{\longrightarrow} + \infty$ for all $x>0$, then $\lim\limits_{x \to + \infty} f(x)=+ \infty$.

Although the property is intuitively visual, the only proof I know uses the Baire category theorem, a rather abstract tool.
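For the curious, here is a sketch of that Baire-category argument (standard, with details omitted). Fix $M>0$ and set

$$F_N=\{x>0 : f(kx)\geqslant M \text{ for all } k\geqslant N\}.$$

Each $F_N$ is closed since $f$ is continuous, and the hypothesis gives $\bigcup_N F_N=(0,+\infty)$. By the Baire category theorem, some $F_N$ contains an interval $(a,b)$. Then $f\geqslant M$ on every $(ka,kb)$ with $k\geqslant N$, and these intervals overlap as soon as $k>a/(b-a)$, so they cover a whole half-line; hence $f(x)\geqslant M$ for all large $x$. Since $M$ was arbitrary, $f(x)\to+\infty$.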


Theorem: 1 + 1 = 2

Proof:

Many pages of...

Whitehead, Alfred North, and Bertrand Russell (1910, 1912, 1913).
Principia Mathematica, 3 vols, Cambridge: Cambridge University Press.
Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3).

Note that "The above proposition is occasionally useful."
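For contrast, in a modern proof assistant the corresponding identity for natural-number literals is immediate; a one-line Lean 4 proof:

```lean
-- Both sides reduce to the same numeral, so `rfl` closes the goal.
theorem one_plus_one : 1 + 1 = 2 := rfl
```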