In a previous question, I asked about proofs of statements which are simple but incorrect.
Here, I ask about statements which seem, at first glance, straightforward to prove, but turn out to be much harder than they look once we actually try to write a proof. So I expect the answers to contain:
- the statement;
- why it looks easy to prove;
- why it actually isn't.
An example is Prokhorov's theorem. Let me recall the context.
We have a metric space $(X,d)$ endowed with its Borel $\sigma$-algebra, and a collection of probability measures, say $\mathcal M$. We say that such a collection is tight if for each $\varepsilon>0$, one can find a compact set $K$ such that for all $\mu\in\mathcal M$, we have $\mu(K)>1-\varepsilon$.
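To make the definition concrete, here is a standard sufficient condition (not part of the theorem itself, just a routine check): on $X = \mathbb R$, a uniform first-moment bound forces tightness.

```latex
Suppose $C := \sup_{\mu \in \mathcal M} \int_{\mathbb R} |x| \, \mu(dx) < \infty$.
With $K_M := [-M, M]$, Markov's inequality gives, for every $\mu \in \mathcal M$,
\[
  \mu(K_M^{c}) = \mu\bigl(\{x : |x| > M\}\bigr)
  \le \frac{1}{M} \int_{\mathbb R} |x| \, \mu(dx)
  \le \frac{C}{M},
\]
so choosing $M > C/\varepsilon$ yields $\mu(K_M) > 1 - \varepsilon$ for all $\mu \in \mathcal M$.
```

Here the compact sets can be taken to be intervals because we are on $\mathbb R$; on a general metric space, producing the compact set is precisely where the difficulty lies.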
Prokhorov's theorem states that from each tight sequence $(\mu_n)$, one can extract a subsequence which converges in law: there exists a probability measure $\mu$ such that $\limsup_k\mu_{n_k}(F)\leqslant\mu(F)$ for every closed set $F$.
At first glance, it seems to be just a corollary of the Riesz representation theorem, because we can characterize positive linear functionals on the space of continuous functions.
But it's not so easy. For example, we have to reduce to the case where $X$ is a countable union of compact sets, and check a consistency property (the Kolmogorov extension theorem is actually used). Billingsley's book *Convergence of Probability Measures* gives a complete proof, and Khoshnevisan's *Multiparameter Processes* asks the reader to fill in the details as an exercise.
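To indicate where the work lies, here is a rough outline of the standard proof strategy (in the spirit of Billingsley's argument; the exact organization of the steps below is my sketch, not a faithful reproduction of his proof):

```latex
\begin{enumerate}
  \item By tightness, choose compact sets $K_1 \subseteq K_2 \subseteq \cdots$
        with $\mu_n(K_j) > 1 - 1/j$ for all $n$; this reduces the problem to
        the $\sigma$-compact set $E := \bigcup_j K_j$.
  \item On each $K_j$, the space $C(K_j)$ is separable, so the Riesz
        representation theorem combined with a diagonal argument extracts a
        single subsequence $(\mu_{n_k})$ whose restrictions converge, as
        positive linear functionals, on every $K_j$ simultaneously.
  \item The limiting functionals yield a measure $\nu_j$ on each $K_j$;
        one must check that the $\nu_j$ are consistent as $j$ varies, and
        an extension argument (this is where Kolmogorov's extension theorem
        enters) glues them into a single Borel measure $\mu$ on $E$.
  \item Tightness is used a second time to verify that $\mu(E) = 1$,
        i.e.\ that no mass escapes to infinity, so the limit $\mu$ really
        is a probability measure.
\end{enumerate}
```

The naive "apply Riesz and pass to the limit" argument hides steps 1, 3 and 4: without tightness, the limiting functional can correspond to a sub-probability measure (mass escaping to infinity), which is exactly what the theorem rules out.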