What portion of mathematical statements are undecidable in ZFC?


From Gödel's incompleteness theorem we know that for any consistent, effectively axiomatized theory strong enough to express arithmetic, there are infinitely many statements that are true but cannot be proven within the theory. What portion of mathematical statements are provably true or provably false?

I think this could be formalized as follows:

Let $C(x)$ be the complexity of a mathematical statement $x$, and let $N(\epsilon)$ be the number of mathematical statements $x$ such that $C(x)\leq\epsilon$. The way we define $C(x)$ is arbitrary as long as it has the following properties:

  • $0 \leq C(x)$ for all possible mathematical statements $x$
  • $C(x)$ is finite for all possible mathematical statements $x$
  • $N(\epsilon)$ is finite for all finite values of $\epsilon$

Now let $P(\epsilon)$ be the number of mathematical statements $x$ such that $C(x)\leq \epsilon$ and $x$ is either provably true or provably false in ZFC. Because $P(\epsilon)\leq N(\epsilon)$, $P(\epsilon)$ is also finite for all finite values of $\epsilon$.

What is $$\lim_{\epsilon \to \infty} \frac{P(\epsilon)}{N(\epsilon)}\,?$$

For the purposes of this question, let us assume that ZFC is consistent, and that we're only dealing with mathematical statements that can be encoded in a finite number of bits of information. Based on that, we could define $C(x)$ as the number of bits needed to encode the mathematical statement $x$.
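To make the definitions of $N(\epsilon)$ and $P(\epsilon)$ concrete, here is a toy sketch (my own, not from the question) using a deliberately simple, *decidable* language of statements "$a+b=c$" over the naturals, with a crude hypothetical bit-length measure as $C(x)$. Since every such statement is decidable, the ratio is trivially $1$; the point is only to illustrate the counting setup, since real ZFC provability is not computable.

```python
# Toy illustration of N(eps) and P(eps) for a *decidable* toy language.
# The encoding and complexity measure are hypothetical; for ZFC itself,
# provability cannot be decided, so no such program exists.
# Statements: "a+b=c" with a, b, c natural numbers.

def complexity(a, b, c):
    # C(x): bits to encode "a+b=c" under a crude unary-style encoding.
    return (a + 1) + (b + 1) + (c + 1)

def statements_up_to(eps):
    # Enumerate all statements x with C(x) <= eps; finitely many, as required.
    for a in range(eps):
        for b in range(eps):
            for c in range(eps):
                if complexity(a, b, c) <= eps:
                    yield (a, b, c)

def ratio(eps):
    stmts = list(statements_up_to(eps))
    n = len(stmts)                       # N(eps)
    # Here every "a+b=c" is either provably true or provably false,
    # so P(eps) = N(eps).
    p = sum(1 for _ in stmts)
    return p / n if n else float("nan")

print(ratio(12))  # -> 1.0 in this toy theory: every statement is decidable
```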


BEST ANSWER

We can define a Chaitin's constant for the theory, representing a provability probability rather than a halting probability.

The setup: first give the language a prefix-free binary encoding. Binary means each symbol is represented by a string of bits, for example $\emptyset \rightarrow 0000, \in \rightarrow 0001, \forall \rightarrow 0010, \dots$; prefix-free means no encoded formula is a proper prefix of another encoded formula. One way of achieving this is to write all formulas in Polish (prefix) notation, that is $\neg \exists x \text{:} \in x \emptyset$ instead of $\neg \exists x \text{:} x \in \emptyset$, so a reader always knows where a formula ends. With some care this can be arranged so that the binary expansion of almost every real in $[0,1)$ has exactly one prefix encoding a valid formula. Then define:
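The prefix-free property can be sketched in code. The symbol table, codes, and arities below are hypothetical, chosen only for illustration; the key point is that with fixed arities and prefix-order notation, a decoder knows exactly when a formula is complete, so no encoded formula is a proper prefix of another.

```python
# Sketch of a prefix-free binary encoding via Polish (prefix) notation.
# Symbol codes and arities are made up for illustration.
SYMBOLS = {                  # symbol -> (bit code, arity)
    "0":      ("0000", 0),   # a constant (think: the empty set)
    "in":     ("0001", 2),   # membership, binary
    "not":    ("0010", 1),
    "exists": ("0011", 1),   # toy quantifier: binds the single variable x
    "x":      ("0100", 0),   # one variable, to keep the toy finite
}

def encode(prefix_formula):
    """Encode a formula given as a list of symbols in prefix order."""
    return "".join(SYMBOLS[s][0] for s in prefix_formula)

def decode_one(bits):
    """Read exactly one formula from the front of a bit string.

    Fixed arities + prefix order mean we know when the formula ends --
    this is what makes the code prefix-free.
    """
    by_code = {code: (sym, arity) for sym, (code, arity) in SYMBOLS.items()}
    pos, needed = 0, 1           # we still need to read 1 subterm
    while needed > 0:
        sym, arity = by_code[bits[pos:pos + 4]]
        pos += 4
        needed += arity - 1      # filled one slot, opened `arity` new ones
    return bits[:pos], bits[pos:]

# "not exists x: x in 0" in prefix order:
f = ["not", "exists", "in", "x", "0"]
code = encode(f)
parsed, rest = decode_one(code + "0000")   # junk bits after the formula
assert parsed == code and rest == "0000"
```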

$$\Omega_{ZFC} = \sum_f{2^{-\vert f \vert}}$$

where the sum is over all provable formulas $f$. This corresponds to the probability that, if we read a formula from a random bit string, it will be a provable one. We can define a variant of $\Omega$ for decidable formulas by summing over the formulas $f$ for which either $f$ or $\neg f$ is provable.
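Since the provable formulas are computably enumerable, the sum defining $\Omega_{ZFC}$ can be approximated from below by partial sums over finite batches of provable formulas, never from above. A sketch, where the batch of codes is purely hypothetical (a real run would enumerate proofs forever):

```python
from fractions import Fraction

def omega_lower_bound(provable_codes):
    """Partial sum of 2**-|f| over a finite batch of provable formulas.

    `provable_codes` stands in for a proof-enumeration procedure; for a
    real theory the enumeration never finishes, so every finite batch
    yields only a lower bound on Omega.
    """
    return sum(Fraction(1, 2 ** len(code)) for code in provable_codes)

# Hypothetical batch: three "provable" formulas with 8-, 8-, and 12-bit codes.
batch = ["00010000", "00100100", "000100000010"]
print(omega_lower_bound(batch))   # 1/256 + 1/256 + 1/4096 = 33/4096
```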

$\Omega_{ZFC} \lt 1$ is equivalent to $\text{Con}(ZFC)$, so within ZFC we won't be able to prove any nontrivial upper bound on its value (although every true rational lower bound is provable, by enumerating proofs). But it is possible to find upper bounds on the analogously defined $\Omega$ of theories that ZFC can prove consistent, such as Peano Arithmetic. Like Chaitin's halting probability, $\Omega_{PA}$ is algorithmically random (although PA cannot prove this), and so is $\Omega_{ZFC}$ if ZFC is consistent.

If you take any straightforward approach to defining an encoding as outlined above, the probability that a random formula is decidable will be very close to $1$. That's because the shortest formulas with any chance of being undecidable are somewhat longer than the shortest formulas that can be expressed at all, and the weight of a formula decreases exponentially with its length. Very short formulas carry most of the measure, and these are all decidable. That can be changed by extending the theory so that some undecidable formulas have short codes (for example, by adding a new relation symbol axiomatized as a provability predicate).
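The exponential-weight point is simple arithmetic (the lengths below are made up for illustration): each formula $f$ contributes $2^{-|f|}$, so every extra bit halves the weight.

```python
# Each formula f contributes 2**-len(f) to the measure, so weight falls
# off exponentially with length. Lengths are made up for illustration.
short_len, long_len = 20, 60   # bits: a trivial formula vs. a longer candidate
weight_short = 2.0 ** -short_len
weight_long = 2.0 ** -long_len
print(weight_short / weight_long)   # 2**40 ~ 1.1e12: short formulas dominate
```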

ANSWER

I feel this question may be a duplicate, because I am pretty certain I first saw the paper I mention below in an answer here. You may be interested in reading the following:

Cristian S. Calude and Helmut Jürgensen, "Is complexity a source of incompleteness?", Adv. in Appl. Math. 35 (2005), no. 1, 1–15. MR2141502 (2006c:68092).

The paper is a nice read, not too long or too technical. It uses the notion of Chaitin complexity that the other answer discusses as well. Let me quote the abstract:

In this paper we prove Chaitin's "heuristic principle," the theorems of a finitely-specified theory cannot be significantly more complex than the theory itself, for an appropriate measure of complexity. We show that the measure is invariant under the change of the Gödel numbering. For this measure, the theorems of a finitely-specified, sound, consistent theory strong enough to formalize arithmetic which is arithmetically sound (like Zermelo–Fraenkel set theory with choice or Peano Arithmetic) have bounded complexity, hence every sentence of the theory which is significantly more complex than the theory is unprovable. Previous results showing that incompleteness is not accidental, but ubiquitous are here reinforced in probabilistic terms: the probability that a true sentence of length $n$ is provable in the theory tends to zero when $n$ tends to infinity, while the probability that a sentence of length $n$ is true is strictly positive.