Is "$\frac10=\frac10$" true or false?


Please read the full question (and the linked answer) before responding.


This started out as a much longer question about extending complete theories by partial functions. That question was effectively answered here. In their answer, Noah Schweber presents an alternative logic that allows for the introduction of partial functions while preserving decidability.

Using this logic, "$\frac10\ne\frac10$" is provable in the theory of real closed fields (hence, provable in any theory able to express the algebra of the real numbers with division - up to elementary equivalence.)

However, this result cannot be replicated in strict first-(or second-)order logic, which means that somewhere out there is a mathematician who will disagree despite using division in their daily life. So to that mathematician I ask:

Is the statement $\frac10=\frac10$ true or false? Explain.

Then, depending on the answer, I will hopefully be able to reverse-engineer "the standard" method of extending a theory by partial functions implied by the existence and frequent use of the symbols $\sqrt{\ },\div,\det,\lim,$ etc.

Alternatively, choose up to three:

  1. Mathematics takes place within the context of first-(or second)-order logic ("formalizable in ZFC" means "within first-order logic"), and the distinction between "syntax" and "semantics" is meaningful.

  2. "$\forall x(x=x)$" is a theorem of first-(and second)-order logic.

  3. The theory of real closed fields is complete, and "$\frac11=1$" is provable in the theory of real closed fields.

  4. The correctness of a mathematical proof is not a matter of opinion, and $\frac10\ne\frac10$

Additional Notes

Many of the comments responding to this question seem to boil down to "$\frac10$ does not make sense, therefore '$\frac10=\frac10$' is meaningless." This completely misses the point. Using standard definitions, "1" is a constant symbol, "0" is a constant symbol, and "/" is a binary (partial) function symbol. If we take partial functions to be first-orderizable (which they must be to keep real closed fields [with division] inside of first-order logic), then "$\frac10$" must be a term - any parser for first-order logic will recognize $\frac10$ as a term. Having it otherwise demands a context-sensitive language, but the language of first-order logic is context-free. You cannot have "is context-free" and "is not context-free" simultaneously, so either we are not working in first-order logic when we introduce "$/$", or $\frac10$ is a term. If $\frac10$ is a term, then "$\frac10=\frac10$" is well-formed - true, even, by instantiation on the logical axiom $\forall x(x=x)$. All of this, I believe, is encapsulated quite succinctly in my "choose up to three."
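The point about context-free parsing can be made concrete with a minimal sketch (the toy grammar and the name `parse_term` are my own, purely for illustration): a recognizer for terms built from the constants "0" and "1" and the binary symbol "/" accepts "$\frac10$" on purely syntactic grounds, with no semantic check on the denominator.

```python
def parse_term(s: str) -> bool:
    """Recognize terms of the tiny context-free grammar
        term ::= atom ('/' atom)*
        atom ::= '0' | '1'
    A context-free recognizer accepts '1/0' as a term:
    whether the denominator denotes zero is invisible to the syntax."""
    tokens = list(s)
    if not tokens or tokens[0] not in "01":
        return False
    i = 1
    while i < len(tokens):
        # each continuation must be '/' followed by another atom
        if tokens[i] != "/" or i + 1 >= len(tokens) or tokens[i + 1] not in "01":
            return False
        i += 2
    return i == len(tokens)

print(parse_term("1/0"))  # True: well-formed as a term
print(parse_term("1/"))   # False: not well-formed
```

Ruling out "1/0" while accepting "1/1" would require the recognizer to know the value of the denominator, which is exactly the context-sensitivity the paragraph above describes.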

Also should I tag this with something else? There's a lot going on here, but I'm not sure if this is properly within the scope of any particular field; like, would this still fall under "reverse mathematics," or maybe "formal proofs?"


There are 3 best solutions below

On BEST ANSWER

I'll try to answer your question with an extended analogy.

Here is a simplified picture of how programming a computer works. At a basic level, computers run machine code: a simple set of instructions that say things like "add the bits in these two positions in memory" or "move this bit from this position to this other position". It's extremely laborious to write programs in machine code, so people develop higher-level programming languages with more complicated instructions, data types, etc. When I write a program in a higher-level language, another program called a compiler takes my program as input, reads it, and translates it into machine code, which can be run on my computer.

In mathematics, first-order ZFC is like machine code (or maybe assembly code? but that's splitting hairs). Essentially no one actually writes proofs or definitions in the first-order language of set theory. Instead, they communicate using the whole apparatus of ordinary / natural language mathematics, which is like a higher-level programming language. The role of first-order ZFC as a foundation for mathematics is that concepts and proofs in ordinary mathematics can in principle be translated (compiled) into formal proofs from ZFC.

Asides: (1) Different computers with different hardware have different machine languages. But that's no problem (and this is one of the advantages of having a higher-level programming language). The same program can be run on different computers, as long as it compiles to appropriate machine code for each computer it might run on. The same is true in mathematics: we can swap out first-order ZFC for a different foundational system, e.g. another set theory or homotopy type theory or whatever, and at the expense of picking a new "compiler", we will still be able to translate our higher-level proofs to this new foundation - as long as they didn't use any features not present in the new foundation (e.g. the axiom of choice is not implemented in ZF). (2) We can also swap out the higher-level language. If my use of the phrase "natural language mathematics" bothers you, feel free to instead think about some formal language used by an automated proof assistant like Coq or Agda or Lean. These languages are more explicitly like higher-level programming languages that can be compiled to formal proofs. A growing number of people think that mathematicians should "program" in these languages instead of (or rather, in addition to) natural language.

Ok, now one feature of higher-level programming languages and their compilers is that they catch bugs automatically: they detect syntax errors and type errors when they read your code, before trying to compile it. For example, if you try to write something like $5+\texttt{"banana"}$, most programming languages will complain that it doesn't make sense to add a number to a string. Such code will never make it to the stage of running on a computer, because it won't get compiled: the compiler doesn't know what to do with it. Of course, the same programming language can have different compilers, and some compiler might make a weird choice like treating every string as $0$ when you add it to a number, so that $5+\texttt{"banana"} = 0$. But in general, it's helpful if your compiler has strong type checking and error checking capabilities so that bugs are caught before buggy code gets run on the computer.
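To see the analogy in action, here is a minimal Python sketch (Python checks types at runtime rather than at compile time, but the effect is the same): the expression $5+\texttt{"banana"}$ is rejected before any value is ever produced.

```python
# Python refuses to add an int to a str: the expression is
# rejected with a TypeError rather than evaluated to some value.
try:
    5 + "banana"
except TypeError as err:
    print("rejected:", err)  # rejected: unsupported operand type(s) for +: 'int' and 'str'
```

The buggy expression never "runs"; the question of what value it has simply does not arise.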

Similarly, there are lots of ways to decide on a translation procedure from higher-level mathematics to first-order ZFC, and it's helpful if our translation procedure refuses to translate statements that are obviously buggy (meaningless). Some simple translation procedures may happily accept the "sentence" $\frac{1}{0} = \frac{1}{0}$ and translate it into a sentence of ZFC that can be proved or disproved - but since this sentence is actually meaningless, whether it is provable or disprovable will depend heavily on the implementation of the translation procedure! But it's more reasonable to expect the notation $\frac{a}{b}$ to require $a$ to be a number and $b$ to be a non-zero number. If $b = 0$, we should get a "type error". That is, our translation procedure should be smart enough that before it translates a statement involving the notation $\frac{a}{b}$, we have to give it proofs that $a$ is a number and $b$ is a non-zero number (and this is something mathematicians are used to doing - when I write $\frac{a}{b}$ on the board in the course of a proof or calculation in class, I always justify to my students why $b\neq 0$, if it's not obvious). In this case, it doesn't make sense to ask about the truth value of $\frac{1}{0} = \frac{1}{0}$ because it is rejected out of hand by our "mathematics compiler".
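One way to picture such a translation procedure is the following hypothetical sketch (the name `checked_div` is mine, not standard anywhere): division that demands the side condition $b \neq 0$ up front, and rejects $\frac{1}{0}$ before any "truth value" stage is ever reached.

```python
def checked_div(a: float, b: float) -> float:
    """Division acting like a type-checking 'mathematics compiler':
    a value is produced only when the side condition b != 0 holds;
    otherwise the expression is rejected, not evaluated to anything."""
    if b == 0:
        raise TypeError("a/b requires a proof that b != 0")
    return a / b

print(checked_div(1, 2))  # 0.5
# checked_div(1, 0) raises TypeError: the sentence "1/0 == 1/0"
# never gets far enough to be assigned a truth value.
```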

Let me now respond to your points 1-4.

  1. Mathematics takes place within the context of first-(or second)-order logic ("formalizable in ZFC" means "within first-order logic"), and the distinction between "syntax" and "semantics" is meaningful.

I believe that mathematics is not solely first-order ZFC. Rather, it's a complex, strongly typed, natural-language system (one that includes things like partial functions) which can, in principle, be "compiled" to first-order ZFC. This is part of my personal philosophy of mathematics.

But the discussion above is consistent with the worldview that mathematics is solely first-order ZFC. Under such a view, we should call all the higher-level mathematics that people actually do "pre-mathematics". It is only "mathematics" once it is translated to ZFC. This view does not require that the translation procedure is total (defined on all possible input). It's perfectly consistent to view the sentence $\frac{1}{0} = \frac{1}{0}$ as "pre-mathematics" that has a type error and hence cannot be turned into "mathematics".

  2. "$\forall x(x=x)$" is a theorem of first-(and second)-order logic.

Clearly true.

  3. The theory of real closed fields is complete, and "$\frac11=1$" is provable in the theory of real closed fields.

I'm not sure of the relevance of this point. The first-order theory of real closed fields is indeed complete. $\frac{1}{1} = 1$ is not provable in the theory of real closed fields, because the syntax $\frac{1}{1}$ is not part of the first-order language of real closed fields! But we would usually take this as shorthand for $1 = 1\cdot 1$, which is provable.

  4. The correctness of a mathematical proof is not a matter of opinion, and $\frac10\ne\frac10$

In the ideal world, the correctness of a mathematical proof should not be a matter of opinion. In the real world, of course, reasonable people do disagree all the time about the correctness of proofs! But yes, I believe that in such a situation, either the two parties disagree about the standards for proof, or at least one of them is wrong.

$\frac{1}{0}\neq\frac{1}{0}$ is meaningless and has no truth value.

On

The usual definition of division on $\mathbb{R}$ is simply not applicable for divisors of $0$. The numerical values of such constructs are thus said to be undefined (not to be confused with false).

For all $x,y,z \in \mathbb{R}$: if $y\neq 0$, then $x\div y = z$ if and only if $x=yz$.

Examples

  1. Regardless of the values of $x$ and $z$, if $y=0$, we cannot apply this definition to determine whether or not $x\div y=z$ is true. We can say that $x\div y$ is undefined.

  2. If $x=0$ and $y=z=1$, then we can apply this definition to say that $0\div 1 = 1$ is false. We cannot say from this that $0\div 1$ is undefined.
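The definition above can be transcribed directly, with a three-valued outcome: true, false, or undefined, here modeled as `None` (the function name is my own, purely for illustration).

```python
from typing import Optional

def div_equals(x: float, y: float, z: float) -> Optional[bool]:
    """Decide 'x / y = z' by the definition:
    for y != 0, x / y = z iff x = y*z; for y == 0 the
    definition is inapplicable and the result is undefined (None)."""
    if y == 0:
        return None          # undefined: neither true nor false
    return x == y * z

print(div_equals(1, 0, 1))    # None: 1 / 0 is undefined
print(div_equals(0, 1, 1))    # False: 0 = 1*1 fails
print(div_equals(1, 2, 0.5))  # True: 1 = 2*0.5
```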

On

If you want to axiomatize real analysis in standard FOL, and be able to use the division symbol, you have no choice but to accept that the following is a true sentence:

$∀x{∈}ℝ\ ( \ x ≠ 0 ⇒ 1/(1/x) = x \ )$.

I'm sure you agree with that. Notice that when you read the sentence, it makes sense and (seems to) only involve division by non-zero reals. But the sentence is trivially equivalent to:

$∀x{∈}ℝ\ ( \ 1/(1/x) ≠ x ⇒ x = 0 \ )$.

Do you like this? No? Then you cannot stay within standard FOL. However, that doesn't mean you should just give up standard FOL completely. In fact, you would have great trouble with all of mathematics if you refuse to accept the standard approach taken by most logicians in formalizing mathematics in FOL. Consider that you do not want to talk only about reals. How about:

$∀k{∈}ℕ\ ∃x{∈}ℝ\ ( \ 0 < x·(k+1) < 1 \ )$.

There is no division here, but there is actually the same issue hiding in the restricted quantifiers. Do you want "$∀x{∈}S\ ( \ Q(x) \ )$" to be equivalent to "$∀x\ ( \ x{∈}S ⇒ Q(x) \ )$" or not? This is standard, but if you say "yes" then you are forced to accept that the above sentence is equivalent to:

$∀k\ ∃x\ ( \ ( \ 0 < x·(k+1) < 1 ∨ k{∉}ℕ \ ) ∧ x{∈}ℝ \ )$.
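These equivalences are pure propositional logic; for instance, the contrapositive step used above can be checked mechanically over all truth assignments (a routine check, nothing specific to this axiomatization):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p => q is (not p) or q."""
    return (not p) or q

# (P => Q) and (not Q => not P) agree on every truth assignment,
# which is why the two displayed sentences are trivially equivalent.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("contrapositive verified on all assignments")
```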

There are two main approaches you can take to 'resolve' this. The first approach is to still stick to standard FOL and simply view each predicate/function-symbol as producing arbitrary output on objects outside of its intended domain. This is completely compatible with many-sorted FOL and on-the-fly definitorial expansion (see this post for the details), both of which are expedient (in some form or another) for any practically usable foundational system. Yes, you will be able to prove all the above sentences, but there is nothing surprising anymore once you understand that it is the nature of having simple syntax rules.
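A sketch of the first approach (the choice of $0$ as the junk value is just a convention, though it is famously the one adopted by proof assistants such as Lean, where $x/0 = 0$ by definition):

```python
def total_div(x: float, y: float) -> float:
    """Totalized division: outside the intended domain (y == 0)
    the function still returns *some* fixed value, here 0.0.
    Nothing interesting follows from total_div(x, 0) == 0 alone."""
    return x / y if y != 0 else 0.0

# "1/0 = 1/0" now holds trivially, and harmlessly:
print(total_div(1, 0) == total_div(1, 0))  # True
```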

The second approach is to restrict the well-formed terms and formulae in some manner, so that the second and fourth sentences above are invalid. All variants of this approach are complicated to the point that I will not give any details here. Note that mere syntactic restrictions are not enough. For instance, you generally want to be able to define a new predicate/function-symbol $F$ on the fly with intended domain being some set $S$, and you want to be able to write "$F(x)$" in any context where you have proven "$x{∈}S$". So such an approach is best carried out in a Fitch-style system where you can have relatively clean context-based syntax rules. $ \def\cimp{\mathbin{?{⇒}}} \def\cand{\mathbin{?∧}} $

In the second approach, if you do not want to mangle the standard meaning of "$⇒$", and want to be able to use PL tautologies in deduction, then you can instead add guarded implication/conjunction, which I shall denote by "$\cimp$" and "$\cand$" respectively. (These are related to short-circuit evaluation.) Then the first sentence must be written as:

$∀x{∈}ℝ\ ( \ x ≠ 0 \cimp 1/(1/x) = x \ )$.

The second and fourth sentences are simply invalid. Of course, you need suitable deductive rules to govern the guarded operations. And in such a system, "$∀x{∈}S\ ( \ Q(x) \ )$" would be equivalent to "$∀x{∈}obj\ ( \ x{∈}S \cimp Q(x) \ )$".
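The guarded connectives behave like the short-circuit operators of programming languages. A Python sketch of the guarded implication (my own illustration): the guard $x \neq 0$ is evaluated first, and the body $1/(1/x) = x$ is only reached once the guard has succeeded, so division by zero never occurs.

```python
def guarded_claim(x: float) -> bool:
    """'x != 0 ?=> 1/(1/x) = x' read as a short-circuit implication:
    when the guard fails (x == 0) the implication holds vacuously,
    and 1/(1/x) is never evaluated."""
    return x == 0 or 1 / (1 / x) == x

print(guarded_claim(0.0))  # True, with no ZeroDivisionError
print(guarded_claim(2.0))  # True
```

(The test values are powers of two so that floating-point rounding does not intrude on the illustration.)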