What properties of a proposed new logic + inference system + set-theory must be checked to make it viable?


Suppose I would like to introduce a new trifecta of logic + inference system + set theory. What are the minimum formal properties of these systems that I must verify to demonstrate the viability of the new framework?

The one property that comes to mind is consistency: that it is not possible to deduce a contradiction in this system. Actually, I can probably get away with proving only relative consistency, under the assumption that ZFC or some other standard set theory is consistent.

Another property that comes to mind is that ZFC can be developed in the new system. However, if the new logic + inference system are extensions of first-order logic + Gentzen's natural deduction, in the sense that the new system only adds features and doesn't remove any existing feature of these systems, then I'm good to go, right?

Are there any other properties I must verify, or is consistency sufficient to merit publication in a peer-reviewed journal, or, more modestly, start using this framework in my daily work with confidence?

There are 2 answers below.

Answer 1 (score 6)

First, the easiest case: you are creating a logic + axioms that induces the same theory as some well-researched, well-known logic. For example, creating a new first-order set theory with the same grammar and symbols, and the same intended meaning for the symbols, as standard natural deduction + ZFC (let's call that Common).

In this case, all you have to do is prove two things. First, (1) soundness: this can be done by proving that every theorem of your logic is provable in Common, which in turn usually follows from proving that all of your axioms and inference rules are derivable in Common.

Then you can establish (2) completeness. This is the converse: prove that every theorem of Common is a theorem of your logic, usually by showing, as above, that every inference rule and axiom of Common is derivable in your logic.
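Both translation arguments are inductions on the length of derivations; here is a minimal sketch (writing $\vdash_L$ for derivability in the new logic and $\vdash_C$ for derivability in Common, notation mine):

```latex
% Sketch: soundness of the new logic L relative to Common C,
% by induction on the length of derivations.
\begin{itemize}
  \item Base case: for every axiom $A$ of $L$, show $\vdash_C A$.
  \item Inductive step: for every rule of $L$ deriving $B$ from
        premises $A_1, \dots, A_n$, show that
        $\vdash_C A_1, \dots, \vdash_C A_n$ together imply $\vdash_C B$.
\end{itemize}
% Conclusion, for every sentence $A$:
%   $\vdash_L A \implies \vdash_C A$.
% Completeness is the mirror-image induction over the axioms
% and rules of Common.
```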


An alternative approach is to establish a model-theoretic semantics for your logic and prove the correctness of your logic relative to that. If this approach interests you, look up some proofs of soundness and completeness for first-order logic to get an idea of what it involves. If your logic is similar enough to another logic for which model-theoretic soundness and completeness have already been established, then this is a lot of unnecessary work; if your logic uses a very unusual language, it might be necessary.
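For reference, model-theoretic soundness and completeness together say that derivability and semantic validity coincide; in the usual notation (with $\vdash$ for your proof system and $\models$ for your semantics):

```latex
\text{Soundness:}    \quad \Gamma \vdash \varphi \;\Longrightarrow\; \Gamma \models \varphi \\
\text{Completeness:} \quad \Gamma \models \varphi \;\Longrightarrow\; \Gamma \vdash \varphi
```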


If you are creating a weaker logic + axioms, then for completeness you only need to recover those theorems of Common that your system is meant to keep. If you are attempting to create a stronger logic + axioms, that's when soundness makes things a bit tricky. I would suggest the following as a goal in that case:

Prove for every statement in your language which is both provable and computable, that it computes correctly.

So, for example, your logic should never produce $a \not \in \{a, b\}$. If you are producing a logic about arithmetic, then the computable statements are the $\Delta_0$ statements in the arithmetic hierarchy. If you are producing a logic about sets, then the computable statements are the $\Delta_0$ statements in the Lévy hierarchy (I am not at all familiar with the Lévy hierarchy; someone please comment or correct me if I am wrong about that). If you are creating some kind of bizarre modal logic, then you have to determine which operators, when given computed inputs, can immediately compute an output; at least the set of theorems built from those operators must be correct.
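The "computes correctly" requirement can be made concrete: a $\Delta_0$ statement has only bounded quantifiers, so it can be decided by finite search. A rough Python sketch (the function names and the encoding of statements here are my own choices, purely for illustration):

```python
# Decide some sample Delta_0-style statements by direct finite evaluation.
# A sound logic must never prove a statement whose evaluation is False.

def member(a, s):
    # Set membership, as in the example a ∈ {a, b}.
    return a in s

def bounded_exists(bound, pred):
    # (∃x < bound) pred(x) -- a bounded quantifier, decidable by finite search.
    return any(pred(x) for x in range(bound))

def bounded_forall(bound, pred):
    # (∀x < bound) pred(x) -- likewise decidable by checking each instance.
    return all(pred(x) for x in range(bound))

# a ∈ {a, b} evaluates to True, so no sound system may prove a ∉ {a, b}:
print(member("a", {"a", "b"}))                      # True

# (∃x < 10) x·x = 49 is true, witnessed by x = 7:
print(bounded_exists(10, lambda x: x * x == 49))    # True

# (∀x < 5) x + 0 = x is true instance by instance:
print(bounded_forall(5, lambda x: x + 0 == x))      # True
```

The point of the sketch is that for this fragment, "the logic is correct" is a checkable claim: evaluate each provable $\Delta_0$ theorem and confirm it comes out True.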

This is a very challenging thing to attempt, and I'm not even sure how you would begin. But if you really do want to create a bizarre logic that is vastly different from the norm, this is where I would recommend starting.

Suppose it should be considered on its own merit, without reference to another system

I'm not sure that even makes sense. You always have to compare it to something, even if it is just human intuition. The real question is what the best point of comparison is.

Are there any other properties I must verify, or is consistency sufficient to merit publication in a peer-reviewed journal, or, more modestly, start using this framework in my daily work with confidence?

Well, those are two very different questions. For the first: there are journals that will publish almost anything, but any high-standards journal is not going to care about "hey, I came up with a new logic, isn't that neat," because people can all but algorithmically invent new logics. You'd need to establish something important and insightful about your new logic, or show how it is useful for solving a problem. The days when merely reducing mathematics to logic was profound ended 100 years ago.

As for using a logic in your own personal work, just try to avoid not-invented-here (NIH) syndrome.

Answer 2 (score 1)

It sounds from the comments like what you've done is the following: you've taken some existing mathematical problem, and found a framework in which it can be resolved. Now you're looking for a checklist of criteria you can use to argue for that system. I don't really think that such a checklist exists; each system has to be argued for on its own terms, and while it's possible to delineate a couple key points, those points are so subjective that the checklist picture is really misleading.

From the perspective of mere coherence, what you've described is a perfectly reasonable thing to do: results of the form "In such-and-such a framework, here's an argument showing such-and-such a fact" are perfectly meaningful. Certainly there's nothing stopping you from producing lots of such results. However, that leaves open the possibility that your system is trivial (= proves everything; in paraconsistent logic, this is different from mere inconsistency). It also doesn't make those results inherently interesting. Merely resolving a question doesn't justify a system - you need to somehow argue that you got the right answer, or at least a valuable perspective, and that in turn relies on a preceding justification of the system itself. While per my above statement I don't think there's a general template for this, there are a couple key points to hit:

  • It needs to be at least plausibly nontrivial. A proof of consistency relative to some reasonably-accepted system is great; at the very least you need some explanation of why the usual paradoxes don't go through.

  • Each addition to or removal from some pre-existing reasonable system needs to have some interest or justification. If the former, you're claiming that the system you're looking at is worth understanding, regardless of whether it's worth adopting; if the latter, your justification needs to be "non-explosive" - it better not be the case that I can take it and run with it to justify all sorts of nonsense.

All of this is subjective (e.g. what does "reasonably-accepted" mean above?), and so is community response. There's no way to tell how something is going to be received without showing it; nor is there a guarantee that what the community thinks one day will be what it thinks the next day. One important thing here is to ask yourself honestly the "So what?" question: seriously take the position of someone who is extremely skeptical, and try to articulate as best as possible your responses to all the objections you can think of.