How can types represent both sets and propositions in Lambda calculus?


There is an interpretation of Lambda calculus that views a derivable statement $\Gamma \vdash x : A$ as the proposition $A$ with $x$ as "proof" of $A$.

However, there is another interpretation in which $\Gamma \vdash x : A$ is interpreted as the statement $x \in A$.

I have made my most important questions bold, but they can all be summarized with the question: How do these two interpretations relate to each other?

Let's say $A$ is the type $\texttt{nat}$ for natural numbers. It's easy to see how $\Gamma \vdash 3 : \texttt{nat}$ could be interpreted as the statement $3 \in \mathbb{N}$, but how could it be interpreted as a proposition?

How is $\texttt{nat}$ a proposition in and of itself? And furthermore, how is $3$ a "proof" of $\texttt{nat}$?

Is $\texttt{nat}$ equivalent to the proposition "$\mathbb{N}$ is non-empty", and similarly, is producing an example of a member of $\mathbb{N}$, such as $3$, the "proof" of that proposition?

If not, how is my thinking incorrect? If so, are there other interpretations of $\texttt{nat}$ as a proposition?

Looking at this from the other direction, you could have a $\Pi$-type in $\lambda P$ such as $\Pi x : \texttt{nat}\ .\ P$, where $P$ is the predicate "$P(x) = x$ is a natural number".

How could the proposition $P[x := y]$ (i.e. "$y$ is a natural number") be interpreted as a set? What is $P[x := y]$ a set of? Is it a set of proofs? That doesn't seem very intuitive to me.


Edit for clarification: It's important to note that, as far as I understand it, in the lambda calculus the predicate $P(x)$ is not in and of itself a set; it is a family of sets. Again, $P(x)$ is represented by a $\Pi$-type, not a type.

As far as I understand it, the predicate $P(x)$ would be written $\Pi x : \texttt{nat}\ .\ P $, which represents "$\forall x \in \mathbb{N}, P(x)$" which is typed as $\texttt{nat} \to *$. It is not until we apply a specific value to $P(x)$, i.e. $(\Pi x : \texttt{nat} . P) 3 \to_\beta P[x := 3]$, that it becomes typed as $*$, and is therefore an actual type (a.k.a. set).

So "$x$ is a natural number" would make sense as a set (the set of all natural numbers), but this is not how it is interpreted. Instead, "$x$ is a natural number" is a family of sets, and a specific instance of it (e.g. "3 is a natural number") is a set. How can something be a member of the set "3 is a natural number"? What does that even mean?

This is what is mostly confusing me in the second example mentioned above.

Best Answer

How do these two interpretations relate to each other?

Going into the details would require a course in type theory; if you want to deepen your knowledge of the subject, you should look up references on the Curry–Howard isomorphism.

An intuitive, not-too-precise explanation of the relation between these interpretations is the following.

Generally, to every proposition (not predicate) we can associate a set: the set of its proofs.

This association has the property that it turns connectives into type formers: if we let $A$ and $B$ be propositions and denote by $[A]$ the associated set, then we have $[A \land B]=[A]\times[B]$, $[A \lor B]=[A] \amalg [B]$, and $[A \to B]=[A]\to[B]$.
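For concreteness, this dictionary can be sketched in Haskell, reading each type as the set of its closed terms (the synonym names `And`, `Or`, `Imp` and the function `orIntroL` are just illustrative, not standard):

```haskell
-- Connectives become type formers; types stand in for the sets of proofs.
type And a b = (a, b)        -- [A /\ B] = [A] x [B]  (product)
type Or  a b = Either a b    -- [A \/ B] = [A] + [B]  (coproduct)
type Imp a b = a -> b        -- [A -> B] = [A] -> [B] (function space)

-- Disjunction introduction: a proof of A yields a proof of A \/ B
-- by injecting it into the left summand.
orIntroL :: a -> Or a b
orIntroL = Left
```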

There is more: the inference rules, which can be regarded as operations on proofs, correspond to operations on the corresponding sets. For example, the inference rules

  1. if $p$ is a proof of $A \land B$ then there is a proof $\pi_A(p)$ of the proposition $A$
  2. if $p$ is a proof of $A \land B$ then there is a proof $\pi_B(p)$ of the proposition $B$

become the canonical projections $\pi_{[A]} \colon [A] \times [B] \to [A]$ and $\pi_{[B]} \colon [A] \times [B] \to [B]$.
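In the Haskell reading, these two elimination rules are literally the pair projections (`projA`/`projB` are illustrative names for `fst`/`snd`):

```haskell
-- Conjunction elimination becomes projection out of the product type.
projA :: (a, b) -> a   -- plays the role of pi_[A]
projA = fst

projB :: (a, b) -> b   -- plays the role of pi_[B]
projB = snd
```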

The interesting point is that the sets corresponding to propositions are exactly those freely generated from a set of variables by closing under certain operations (the so-called constructors).

So you get a correspondence between the propositions of propositional logic (together with their proofs) and these sets (together with their elements); this correspondence turns out to be bijective.

This correspondence can be extended to include predicates by introducing families of sets (i.e. set-valued functions).

This brings us to the next question.

How could the proposition $P[x:=y]$ (i.e. "$y$ is a natural number") be interpreted as a set?

Predicates are not interpreted as sets, since they are not propositions.

A predicate is a family of propositions indexed by its possible arguments: a predicate $P(x)$ is a function that to every value $a$ (that can be assigned to $x$) associates the proposition $P(a)$.

If you get that, it should be obvious that a predicate should be interpreted not as a set but as a family of sets: i.e. a function sending every value of its argument into a set.

If $P(x)$ is a predicate, its interpretation is the function/family $[P(x)]$ defined by the equation $$[P(x)](a) = \text{ the set of proofs of } P(a)$$ that is a set-valued function.
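In a language with enough type-level machinery, a family of sets can be written down directly. Here is a Haskell sketch, using a hypothetical predicate "$n$ is even": for each index `n`, the type `Even n` is the set of proofs that `n` is even.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers to index the family.
data Nat = Z | S Nat

-- The predicate "n is even", as a family of types indexed by n.
data Even (n :: Nat) where
  EvenZ  :: Even 'Z                      -- 0 is even
  EvenSS :: Even n -> Even ('S ('S n))   -- if n is even, so is n+2

-- A closed term of Even ('S ('S 'Z)) is a proof that 2 is even:
twoIsEven :: Even ('S ('S 'Z))
twoIsEven = EvenSS EvenZ

-- Counting the EvenSS constructors used in a proof:
size :: Even n -> Int
size EvenZ      = 0
size (EvenSS p) = 1 + size p
```

Note that `Even ('S 'Z)` has no closed term at all: the empty set of proofs, matching the fact that "1 is even" is false.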

How is nat a proposition in and of itself? And furthermore, how is 3 a "proof" of nat? Is nat equivalent to the proposition "N is non empty", and similarly, is producing an example of a member of N, such as 3, the "proof" of that proposition?

To answer this question one should first specify what type system and what logic are being considered.

The point here is that $\mathbf{nat}$ is a simple type, so it should correspond to a proposition in some logic. The problem is that it does not correspond to any proposition of any commonly used logical system.

For instance, there is no proposition in propositional logic that corresponds to it. That is because $\mathbf{nat}$ does not belong to the simply typed lambda calculus (STLC); it belongs to an STLC with natural numbers (an extension of STLC).

You could extend the propositional logic in a way to give a proposition corresponding to $\mathbf{nat}$. This would be an extension of propositional logic with a new constant proposition ($\mathbf{nat}$) with the following inference rules:

  • there is a proof of $\mathbf{nat}$ named $0$;
  • if $n$ is a proof of $\mathbf{nat}$ then there is also another proof of $\mathbf{nat}$ named $n+1$;
  • if $p$ is a proof of $A$ and $g$ is a proof of $A \to A$ then there is a proof of $\mathbf{nat}\to A$ (the recursion principle: this proof sends $0$ to $p$ and $n+1$ to $g$ applied to the proof obtained for $n$).
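The three rules above can be sketched as a Haskell datatype together with its recursion principle; `foldNat` and `toInt` are illustrative names, and `foldNat` is the third rule made explicit:

```haskell
-- Rules 1 and 2: the two ways of building a proof of nat.
data Nat = Zero | Succ Nat

-- Rule 3: from a proof p of A and a proof g of A -> A, a proof of
-- nat -> A.  It sends Zero to p and Succ n to g of the result for n.
foldNat :: a -> (a -> a) -> Nat -> a
foldNat p _ Zero     = p
foldNat p g (Succ n) = g (foldNat p g n)

-- For example, reading a proof of nat back as an ordinary Integer:
toInt :: Nat -> Integer
toInt = foldNat 0 (+ 1)
```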

Such a logic would not have really useful applications: usually when applying logic we are interested in finding one proof of a proposition, and we do not need more proofs, especially if the new proofs are more complex than the ones we already have.

Hope this helps.

P.s. Note that the type $\prod x\colon \mathbf{nat}.\, P(x)$ does not have type $\mathbf{nat} \to *$; it has type $*$, in the very same way that $\forall x \in \mathbb N.\, P(x)$ is a proposition, not a predicate depending on $x$.

Answer

I'm going to wade into this question in the hopes that it helps at least a little, or that it incites a real type theorist to write a better answer.

Forget about dependent types for a moment, and consider simple types. The first question you should ask is, if we are interpreting types as sets, what syntactic things should be interpreted as elements of those sets? Surely, the closed terms should qualify, if nothing else. We can see this intuition at work in the naive interpretation of the simple type $\mathtt{nat}$; $0$, $s0$, $s(s0)$, etc. are all naturally interpreted as the obvious natural numbers when we interpret $\mathtt{nat}$ as the set $\mathbb{N}$. Certain type theories resist interpretation of types as sets and terms as functions (particularly the ones with enough polymorphism), but types are always set-like in the sense that talk of closed terms, and of possible pluralities of closed terms, always makes sense.

How are simple types like propositions? Only in that if you interpret the sentences of your propositional logic as types in an appropriate way, a proposition is (intuitionistically) provable exactly when the corresponding type has a closed term. That's really it. Being a proposition isn't something types do intrinsically; rather types are amenable enough to a certain kind of interpretation that we can use types to reason correctly about propositions as if one were the other.
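To make this concrete, here is a small Haskell sketch: a proposition of propositional logic is intuitionistically provable exactly when we can write down a closed term of the corresponding type (`compose'` is just an illustrative name).

```haskell
-- The tautology ((A -> B) /\ (B -> C)) -> (A -> C) is "proved" by
-- this closed term, which composes the two given functions:
compose' :: (a -> b, b -> c) -> (a -> c)
compose' (f, g) = g . f

-- By contrast, A \/ not A corresponds to Either a (a -> Void)
-- (Void from Data.Void), which has no honest closed term -- only
-- cheats like undefined -- matching the fact that excluded middle
-- is not intuitionistically provable.
```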

The generalization to dependent types is a bit more complicated in technical details, but the idea is the same. If we have a type theory with a dependent family $R:A\to A\to *$ and a constant $\mathsf{sym}:\Pi x,y:A.R(x,y)\to R(y,x)$, then we can use this type theory to interpret the (intuitionistic) first order theory of symmetric relations. If care is taken, we can use the fact that we've derived certain closed terms as evidence that certain theorems are true. In this sense we can again treat types as propositions. But if the type theory (and definition of $R$) has enough going on, there may very well be other closed terms of $\Pi x,y:A.R(x,y)\to R(y,x)$, or closed terms $a,b:A$ for which $R(a,b)$ has closed terms (this happens often if $R$ is an equality type in homotopy type theory, for example).
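A concrete, if not fully dependent, instance of such a family is the equality type `(:~:)` from Haskell's `Data.Type.Equality`: `a :~: b` is inhabited exactly when `a` and `b` are the same type, so it plays the role of $R(x,y)$, and symmetry is a closed term (the library already exports `sym`; `mySym` below just spells it out):

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

import Data.Type.Equality ((:~:) (Refl))

-- Symmetry of the equality family: given evidence that a equals b,
-- pattern matching on Refl identifies the two types, after which
-- evidence that b equals a is the same constructor.
mySym :: a :~: b -> b :~: a
mySym Refl = Refl
```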

The thing to keep in mind is that you're going to confuse yourself if you ask "If $P(n)$ is a type family that means [whatever], do I interpret it as a set?" This has only the generic answer "yes, because it's a type and in principle can have terms, which might be interpretable as elements of a set"; to see whether there actually are closed terms and what those might be (i.e. what might be "in that set"), too much depends on details. What type theory are we talking about? What, specifically, is $P(n)$, and what does the theory prove about it that makes it a good formalization of [whatever]? Unless you answer these, it's impossible to give a specific set-like interpretation of an arbitrary logical predicate.