Proving substitutions agree using structural induction


The exercise is:

Prove that if σ and τ are two substitutions that agree on the variables of the term t, then tσ = tτ. Use structural induction on t.

I understand the principle of substitutions, but I don't really understand the rest.

Does tσ = tτ mean for example: if tσ = f(x,y) and tτ = f(x,y), then j(k(t))σ = j(k(f(x,y))) and j(k(t))τ = j(k(f(x,y)))? I don't really understand what to show.

And I know what structural induction is, but I don't understand how to use it here.

Best answer:

Say we have two substitutions $\sigma$ and $\tau$, that is to say two maps from an alphabet $\Sigma$ to an alphabet $\Sigma'$.

Instead of writing "$t\sigma = t\tau$", which might be confusing, I prefer to extend $\sigma, \tau$ naturally to a function $\sigma^*$ from the set $Term(\Sigma)$ of terms over $\Sigma$ to the set $Term(\Sigma')$. This allows us to write "$\sigma^*(t) = \tau^*(t)$" instead.

Formally, $\sigma^*$ is defined by induction on the construction of terms over the alphabet $\Sigma$ as follows:

  • Case $t = x$ for $x \in \Sigma$:
    We set $\sigma^*(t) = \sigma^*(x) := \sigma(x)$.

  • Case $t = f(t_1, \dots, t_n)$ for $t_1, \dots, t_n \in Term(\Sigma)$:
    We set $\sigma^*(t) := f\big(\sigma^*(t_1), \dots, \sigma^*(t_n)\big)$.
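This recursive definition of $\sigma^*$ can be sketched in code. The encoding below is my own illustration, not part of the exercise: a variable is a string, a composite term $f(t_1, \dots, t_n)$ is a tuple `(f, t1, ..., tn)`, and a substitution is a dict from variable names to terms.

```python
def apply_subst(t, sigma):
    """Extend the substitution sigma to sigma* on terms, by structural recursion."""
    if isinstance(t, str):                  # case t = x: look the variable up
        return sigma.get(t, t)              # variables not mentioned by sigma stay fixed
    f, *args = t                            # case t = f(t1, ..., tn)
    return (f,) + tuple(apply_subst(a, sigma) for a in args)
```

For instance, `apply_subst(("f", "x", "y"), {"x": "y"})` yields `("f", "y", "y")`: the recursion mirrors the two cases of the definition exactly.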

It is tempting to argue as follows: since $\sigma = \tau$, we have $\sigma^* = \tau^*$, hence $\sigma^*(t) = \tau^*(t)$, qed. But this is wrong: the hypothesis is weaker, since $\sigma$ and $\tau$ only agree on the variables of $t$. So we argue by induction instead.

Suppose $\sigma$ and $\tau$ agree on the set $V = Var(t)$ of variables of $t$. We show by induction on the construction of terms that $\sigma^*$ and $\tau^*$ agree on every term whose variables lie in $V$:

  • Case $t = x$ for $x \in V$:
    We have $\sigma^*(t) = \sigma(x) = \tau(x) = \tau^*(t)$.

  • Case $t = f(t_1, \dots, t_n)$ for $t_1, \dots, t_n \in Term(\Sigma)$:
    Since $Var(t) = Var(t_1) \cup \dots \cup Var(t_n)$, the variables of $t_i$ belong to $V$ for all $i$. We can hence apply the induction hypothesis and we get $\sigma^*(t_i) = \tau^*(t_i)$ for all $i$. But now, $$\sigma^*(t) := f\big(\sigma^*(t_1), \dots, \sigma^*(t_n)\big) = f\big(\tau^*(t_1), \dots, \tau^*(t_n)\big) = \tau^*(t)$$
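The statement just proved can be checked concretely. The sketch below is my own (variables encoded as strings, composite terms as tuples, substitutions as dicts; `variables` and `apply_subst` are hypothetical helper names): two substitutions that agree on $Var(t)$ but differ elsewhere produce the same result on $t$.

```python
def variables(t):
    """Var(t): collect the variable names of t by structural recursion."""
    if isinstance(t, str):
        return {t}
    _, *args = t
    return set().union(*(variables(a) for a in args))

def apply_subst(t, sigma):
    """sigma*(t): apply the substitution by structural recursion."""
    if isinstance(t, str):
        return sigma.get(t, t)
    f, *args = t
    return (f,) + tuple(apply_subst(a, sigma) for a in args)

t = ("f", "x", ("g", "y"))                   # Var(t) = {x, y}
sigma = {"x": "a", "y": "b", "z": "c"}       # sigma and tau agree on Var(t)...
tau   = {"x": "a", "y": "b", "z": "d"}       # ...but differ on z, which t does not use
assert apply_subst(t, sigma) == apply_subst(t, tau)
```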

Second answer:

$t$ is a term, i.e. either a variable ($x, y, \ldots$), a constant $c$, or a "complex" term built by applying a function symbol to terms (e.g. $fx$).

$\sigma$ is a substitution, and $t \sigma$ means the result obtained by applying the substitution $\sigma$ to the term $t$.

Example: for $\sigma := [x \leftarrow y]$ and $t := fx$ the term $t \sigma$ will be:

$fx [x \leftarrow y] = fy$.
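This example can be checked with a small sketch. The encoding is my own, not part of the answer: a variable is a string, the term $fx$ is the tuple `("f", "x")`, and the substitution $[x \leftarrow y]$ is the dict `{"x": "y"}`.

```python
def apply_subst(t, sigma):
    """Apply the substitution sigma to the term t, recursing through its structure."""
    if isinstance(t, str):                  # a variable: replace it if sigma maps it
        return sigma.get(t, t)
    f, *args = t                            # f(t1, ..., tn): recurse on the arguments
    return (f,) + tuple(apply_subst(a, sigma) for a in args)

# fx [x <- y] = fy
assert apply_subst(("f", "x"), {"x": "y"}) == ("f", "y")
```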

Thus, to prove that $t \sigma = t \tau$ means to show that the terms obtained by applying the two substitutions to the term $t$ are the same.

To prove it by induction means to use induction on the complexity of the term $t$:

(i) Base case: either $t$ is a variable $x$ or a constant $c$.

If $t = x$, then $x \sigma = x \tau$ because $\sigma$ and $\tau$ agree on the variables of $t$, and here $Var(t) = \{x\}$. If $t = c$, then $c \sigma = c = c \tau$, since substitutions leave constants unchanged.

(ii) Induction step: assume the property holds for the terms $t_1, \ldots, t_n$ and prove it for the term $f t_1 \ldots t_n$.

Assume for simplicity $n = 1$, i.e. $t := f t_1$. The induction hypothesis is that $t_1 \sigma = t_1 \tau$. Thus:

$t \sigma = (f t_1) \sigma = f (t_1 \sigma)$, because substitutions do not act on function symbols, $= f (t_1 \tau)$, by the induction hypothesis, $= (f t_1) \tau = t \tau$.
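The chain of equalities above can be traced step by step in code. This is a minimal sketch of my own, assuming variables are strings, $f t_1$ is the tuple `("f", t1)`, and substitutions are dicts:

```python
def apply_subst(t, sigma):
    """Apply the substitution sigma to the term t by structural recursion."""
    if isinstance(t, str):
        return sigma.get(t, t)
    f, *args = t
    return (f,) + tuple(apply_subst(a, sigma) for a in args)

t1 = "x"
t = ("f", t1)                       # t := f t1
sigma = {"x": "y"}                  # sigma := [x <- y]
tau = {"x": "y", "w": "v"}          # agrees with sigma on Var(t) = {x}, differs on w

# (f t1) sigma = f (t1 sigma): the substitution does not act on the symbol f
assert apply_subst(t, sigma) == ("f", apply_subst(t1, sigma))
# t1 sigma = t1 tau (the induction hypothesis for the subterm)...
assert apply_subst(t1, sigma) == apply_subst(t1, tau)
# ...hence t sigma = t tau
assert apply_subst(t, sigma) == apply_subst(t, tau)
```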