Set-theoretic description of stack frames


I am trying to understand some of the mathematical underpinnings of computing, and I've gotten stuck on functions and stack frames. For example, consider the following bit of code, executed at some point during a program:

{
  // Stackframe S
  a = 1        // <= type Int
  b = true     // <= type Bool
  c = f(a, b)  // <= type F, the type returned by f

  // Now use a, b, and c...
}

where I've added braces to denote the extent of the stack frame $S$. After the function $f$ is invoked (where $f: \mathrm{Int} \times \mathrm{Bool} \rightarrow F$), the variables $a$, $b$, and $c$ exist and can be used. From a set-theoretic point of view, it appears that the following relations hold:
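To make the snippet concrete, here is one possible rendering in TypeScript; the definition of `F` and the body of `f` are assumptions for illustration, and the braces are a real block scope playing the role of $S$:

```typescript
// Placeholder for F, the type returned by f (assumed for illustration)
type F = { value: number };

// A hypothetical f : Int × Bool → F
function f(a: number, b: boolean): F {
  return { value: b ? a : 0 };
}

{
  // Stack frame S: a, b, and c are bound only inside this block
  const a: number = 1;        // a : Int
  const b: boolean = true;    // b : Bool
  const c: F = f(a, b);       // c : F

  console.log(a, b, c.value); // a, b, and c are all usable here → 1 true 1
}
// Outside the block, a, b, and c are out of scope
```

Note that the braces here are doing double duty: they delimit both the lexical scope of the names and the lifetime of the bindings, which is exactly the ambiguity the question below turns on.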

\begin{align}
1. \qquad a &\in \mathrm{Int} \wedge S \\
2. \qquad b &\in \mathrm{Bool} \wedge S \\
3. \qquad c &\in F \wedge S
\end{align}

However, this doesn't quite seem right to me. The invocation of $f(a, b)$ appears to suggest one of the following:

\begin{align}
4. \qquad f &: (\mathrm{Int} \wedge S) \times (\mathrm{Bool} \wedge S) \rightarrow F \wedge (?) \text{, or} \\
5. \qquad f &: (\mathrm{Int} \times \mathrm{Bool}) \wedge (S \times S) \rightarrow F \wedge (?)
\end{align}

Due to symmetry considerations, the question mark should presumably be replaced by $S \times S$, but according to relation 3 above it should simply be $S$. So what has happened here? Have I made a false assumption about what happens when the function $f$ is applied to arguments from restricted domains? Or is there some other operation going on that simply "collapses" the set $S \times S$ to $S$?
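For what it's worth, one mechanical reading of the $\wedge$ in relations 4 and 5 is an intersection type, which TypeScript spells `A & B`. The sketch below models "belongs to stack frame $S$" as a brand type attached by intersection; this is an assumption made purely to exercise the notation, not a standard set-theoretic construction:

```typescript
// Assumption: membership in stack frame S modeled as a brand type
type S = { readonly __frame: "S" };

type IntS = number & S;    // Int ∧ S
type BoolS = boolean & S;  // Bool ∧ S
type F = number;           // placeholder for the type f returns

// Relation 4 read literally: f : (Int ∧ S) × (Bool ∧ S) → F ∧ S
function f(a: IntS, b: BoolS): F & S {
  // Body is an arbitrary illustration; the brand has no runtime content,
  // so the cast only reasserts frame membership at the type level
  return (b ? a + 1 : a) as F & S;
}

const a = 1 as IntS;
const b = true as BoolS;
const c: F & S = f(a, b); // c ∈ F ∧ S, matching relation 3
console.log(c);           // → 2
```

In this reading the result carries a single $S$ brand rather than anything like $S \times S$, because "being in frame $S$" is a property of each individual value, not of the pair; that is one way the "collapse" in the question can be interpreted.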