Why the Definability Lemma may fail for class forcing


It is known that over GBC, the Definability Lemma may fail for some class forcing $\mathbb{P}$. I'd like to understand why the usual way of defining $\Vdash$ doesn't work.

For example, we may naively define $p \Vdash \pi \in \tau$ by transfinite recursion as follows.

$p \Vdash \pi \in \tau$ iff $\{q \leq p : \exists \langle \sigma, r \rangle \in \tau (q \leq r \land q \Vdash \pi = \sigma) \}$ is dense below $p$.

In Friedman's "Fine Structure and Class Forcing", the explanation of why this doesn't work is that since $\mathbb{P}$ is a proper class, the quantifier complexity of the definition may increase with the ranks of $\tau$ and $\pi$. But I don't understand this explanation. Why does the complexity of the quantifiers matter in this definition?
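For concreteness, here is my attempt to unfold the definition once, spelling out "dense below $p$" (a sketch; I am not being careful about how $q' \Vdash \pi = \sigma$ is in turn reduced to instances of atomic $\in$-forcing):

$$p \Vdash \pi \in \tau \iff \forall q \le p\;\exists q' \le q\;\exists \langle \sigma, r \rangle \in \tau\,\bigl(q' \le r \land q' \Vdash \pi = \sigma\bigr).$$

I can see that $\forall q \le p$ and $\exists q' \le q$ range over the proper class $\mathbb{P}$, so they are unbounded class quantifiers, and that unfolding $q' \Vdash \pi = \sigma$ would introduce further such quantifier blocks, one for each step of the recursion down through the names.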

My own understanding is this. The definition proceeds by recursion on the class $\mathbb{P} \times \mathcal{L}_{\mathbb{P}}$ (where $\mathcal{L}_{\mathbb{P}}$ is the forcing language), so it requires a set-like well-founded relation $R$ on $\mathbb{P} \times \mathcal{L}_{\mathbb{P}}$. But whenever $\sigma$ is in the domain of $\tau$, we would want $(q, \pi = \sigma)$ to be below $(p, \pi \in \tau)$ in the sense of $R$ for every $q \in \mathbb{P}$. Thus there may be no such $R$, because $\mathbb{P}$ is a proper class.
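In symbols, my point is that the recursion would need

$$\{\, (q, \pi = \sigma) : q \in \mathbb{P} \,\} \subseteq \{\, x : x \mathrel{R} (p, \pi \in \tau) \,\},$$

so the class of $R$-predecessors of $(p, \pi \in \tau)$ contains a copy of the proper class $\mathbb{P}$ and hence is not a set, contradicting set-likeness.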

Is my understanding correct? If so, what is the relation between Friedman's explanation and mine?