Why is $\alpha \rightarrow \forall x(\alpha)$ not generally correct in first-order logic?
i.e., when there are free occurrences of $x$ in $\alpha$? And, relatedly, why is the formula scheme correct when there are no free occurrences of $x$ in $\alpha$?
About the generalization rule:

$$\dfrac{\Gamma \vdash \varphi}{\Gamma \vdash \forall x\,\varphi}$$

consider the following example:
let $\varphi$ be $x = 0$, where $0$ is an individual constant, and $\Gamma = \{ \varphi \}$. Then, from $\varphi$ we can construct the following derivation:

$$x = 0 \vdash \forall x\,(x = 0).$$
The last statement is false in any structure with at least two elements. [See also Stephen Cole Kleene, Mathematical Logic (1967), page 119].
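To make the failure concrete, here is a minimal sketch (not from Kleene's text) that evaluates premise and conclusion in a two-element structure with domain $\{0, 1\}$, interpreting the constant $0$ as the number 0 and the variable $x$ as 0:

```python
# A structure with two elements: domain {0, 1}, the constant 0 interpreted as 0.
domain = {0, 1}
zero = 0  # interpretation of the individual constant 0

# Premise: x = 0 under the variable assignment x := 0.
premise = (0 == zero)

# Conclusion: forall x (x = 0), i.e. every element of the domain equals zero.
conclusion = all(x == zero for x in domain)

print(premise, conclusion)  # True False
```

The premise is true under the chosen assignment, yet the generalized statement is false, which is exactly why the rule needs its side condition.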
If we have $\Gamma = \emptyset$, we can derive the following rule:

$$\text{if } \vdash \varphi, \text{ then } \vdash \forall x\,\varphi;$$

but in this case $\varphi$ is no longer an assumption: it must be a theorem.
In order to show that the formula $\varphi \rightarrow \forall x\,\varphi$ is not in general correct, consider the following example: let $\varphi$ be $P(x)$, take as the domain of the interpretation the set $\mathbb Z$ of integers, and let the interpretation of the predicate $P$ be “$x$ is greater than $0$”.
The formula:

$$P(x) \rightarrow \forall x\,P(x)$$

is not valid: under the assignment $x := 1$ the antecedent is true, but the consequent is false, because in $\mathbb Z$ not all the numbers are greater than $0$.
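The same counterexample can be checked mechanically. The sketch below (an illustration, with a finite slice of $\mathbb Z$ standing in for the full domain) evaluates $P(x) \rightarrow \forall x\,P(x)$ under every assignment to $x$, with $P(x)$ meaning $x > 0$:

```python
# P(x) interpreted as "x is greater than 0".
def P(x):
    return x > 0

# A finite sample of the integers, standing in for the domain Z.
domain = range(-3, 4)

def formula(x):
    """Truth value of P(x) -> forall y. P(y) under the assignment x."""
    return (not P(x)) or all(P(y) for y in domain)

# Assignments under which the formula is false:
counterexamples = [x for x in domain if not formula(x)]
print(counterexamples)  # [1, 2, 3]
```

Every assignment of a positive value to $x$ makes the antecedent true while the universally quantified consequent stays false, so the formula is falsified and hence not valid.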
We want our calculus to be sound, i.e. to allow us to prove only valid formulas, where valid means true in every interpretation. The above formula is not true in the aforesaid interpretation, so it is not valid. Consequently, we do not want it in our calculus, and we must set up the rules accordingly, so that it is not derivable.

By contrast, when $x$ has no free occurrences in $\varphi$, the truth value of $\varphi$ in an interpretation does not depend on the value assigned to $x$; hence, whenever $\varphi$ is true, so is $\forall x\,\varphi$, and in that case the scheme $\varphi \rightarrow \forall x\,\varphi$ is valid.