I am trying to write down a proof in first-order logic detailed enough that I can write a program verifying that each step is justified by a rule of inference. As an example, I tried to prove the symmetry of congruence in Tarski's axioms of elementary Euclidean geometry. The predicate $C(x,y,r,s)$ means "segment $xy$ is congruent to segment $rs$".
The final part of my proof is:
1. given $\forall x,y:C(x,y,x,y)$ (previously proven)
2. given $\forall x,y,r,s:(C(x,y,r,s)\wedge C(x,y,x,y))\to C(r,s,x,y)$ (previously proven)
3. assume $C(x,y,r,s)$
4. deduce $C(x,y,x,y)$ (Universal Specialization of 1.)
5. deduce $(C(x,y,r,s)\wedge C(x,y,x,y))\to C(r,s,x,y)$ (Universal Specialization of 2.)
6. deduce $C(x,y,r,s)\wedge C(x,y,x,y)$ (Conjunction of 3. and 4.)
7. deduce $C(r,s,x,y)$ (Modus Ponens of 5. and 6.)
8. deduce $C(x,y,r,s)\to C(r,s,x,y)$ (Conditional proof using assumption 3.)
9. deduce $\forall x,y,r,s: C(x,y,r,s)\to C(r,s,x,y)$ (Universal Generalization)
Is there a simpler proof, e.g. one that avoids removing the quantifiers and adding them back again, or one that does not need the assumption in step 3?
Is the following a rule of inference? $$\frac{(A\wedge B)\to C\qquad B}{\therefore\; A\to C}$$
I agree with Bram28's answer, but I'm going to go into more specifics. Before that, note that there is an entire field of logic/math that studies these kinds of questions called proof theory, and particularly structural proof theory. There is also a lot of research in the CS community on implementing theorem provers and proof assistants.
It's not clear to me why you want to be able to take "big" steps at once. As others have mentioned, it certainly makes it harder to be confident that the verifier is correct (or to prove it correct). From the end-user's perspective, as long as you provide a means to define derived rules, there is little additional complexity for the end-user. At worst, you can provide more complicated rules as derived rules, essentially shorthands for applications of a series of other rules. It makes no difference to the user whether a rule is derived or primitive.
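As a concrete example (using only the primitive rules already appearing in the question's proof), the two-premise rule proposed at the end of the question is derivable, so it could be registered as exactly such a derived rule, with expansion:

1. $(A\wedge B)\to C$ (given)
2. $B$ (given)
3. assume $A$
4. deduce $A\wedge B$ (Conjunction of 3. and 2.)
5. deduce $C$ (Modus Ponens of 1. and 4.)
6. deduce $A\to C$ (Conditional proof using assumption 3.)

Adding such a rule is conservative: anything provable with it was already provable without it.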
Depending on how you want your users to interact with your system, though, how you structure your primitive proof rules can have an impact. Proof systems like the sequent calculus and natural deduction, and more generally proof systems that support cut-elimination and the subformula property, have a kind of "normal form". The benefit of this is that at each point in time only one or two rules may apply, which significantly reduces the search space and avoids wasting time exploring superficially different proofs. (Unfortunately, many instances of a rule may apply, so there can be multiple distinct proofs of a theorem; search is still needed and still grows quickly, but large chunks of a proof can easily be completely determined purely by the structure of the theorem.)
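To make the search-space point concrete, here is a toy goal-directed prover for just the $\{\land,\to\}$ fragment of minimal propositional logic. It is a sketch under simplifying assumptions (the tuple representation, rule coverage, and loop handling are all illustrative), but it shows how the introduction rules are forced by the goal's shape, while backchaining on hypotheses is where genuine search remains:

```python
# Formulas: atoms are strings; ("and", a, b) and ("imp", a, b) are tuples.
# `seen` guards against re-attempting the same atomic goal in a loop.

def prove(ctx, goal, seen=frozenset()):
    """Return True if `goal` is derivable from the hypotheses in `ctx`."""
    if isinstance(goal, tuple) and goal[0] == "and":
        # and-introduction: the goal's shape forces this rule.
        return prove(ctx, goal[1], seen) and prove(ctx, goal[2], seen)
    if isinstance(goal, tuple) and goal[0] == "imp":
        # imp-introduction: assume the antecedent, prove the consequent.
        if goal[1] in ctx:  # nothing new to assume
            return prove(ctx, goal[2], seen)
        return prove(ctx + [goal[1]], goal[2], frozenset())
    # Atomic goal: here several rule *instances* may apply, so real
    # search (with backtracking) remains.
    if goal in seen:       # already trying this atom: give up this branch
        return False
    if goal in ctx:
        return True
    return any(isinstance(h, tuple) and h[0] == "imp" and h[2] == goal
               and prove(ctx, h[1], seen | {goal})
               for h in ctx)
```

For instance, `prove([], ("imp", "a", ("imp", "b", ("and", "a", "b"))))` succeeds with no backtracking at all: every step until the atomic leaves is dictated by the goal.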
Now you definitely want Cut. It corresponds to using lemmas (or in programming terms, subprocedures). (Cut-elimination corresponds to inlining every subprocedure.) But it's good to know that no proof ever requires it because it means you can ignore it during proof search, which is good because it's awkward to deal with in proof search. These benefits apply to the end-user as well.
So having a set of rules with these properties simplifies verifying the correctness of your implementation, and it simplifies and speeds up proof search. A countervailing force, especially if you don't intend to provide any proof search capabilities, is that verifying a proof with many steps is more expensive than verifying one with fewer steps, all other things being equal. Of course, other things aren't equal, and whether having more but simpler steps is better or worse than fewer but more complicated ones requires profiling. Nevertheless, it's often advantageous for performance and for usability to have "bulk" forms of certain rules. For example, instead of applying universal generalization one quantifier at a time, it is likely better to allow an arbitrary number of universal quantifiers to be handled in a single step. $N$-ary forms of conjunction and disjunction are another possibility. (In type-theory-based systems, this is somewhat accomplished by records and inductive data types.) Another manifestation of this is that rule sets are usually not minimal. For example, all of $\{\neg,\land,\lor,\to,\exists,\forall,\bot,\top\}$ may be directly supported, as opposed to just $\{\neg,\land,\forall\}$, say.
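The "bulk" quantifier rules might look like the following in a hypothetical checker. The tuple representation and the names `generalize`/`specialize` are my own illustrative choices, not from any real system; substitution here is only as careful as this tiny term language requires:

```python
# Formulas are tuples; ("forall", v, body) binds variable v in body.

def generalize(vars, body):
    """Bulk universal generalization: forall-intro over several
    variables in a single proof step."""
    for v in reversed(vars):
        body = ("forall", v, body)
    return body

def substitute(formula, env):
    """Substitute variables per `env`, respecting shadowing binders."""
    if isinstance(formula, str):
        return env.get(formula, formula)
    if formula[0] == "forall":
        inner = {k: v for k, v in env.items() if k != formula[1]}
        return ("forall", formula[1], substitute(formula[2], inner))
    return (formula[0],) + tuple(substitute(a, env) for a in formula[1:])

def specialize(formula, terms):
    """Bulk universal specialization: strip n quantifiers at once and
    substitute the given terms for the bound variables."""
    env = {}
    for t in terms:
        assert formula[0] == "forall", "not enough quantifiers"
        env[formula[1]] = t
        formula = formula[2]
    return substitute(formula, env)
```

With this, steps 4 and 5 of the question's proof each become a single `specialize` call, and step 9 a single `generalize` call, e.g. `specialize(("forall", "x", ("forall", "y", ("C", "x", "y", "x", "y"))), ["a", "b"])` yields `("C", "a", "b", "a", "b")` in one step.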