Say we are working in a first-order theory, and one of our inference rules is the associative rule, which lets us infer $(A \vee B) \vee C$ from $A \vee (B \vee C)$. Using the other logical inference rules, how can we show that we may also infer $A \vee (B \vee C)$ from $(A \vee B) \vee C$?
It seems to me that in a first-order theory we should have both left and right associativity for the disjunction operator. But the inference rule goes only one way; how can we obtain the other direction? Or should there be a separate inference rule for each direction?
The proof may well be just a simple manipulation of logical expressions, but I haven't been able to figure it out. According to Shoenfield in "Mathematical Logic", we only need the expansion rule, contraction rule, associative rule, cut rule, and $\exists$-introduction rule.
Using these rules, how could we build up to more sophisticated inference rules, like the distributive rule? Can anyone direct me to a source where this is done? I'm interested in something rigorous and formal: a justification of $A \vee B$ given $B \vee A$, that kind of thing.
Thanks for any responses
See Shoenfield's Tautology Theorem: if $B$ is a tautological consequence of $A_1, \dots, A_n$, and $\vdash A_1, \dots, \vdash A_n$, then $\vdash B$.

Thus, since $A \vee (B \vee C)$ is a tautological consequence of $(A \vee B) \vee C$, the theorem gives us the "derived rule": if $\vdash (A \vee B) \vee C$, then $\vdash A \vee (B \vee C)$, thus complementing the "primitive" associative rule of the system [see page 21].
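For what it's worth, the derived rule can also be obtained directly from the primitive rules, without invoking the Tautology Theorem. A sketch: first note that commutativity is itself derivable, since from $D \vee E$ and the propositional axiom $\neg D \vee D$, the cut rule yields $E \vee D$. Then:

```latex
\begin{align*}
1.\;& (A \vee B) \vee C && \text{premise} \\
2.\;& C \vee (A \vee B) && \text{commutativity: cut 1 with } \neg(A \vee B) \vee (A \vee B) \\
3.\;& (C \vee A) \vee B && \text{associative rule on 2} \\
4.\;& B \vee (C \vee A) && \text{commutativity on 3} \\
5.\;& (B \vee C) \vee A && \text{associative rule on 4} \\
6.\;& A \vee (B \vee C) && \text{commutativity on 5}
\end{align*}
```

Each use of the associative rule here is the primitive direction, inferring $(X \vee Y) \vee Z$ from $X \vee (Y \vee Z)$; the reverse direction is recovered by conjugating it with commutativity on both sides.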