I'm new to the subject, but I'm currently reading lecture notes on propositional and predicate logic. I'm especially interested in the borderline between logic and language. Please correct me if some of the things I state here are wrong.
One can make statements about anything imaginable in the world, and such a statement (whatever a statement is in some language) can be true or false (or possibly something in between). We have an intuitive understanding of the validity of some statements we can make in a language, provided other statements are already true. For example:
All cats live on earth.
Simon is a cat.
THEREFORE Simon lives on earth.
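In predicate-logic notation (my own sketch; the predicate and constant names are just illustrative choices), this argument is an instance of universal instantiation followed by modus ponens:

$$\forall x\,\big(\mathrm{Cat}(x) \rightarrow \mathrm{LivesOnEarth}(x)\big),\ \ \mathrm{Cat}(\mathrm{simon}) \ \vdash\ \mathrm{LivesOnEarth}(\mathrm{simon})$$

Instantiating the universal statement at $\mathrm{simon}$ gives $\mathrm{Cat}(\mathrm{simon}) \rightarrow \mathrm{LivesOnEarth}(\mathrm{simon})$, and modus ponens with the second premise yields the conclusion.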
I understand logical systems to formalize this process of determining the validity of a sentence (no matter whether it describes a cat or some manifold); correct me if I'm wrong here.
AFAIK, when "inventing" a logical system, we write down some definitions (what certain objects are called, for example logical symbols, predicates, or formulas, and what their structure is). This is fine for me, because definitions are just an agreement on what to call something; my brain is powerful enough to live in a world where I call the objects I perceive whatever I want to call them.
Next, I write down how true and false statements follow from previous sentences. My current understanding is that I have to assume these rules; they can't be deduced from any meta-language or principle. One simply has to start somewhere. Is that right?
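To make the "assumed rules" picture concrete, here is a toy sketch in Python (entirely my own illustration, not anything from the lecture notes): formulas are atoms or implication pairs, the only axioms are whatever we stipulate as a starting set, and the only inference rule is modus ponens. The rule itself is simply programmed in; it is not derived from anything.

```python
# Toy deduction engine: a hypothetical illustration of "rules you assume".
# Formulas: a plain string is an atom, e.g. "P"; a tuple ("->", A, B)
# represents the implication A -> B. The sole inference rule is
# modus ponens: from A and ("->", A, B), derive B.

def deduce(axioms):
    """Close the axiom set under modus ponens and return every derivable formula."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            # Apply modus ponens wherever an implication's antecedent is known.
            if isinstance(f, tuple) and f[0] == "->" and f[1] in known:
                if f[2] not in known:
                    known.add(f[2])
                    changed = True
    return known

# Starting pieces: P, P -> Q, Q -> R.  Modus ponens yields Q, then R.
theorems = deduce({"P", ("->", "P", "Q"), ("->", "Q", "R")})
```

Nothing justifies modus ponens inside this system; we chose it as a rule of the game, which is exactly the "one simply has to start somewhere" point.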
At this point, most lecture notes I encountered begin talking about things like soundness, completeness, or consistency, and the equivalence of the syntactic and the semantic truths. And they begin to draw conclusions about the logical system.
My question now is: for any statement of the logical system which isn't either a definition or one of its deduction rules, do I only employ the deduction rules of the system to prove it, or do I have to use some kind of intuitive meta-logic (the one I talked about in the beginning) to prove it?
You could think of logic, e.g. first-order/predicate logic, as a game you play to produce new propositions. Like any game, you need to start somewhere: you need starting pieces and basic rules, so to speak. In the case of predicate logic, the starting pieces are the propositions, constructed from constants, variables, quantifiers, predicates, and logical operators. The "rules of play" are then the rules of inference/deduction. They aren't god-given or self-evident, i.e., they aren't canonical; people choose what rules to play with based on their goals and beliefs (cf. natural deduction vs. sequent calculus vs. Hilbert systems). As an example, some people allow predicate logic to include the Law of Excluded Middle, while many others reject it. In a system of the former type there will be propositions that follow non-constructively from the axioms, while in the latter there might not (because, for instance, an argument for $Q$ of the form $(P \vee \neg P) \Rightarrow Q,\, \therefore Q$ might not exhaust all cases on $P$).
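Spelled out, the case-analysis argument alluded to above is an instance of disjunction elimination (this presentation is my own sketch):

$$\frac{\;P \lor \neg P \qquad P \vdash Q \qquad \neg P \vdash Q\;}{Q}\ (\lor\text{-elim})$$

In a classical system the leftmost premise $P \lor \neg P$ is available for free as an axiom (or theorem), so the case split can always be made; in a constructive system the disjunction must itself be proved first, which generally requires deciding which disjunct holds.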
So, in short, like playing a game, you must use the established rules of inference, on whatever kinds of propositions are allowed, to produce new propositions that the system (predicate logic, e.g.) accepts/acknowledges. In fact, there are many gamifications of logic that make what I said above quite literal, one of which is here.
Edit (to better address the question of whether one must only use the axioms when producing theorems): You could "break the rules", so to speak, and use a non-axiom/non-theorem statement to "prove" things, but you can't be guaranteed that it is a valid rule of inference unless you accept it as one or later deduce it from the axioms. This led, for example, to the adoption of the Axiom of Choice into the ZF system of set theory (creating ZFC), because many "proofs" involved choice functions whose existence couldn't otherwise be guaranteed.