Any sound and complete formal system of propositional logic consists of:
1. A finite number of variables (symbols used as placeholders for propositions).
2. At least one logical connective.
3. At least one rule of inference (for example, Modus Ponens).
4. A set of axioms.
Every tautology stated using the variables and connectives should be provable from the axioms using the rule(s) of inference, and nothing that is not a tautology should be provable.
Using these ingredients, one may construct many valid formal systems of propositional logic using different combinations of connectives, rules of inference and axioms.
For example, a famous axiomatization by Jan Łukasiewicz uses the connectives $\to$ and $\lnot$, Modus Ponens as the sole rule of inference, and the following three axiom schemas:
$$\varphi\to(\psi\to\varphi)$$
$$(\varphi\to(\psi\to\xi))\to((\varphi\to\psi)\to(\varphi\to\xi))$$
$$(\lnot\varphi\to\lnot\psi)\to(\psi\to\varphi)$$
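For concreteness, here is the standard derivation of a theorem such as $\varphi\to\varphi$ in this system, using only Modus Ponens and instances of the first two axiom schemas:

$$
\begin{array}{lll}
1. & \varphi\to((\psi\to\varphi)\to\varphi) & \text{Axiom 1} \\
2. & (\varphi\to((\psi\to\varphi)\to\varphi))\to((\varphi\to(\psi\to\varphi))\to(\varphi\to\varphi)) & \text{Axiom 2} \\
3. & (\varphi\to(\psi\to\varphi))\to(\varphi\to\varphi) & \text{MP, 1, 2} \\
4. & \varphi\to(\psi\to\varphi) & \text{Axiom 1} \\
5. & \varphi\to\varphi & \text{MP, 4, 3}
\end{array}
$$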
Using this axiomatization as an example, my question is the following:
Are the truth tables for → and ¬ taken for granted when choosing said connectives as fundamental, or are they constructed from the axioms and modus ponens?
In other words, are truth tables part of the definition of the logical connectives, or is the definition of these connectives contained entirely in the axioms and rules of inference (with truth tables being merely a consequence of something more fundamental)?
The simple answer to the question in the header is yes.
When you read original sources, you come to realize that there was no distinction between "$=$" and "$\leftrightarrow$" in the early writings of Frege and Russell. While Frege was clearly struggling toward a new logical calculus based upon the function concept, Russell and Whitehead were working with axiomatic systems.
I have seen attributions to both Post and Wittgenstein concerning the identification of truth tables and of propositional logic as a self-contained subsystem of what had been presented in "Principia Mathematica". So, truth tables were deduced from the axiomatization of classical logic. But I do not know to whom one ought to attribute priority.
Your subsequent questions are more difficult. The answer depends upon your philosophy of mathematics.
Frege's innovation had been that of bringing the function concept into logic. Truth tables reflect an "extensional" view of a function having a domain. This is why one refers to connectives associated with the classical truth tables with the qualifier "material". Frege's "The True" and "The False" are intended to be viewed as extant objects.
But, because the innovation involves the function concept, one can axiomatize logical connectivity with an "intensional" view. Let me emphasize logical connectivity because I am not speaking of Boolean polynomials.
Let me denote the material biconditional by $LEQ$, and, consider an axiom given by
$$ XOR\;(\: OR, \; NAND \:) = LEQ $$
The truth tables for $OR$ and $NAND$ are
$$ \begin{array}{cccc} \begin{array}{cc|c} p & q & OR \\ \hline T & T & T \\ T & F & T \\ F & F & F \\ F & T & T \\ \end{array} &&& \begin{array}{cc|c} p & q & NAND \\ \hline T & T & F \\ T & F & T \\ F & F & T \\ F & T & T \\ \end{array} \end{array} $$
If one now performs a componentwise application of $XOR$,
$$ \begin{array}{cc|c} p & q & \\ \hline T & T & XOR \;( T,\: F ) \\ T & F & XOR \;( T,\: T ) \\ F & F & XOR \;( F,\: T ) \\ F & T & XOR \;( T,\: T ) \\ \end{array} $$
one obtains the truth table for $LEQ$,
$$ \begin{array}{cc|c} p & q & LEQ \\ \hline T & T & T \\ T & F & F \\ F & F & T \\ F & T & F \\ \end{array} $$
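The componentwise computation above can also be checked mechanically. Here is a minimal sketch in Python (the function names are my own, chosen to match the notation above, not a standard library):

```python
# Truth functions represented as Python functions on booleans.
OR   = lambda p, q: p or q
NAND = lambda p, q: not (p and q)
XOR  = lambda p, q: p != q
LEQ  = lambda p, q: p == q   # the material biconditional

# The four rows of the truth table, in the order used above.
rows = [(True, True), (True, False), (False, False), (False, True)]

# Componentwise XOR of the OR and NAND columns agrees with LEQ on every row.
for p, q in rows:
    assert XOR(OR(p, q), NAND(p, q)) == LEQ(p, q)
print("XOR(OR, NAND) = LEQ holds on all four rows")
```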
Of course, to implement this one must actually name all sixteen truth tables.
It is now possible to understand the system as an applicative structure. This requires that a set of parentheses be interpreted using either $NOR$ or $NAND$. That is, a string of names,
$$ ABCDEFG $$
evaluates according to
$$ ((((((AB)C)D)E)F)G) $$
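This left-associated evaluation convention can be sketched as a fold. The following Python fragment is only an illustration of the bracketing rule (the name `evaluate` and the choice of `NAND` as the example product are mine):

```python
from functools import reduce

NAND = lambda p, q: not (p and q)

def evaluate(values, op=NAND):
    """Evaluate a string of truth values left-associatively:
    [A, B, C, D] -> op(op(op(A, B), C), D)."""
    return reduce(op, values)

# ((T NAND F) NAND T) = (T NAND T) = F
print(evaluate([True, False, True]))  # prints False
```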
Along similar lines one may formulate a magma by treating $NOR$ and $NAND$ as a left product and a right product if one sets a convention for expressions of the form $(A)$. My view is that it ought to be interpreted as a unary negation.
The reason that all of this depends upon one's philosophy of mathematics lies with the question of what might warrant the study of this system within foundations. Suppose you name all sixteen truth tables. Further, suppose you study negations and de Morgan conjugations as involutions on this system. This will partition the system with a specific pattern, and, that pattern corresponds with the finite affine geometry on sixteen points. Its associated projective plane has twenty-one points. It is well known in design theory that the 21-point projective plane is unique up to isomorphism. But, modern foundations arises almost exclusively from the arithmetization of mathematics. So, this geometric perspective is at odds with the received view.
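To make the partitioning step concrete, one can encode each of the sixteen binary truth functions by its column of outputs and verify that negation and de Morgan conjugation act as involutions on this set. This is a sketch under my own encoding; it checks only the involution property, not the correspondence with the affine geometry itself:

```python
from itertools import product

# The four input rows, in the order used in the tables above.
ROWS = [(True, True), (True, False), (False, False), (False, True)]

# Each binary truth function is identified with its 4-tuple of outputs.
functions = set(product((True, False), repeat=4))  # all sixteen

def negation(f):
    """Pointwise negation: (N f)(p, q) = not f(p, q)."""
    return tuple(not v for v in f)

def demorgan(f):
    """De Morgan conjugation: (D f)(p, q) = not f(not p, not q)."""
    table = dict(zip(ROWS, f))
    return tuple(not table[(not p, not q)] for p, q in ROWS)

# Both maps are involutions on the sixteen functions, so each
# partitions the set into orbits of size one or two.
assert all(negation(negation(f)) == f for f in functions)
assert all(demorgan(demorgan(f)) == f for f in functions)
```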
At the end of his career, Frege retracted his logicism in a paper entitled "Numbers and Arithmetic", in which he expresses the belief that all of mathematics arises from a geometrical basis.
In other words, there are some questions in the foundations of mathematics that you can only answer for yourself.
@Doug Spoonwood
The fact that the modern account of the syntax/semantics distinction creates a chicken-and-egg problem does not alter the historical reality that axiomatizations preceded the account of truth tables. Also, the expression "propositional logic" usually does not refer to non-classical interpretations when no other context is specified. You are either reading too much into the question or too little into the meaning of "simple".