One thing that has bothered me while learning about the Peano axioms is that the use of parentheses seems to come out of nowhere: we simply take for granted that we know how they work.
For example, the proof that $(a+b)+c = a+(b+c)$. Given the base case $(a+b)+0 = a+b = a+(b+0)$, we have for the inductive step:
$(a+b)+S(c) = S((a+b)+c) = S(a+(b+c)) = a + S(b+c) = a + (b+S(c))$
This bothers me because we haven't actually explained how the parentheses are meant to be treated in the first place. Do we need an additional axiom or definition for this? Is this addressed anywhere, or do we just take them for granted?
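For concreteness, the two defining equations of addition used in the proof above, $a+0=a$ and $a+S(b)=S(a+b)$, can be checked mechanically. This is a sketch in Python with an encoding of my own choosing (numerals as nested applications of S to "0"), not anything from the axioms themselves:

```python
# Peano numerals as nested tuples: 0 is "0", the successor of n is ("S", n).

def S(n):
    return ("S", n)

def add(a, b):
    # The two defining equations of +:  a + 0 = a,  a + S(b) = S(a + b)
    if b == "0":
        return a
    return S(add(a, b[1]))

zero = "0"
one, two, three = S(zero), S(S(zero)), S(S(S(zero)))

# Associativity holds on these instances:
assert add(add(one, two), three) == add(one, add(two, three))
```

This only tests particular instances, of course; the induction in the question is what establishes the general claim.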
In formal language theory (most relevantly, context-free languages), there is the notion of an abstract syntax tree. A decent chunk of formal language theory is figuring out how to turn flat, linear strings of symbols into this more structured representation. This structured, tree-like representation is generally what we have in mind when we think about well-formed formulas or terms. For example, when we consider doing induction over all formulas, it is a structural induction over abstract syntax trees, not strings.
For the relatively simple, uniform languages typically used in formal logic, it is easy to describe the syntax with a context-free grammar and/or to give an algorithm that takes a string and, when possible, parses out the abstract syntax tree. For something like $S(a+(b+c))$ this would produce a tree like:
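(The tree diagram doesn't survive in plain text, but the same structure can be sketched as nested data; the class names below are my own invention.)

```python
from dataclasses import dataclass

# An abstract syntax tree as nested data: the nesting of constructors
# *is* the grouping, so no parentheses exist at this level at all.

@dataclass
class Var:
    name: str

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Succ:
    arg: object

# The term S(a+(b+c)):
tree = Succ(Add(Var("a"), Add(Var("b"), Var("c"))))
```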
Of course, if I wanted to represent this tree as linear text, i.e. if I wanted to serialize it, I would need to pick some textual representation, and in this case the original expression, S(a+(b+c)), would be one possible representation. Another would be Polish notation, S+a+bc. Another would be S-expressions, (S (+ a (+ b c))). Yet another would be something like standard notation but eschewing infix operators, S(+(a,+(b,c))).

Ultimately, what I would strongly recommend is that you think of the abstract syntax trees as primary and the pieces of text you see in books as just serializations of those trees, forced upon us by practical limitations. The need to concern ourselves with parsing at all in logic is, to a large extent, an artifact of the medium we're using. From the perspective of abstract syntax trees, associativity looks like:
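(Again the diagram is lost in text, but the idea can be sketched concretely; the tuple encoding and function names here are my own. Associativity relates two genuinely different trees, and the serializations above are just different ways of flattening one tree.)

```python
# Trees as nested tuples: a leaf is a string, a node is (operator, *children).

def polish(t):
    # Polish (prefix) notation: operator first, then operands, no parentheses.
    if isinstance(t, str):
        return t
    return "".join(polish(x) for x in t)

def sexpr(t):
    # S-expression notation: every application fully parenthesized.
    if isinstance(t, str):
        return t
    return "(" + " ".join(sexpr(x) for x in t) + ")"

t = ("S", ("+", "a", ("+", "b", "c")))
print(polish(t))  # S+a+bc
print(sexpr(t))   # (S (+ a (+ b c)))

# Associativity equates the denotations of two *distinct* trees:
left  = ("+", ("+", "a", "b"), "c")   # (a+b)+c
right = ("+", "a", ("+", "b", "c"))   # a+(b+c)
assert left != right  # different trees, even though the values are equal
```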
Humans are rather good at parsing languages, so parsing is not generally a barrier to understanding. You certainly do not need to understand context-free grammars to read most formal notation confidently and correctly. If you are interested in parsing, by all means read up on it, but it isn't especially important to logic, so I would not get hung up on it. (Formal languages and formal logic do share a decent overlap in concepts and techniques, though.) From a philosophical perspective, worrying about parsing formal languages (an impression some books give) while reading sentences in a massively more complicated natural language is a touch silly.