Last year, I took a course in logic. In it, sentences were defined inductively as sequences of characters in a certain way. I am now reading Shoenfield as an attempt to look deeper into the subject, and he too defines sentences as sequences of characters.
The first theorem he proves is effectively equivalent to the statement that any sentence can be uniquely parsed into a syntax tree (the unique readability theorem).
That said, why not forgo the definition of sentences as sequences and use syntax trees from the get-go? Is there any benefit to using sequences, other than that being how we are used to writing propositions? Does anyone out there forgo sequences entirely?
There are indeed situations where syntax trees are easier to handle than sentences-as-strings: every induction-on-complexity argument provides such an example. However, the converse also holds at times; e.g., it is a bit easier to linearly order the set of sequences, and we often want an enumeration of the expressions in our language.
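To make the enumeration point concrete, here is a minimal sketch for a hypothetical toy propositional language (atoms `p`, `q`; unary `~`; binary `&`, written in Polish notation so no parentheses are needed). Because sentences are finite strings over a finite alphabet, we can list them length-lexicographically and keep the well-formed ones; doing the same directly with trees requires first fixing some encoding of trees anyway.

```python
from itertools import product

# Toy language: atoms p, q; connectives ~ (unary) and & (binary),
# written in Polish (prefix) notation. This is an illustrative
# assumption, not the language from Shoenfield.
ALPHABET = ["p", "q", "~", "&"]

def is_wff(s):
    """Return True iff s is exactly one well-formed formula."""
    def consume(i):
        # Return the index just past one formula starting at i, or None.
        if i >= len(s):
            return None
        if s[i] in ("p", "q"):
            return i + 1
        if s[i] == "~":
            return consume(i + 1)
        if s[i] == "&":
            j = consume(i + 1)
            return consume(j) if j is not None else None
        return None
    return consume(0) == len(s)

def enumerate_wffs(max_len):
    """Length-lexicographic enumeration of all formulas up to max_len:
    possible precisely because formulas are finite strings."""
    for n in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=n):
            s = "".join(chars)
            if is_wff(s):
                yield s
```

For instance, `list(enumerate_wffs(2))` gives `["p", "q", "~p", "~q"]`; extending `max_len` extends the list without renumbering earlier entries, which is exactly the kind of effective enumeration one wants for, say, Gödel numbering.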
Ultimately, though, the translation between the two is elementary enough that their respective technical advantages and disadvantages quickly stop mattering. In light of this, we primarily use sequences because they better match what we actually think of as "sentences." One aspect of this is that I think sentences are generally easier for a human to read than syntax trees. They are also easier to write on a page: trees take up far more space.
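Just how elementary the translation is can be seen in a short sketch, again for a hypothetical toy language (atoms `p`, `q`; unary `~`; binary `&` in Polish notation): a single left-to-right pass turns a string into a tree, and flattening the tree recovers the string exactly.

```python
# String -> tree and tree -> string for a toy Polish-notation language
# (atoms p, q; unary ~; binary &). Trees are nested tuples, e.g.
# ("&", ("~", "p"), "q"). An illustrative assumption, not Shoenfield's setup.

def parse(s):
    """String -> syntax tree. Raises ValueError on ill-formed input."""
    def go(i):
        c = s[i]
        if c in ("p", "q"):
            return c, i + 1
        if c == "~":
            sub, j = go(i + 1)
            return ("~", sub), j
        if c == "&":
            left, j = go(i + 1)
            right, k = go(j)
            return ("&", left, right), k
        raise ValueError(f"unexpected character {c!r}")
    tree, end = go(0)
    if end != len(s):
        raise ValueError("trailing characters")
    return tree

def flatten(tree):
    """Syntax tree -> string; the inverse of parse."""
    if isinstance(tree, str):
        return tree
    return tree[0] + "".join(flatten(t) for t in tree[1:])
```

The round trip `flatten(parse(s)) == s` holds for every well-formed `s`, and that the parse is a function at all (one tree per string) is precisely unique readability.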
In summary, the mathematical differences between the two aren't obviously substantial enough to provide strong motivation in either direction, and sentences seem to have a nontrivial "human advantage" over trees.