When you learn to read and write, you learn that ideas flow on the page from left to right (or right to left, or occasionally top to bottom, depending on the culture). As you start to learn math, you see that it follows the same pattern. An equation like $1+2=3$ reads from left to right (or right to left) and follows the typical form you use to write down anything else. This horizontal convention follows us into higher mathematics, especially algebra. Multiplication in groups is written horizontally, and there is perennial confusion arising from the difference between right-multiplication and left-multiplication. Exact sequences are arranged horizontally. And so on. Of course, not all math is encoded in horizontally arranged algebraic expressions, but that convention is certainly a bedrock for a lot of knowledge.
I'm curious whether there is any body of thought that examines how our mathematics is shaped or constrained by our conventions for arranging ideas horizontally on a page. Maybe there are even constraints just from the fact that we're writing things on a page at all! And perhaps this is a case of "you can't know what you don't know". But it would be interesting to collect examples from the history of math where breaking away from these conventions led to big advances.
As Noah mentioned in the comments, John Baez and other category theorists have been thinking about "alternative writing systems" which can simplify certain algebraic computations. This is usually because there's actually some "higher dimensional" algebra going on under the surface (see here for a discussion).
As one concrete example of this, have you seen the "2-dimensional" proof of the Eckmann–Hilton argument? The idea is that, instead of writing the two multiplications $\star$ and $\circ$ in a single line, we think of them as "horizontal" and "vertical" multiplication. You can find the proof I'm referencing on the Wikipedia page, and you'll probably agree it's the better way to do things.
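For reference, here is the usual one-line-at-a-time version of that argument, written out (my notation, not taken from the Wikipedia article). Everything follows from the interchange law $(a \star b) \circ (c \star d) = (a \circ c) \star (b \circ d)$ together with the two units $1_\star$ and $1_\circ$:

```latex
% Step 1: the two units agree.
\[
  1_\circ
  = 1_\circ \circ 1_\circ
  = (1_\circ \star 1_\star) \circ (1_\star \star 1_\circ)
  = (1_\circ \circ 1_\star) \star (1_\star \circ 1_\circ)
  = 1_\star \star 1_\star
  = 1_\star \mathrel{=:} 1.
\]
% Step 2: the two operations agree.
\[
  a \star b
  = (a \circ 1) \star (1 \circ b)
  = (a \star 1) \circ (1 \star b)
  = a \circ b.
\]
% Step 3: commutativity.
\[
  a \star b
  = (1 \circ a) \star (b \circ 1)
  = (1 \star b) \circ (a \star 1)
  = b \circ a
  = b \star a.
\]
```

Each middle step is one application of interchange; the point of the "2-dimensional" presentation is that these unit-padding and sliding moves become literal pictures of cells moving around each other in a $2\times 2$ grid.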
Now, what does this have to do with "higher dimensional algebra"? The answer depends on your stomach for abstract nonsense.
Perhaps most concretely, the argument shows that the higher homotopy groups of a topological space are abelian. Now, these higher homotopy groups have at least two dimensions to move around in (that's what makes them "higher"), and the Eckmann-Hilton argument says that we can shuffle two cells around each other in those two dimensions to get commutativity! This is obviously a two-dimensional phenomenon, and the proof becomes much simpler when we allow "two-dimensional" algebraic notation to showcase it. You can see more in a (characteristically excellent) answer of Qiaochu here.
Less concretely, this computation looks nicest with "2D algebra" because it's really a computation happening inside a 2-category. In general, computations inside $n$-categories are best represented with "$n$-dimensional" operations. For instance, when we draw commutative diagrams, we often have "higher dimensional" homotopies witnessing the commutativity. So we may have a cube like this, which you should think of as just the vertices and edges (that is, just the $0$- and $1$-dimensional structure)
then knowing that each face commutes amounts to "filling in" that face with a $2$-dimensional square, and showing that the whole cube commutes fills in the resulting (hollow) cube with a $3$-dimensional cell.
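In case the "filling in" picture is unfamiliar: here is a single square with its $2$-cell drawn in explicitly (a sketch using the tikz-cd package; the labels $f, g, h, k, \alpha$ are just placeholder names). The double arrow $\alpha$ is the homotopy witnessing $g \circ f = k \circ h$:

```latex
% Requires \usepackage{tikz-cd}.
\[\begin{tikzcd}
  A \arrow[r, "f"] \arrow[d, "h"'] & B \arrow[d, "g"] \\
  C \arrow[r, "k"']
    \arrow[ru, Rightarrow, shorten <=2mm, shorten >=2mm, "\alpha"]
  & D
\end{tikzcd}\]
```

Strict commutativity is then the special case where the $2$-cell $\alpha : g \circ f \Rightarrow k \circ h$ is an identity.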
These kinds of arguments are all "higher dimensional" and this really is their natural setting.
In fact, since category theorists have to do a lot of computations with commutative diagrams (which naturally live in higher dimensions), there's a fairly rich history of coming up with algebraic manipulations that work nicely with these higher dimensional structures. See string diagrams and operads, for instance.
I hope this helps ^_^