When motivating the study of linear algebra, authors typically state that linear algebra is ubiquitous in almost every branch of mathematics, and science in general.
The canonical example is the system of linear equations: physical/mathematical systems can often be described by a set of linear equations, and linear algebra gives us the tools to solve them.
To me this example is trivial and unconvincing, because linear algebra hardly brings anything new to the table besides a systematic notation and a marginal gain in computational efficiency. Indeed, given a set of $n$ equations with $n$ unknowns, I could just as well solve it the old-fashioned way via Gaussian elimination without uttering the word "matrix" once, and with a complexity of $O(n^3)$. On the other hand, the best algorithms for inverting matrices run in $O(n^{2.376})$, which is a considerable improvement but hardly groundbreaking in everyday practice. Besides, these algorithms are impractical to carry out by hand.
This brings me to my question: what are some examples of applications of linear algebra to branches of mathematics that genuinely shed a new light on a problem, that allow us to do things that would have been impossible otherwise?
We aren't using linear algebra to solve equations because solving equations is all linear algebra can do. We use it because it gives a new perspective on a familiar problem, with concise notation and an entirely different ruleset for manipulation.
You say rewriting a system of linear equations in the form $Ax = b$, finding $A^{-1}$, and solving via $x = A^{-1}b$ is a trivial rewrite? Perhaps, but there is plenty of non-trivial equation solving in linear algebra as well.
For instance, if you have more equations than variables (which is quite common in applications), which is to say the coefficient matrix $A$ has more rows than columns, then you can't simply invert $A$. However, you can multiply both sides from the left by $A^T$ to get $A^TAx = A^Tb$, and then invert $A^TA$ (which is possible whenever $A$ has full column rank). The fact that $x = (A^TA)^{-1}A^Tb$ is a least-squares approximation is definitely not trivial.
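To see this in action, here is a small sketch (with made-up data) fitting a line to four points by forming the normal equations directly, then checking the answer against NumPy's built-in least-squares solver:

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns (fit y = c0 + c1*t).
# The data here is purely illustrative.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Normal equations: x = (A^T A)^{-1} A^T b
x = np.linalg.inv(A.T @ A) @ A.T @ b

# NumPy's dedicated least-squares solver gives the same answer.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x)  # → [3.5 1.4], the best-fit intercept and slope
```

(In practice one would use `np.linalg.lstsq` or a QR/SVD-based solver rather than explicitly inverting $A^TA$, which can be numerically ill-conditioned, but the normal-equations form makes the least-squares interpretation transparent.)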
If you want an application that is not equation solving, then consider, for instance, the singular value decomposition, which, when applied to a matrix of greyscale pixel values, allows you to decompose an image into an "edge" part and a "colour" part. Or consider the adjacency matrix of a graph, which gives us entirely new branches of graph theory in which properties of a graph are encoded in the eigenvalues and eigenvectors of that matrix.
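As a tiny illustration of the graph-theoretic point, here is a sketch using the triangle graph $K_3$: powers of the adjacency matrix count walks, so $\operatorname{tr}(A^3)/6$ counts triangles, and the eigenvalues reflect the graph's structure.

```python
import numpy as np

# Adjacency matrix of the triangle graph K3 (3 vertices, all adjacent).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# The (i, j) entry of A^k counts walks of length k from i to j,
# so trace(A^3) counts closed walks of length 3; each triangle is
# counted 6 times (3 starting vertices x 2 directions).
triangles = np.trace(np.linalg.matrix_power(A, 3)) // 6

# The spectrum of A encodes graph properties: for a d-regular graph
# the largest eigenvalue is d. For K3 the eigenvalues are 2, -1, -1.
eigs = np.sort(np.linalg.eigvalsh(A))

print(triangles, eigs)  # → 1 [-1. -1.  2.]
```

This is the starting point of spectral graph theory, where purely combinatorial questions (connectivity, expansion, colourability bounds) are attacked through eigenvalues.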