Why do we often (as in Rudin's Principles of Mathematical Analysis, p.$~206$) write $A\vec x$ to mean $A(\vec x)$ for linear transformations $A$?
I think it's because we can write $A(\vec x+\vec y)=A\vec x+A\vec y$ and pretend we have a multiplication that distributes over addition, which makes thinking about linear transformations easier. My friend insists it's because it simplifies things to identify $A$ with its matrix, and we tend not to write parentheses when multiplying a matrix by a vector.
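Whichever way you read the notation, the two views agree numerically: applying $A$ to a sum gives the same result as summing the images. A quick sketch of this distributive/linearity property with a made-up $2\times 2$ matrix (NumPy, purely illustrative):

```python
import numpy as np

# Hypothetical 2x2 example: the matrix-vector product distributes
# over vector addition, i.e. the linearity A(x + y) = Ax + Ay.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, -1.0])
y = np.array([0.5, 2.0])

lhs = A @ (x + y)        # A applied to the sum of the vectors
rhs = A @ x + A @ y      # sum of the images under A

print(np.allclose(lhs, rhs))  # True
```

So "multiplication" and "application" are interchangeable here, which is exactly what makes dropping the parentheses harmless.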
I can't speak for Rudin, but to me this looks like a writer who is essentially identifying a matrix with its linear transformation.
I would write $A\vec{x}$ (to mean a matrix multiplied by a vector) and I would not use parentheses.
Then I usually separately define a linear transformation as $T(\vec{x}) := A\vec{x}$ and always use parentheses for that, just like any other function (i.e. you always write '$f(x)$'; '$fx$' just looks like nonsense).
Since it's an analysis book, this is not a big deal. But if you were teaching linear algebra, you'd want to emphasize the distinction.