I have much more experience programming than I do with advanced mathematics, so perhaps this is just a comfort thing for me, but I often get frustrated when I try to follow mathematical notation. Specifically, I get frustrated trying to keep track of what each variable signifies.
As a programmer, this would be completely unacceptable no matter how many comments you added explaining it:
#include <math.h>

float A(float P, float r, float n, float t) {
    return P * powf(1 + r / n, n * t);
}
Yet a mathematician would have no problem with this:
$A = P\ \left(1+\dfrac{r}{n}\right)^{nt}$
where
$A$ = final amount
$P$ = principal amount (initial investment)
$r$ = annual nominal interest rate (as a decimal)
$n$ = number of times the interest is compounded per year
$t$ = number of years
So why don't I ever see the following?
$\text{final_amount} = \text{principal}\; \left(1+\dfrac{\text{interest_rate}}{\text{periods_per_yr}}\right)^{\text{periods_per_yr}\cdot\text{years}}$
We are very, very lazy. I am very, very serious about this.
NB1: The history is told in Florian Cajori's *A History of Mathematical Notations*. In very old times there were no variables (and no formulas, really), and everything was incredibly verbose. Cajori's book beautifully traces the long and tortuous path from that to modern-day notation for variables; there are several sections on the notation of unknowns and of their powers.
NB2: Additionally, we usually deal with very complicated expressions, so using verbose names for variables renders things almost impossible. Writing down the formula for Gaussian curvature in terms of $E$, $F$, $G$ and the Christoffel symbols, if we wrote $\mathsf{Christoffel}^i_{jk}$ instead of $\Gamma^{i}_{jk}$, would turn differential geometry into a dead subject very soon :P