Math notation: Is there a clear standard?


Let's take trigonometry as an example.

We often see $ \theta = \sin^{-1}(x) $ used to mean: what is the angle $\theta$ whose sine is $x$? But by a similar argument, why can't $\sin^{-1}(x)$ mean $\frac{1}{\sin(x)}$, when $\sin^2(x)$ is usually read as $(\sin(x))^2$, not $\sin(\sin(x))$?

You may have noticed I am not totally confident about these conventions, but I feel we should have clarity. I'm not convinced we do.

Thoughts?


There are 4 answers below.

BEST ANSWER

This is not the first time this (sort of) question has been asked, and it will not be the last. I'll try to make the answer more general, because the spirit of the question is really: why is there so much variation with mathematical notation?

To the best of my knowledge, there is no single, overarching, universal standard for mathematical notation or even definitions.

The inconsistency you've observed with trigonometric ratios is part of a larger issue with functional notation in general. $f^{-1}$ generally denotes the inverse function. But what does $f^2$ represent? Most would take it to mean the composition $f \circ f$, and that is exactly the reading that causes your confusion when carried over to trig notation. The same goes for other functions, such as the logarithm: I wouldn't write $\log^2(x)$ without first explaining what I intend it to mean. If I mean $(\log x)^2$, I would just write it that way.
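To make the two readings of $f^2$ concrete, here is a minimal Python sketch using a made-up function $f$ (the function and its name are purely illustrative):

```python
# A hypothetical function f, used only to contrast the two readings of f^2.
def f(x):
    return 2 * x + 1

x = 3.0
composition = f(f(x))    # f^2 read as f∘f: f(f(3)) = f(7) = 15
square      = f(x) ** 2  # f^2 read as (f(x))^2: 7^2 = 49

print(composition)  # 15.0
print(square)       # 49.0
```

The two readings give different numbers even for this simple $f$, which is why the convention has to be stated.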

The same sort of inconsistency is prevalent in definitions as well. Is $0$ a natural number? Some definitions say yes, some would say no, which is why we need specific notation like $\mathbb{Z^+}$ to signify only the positive integers, a set which many (but not all) would consider more parsimoniously represented as $\mathbb{N}$. But avoiding ambiguity is more important than trying to be frugal with notation.

Some of the confusion is historical. In the not-so-distant past, $1$ was considered a prime number. You would be hard-pressed to find any modern literature or text that uses that definition.

Another thing I (personally) take issue with is overloaded notation - making the same symbol do double or triple duty. Take $\pi$: it generally represents the well-known transcendental constant, but in number theory it can also denote the prime-counting function $\pi(x)$. It's a sort of "you know it when you see it" situation. Satisfactory? That depends on how easily one is satisfied, I guess.

Anyway, here's another example of a similar question (rant?) about notation. Please read the answers; they're quite illuminating.

ANSWER

Since $\theta= \sin^{-1}x$ means $\theta$ is the angle whose sine is $x,$ the notation $\sin^2\theta$ ought to mean $\sin(\sin\theta).$ But unfortunately it has become standard to use that notation to mean $(\sin\theta)^2.$ This is infelicitous notation, but unfortunately it is standard.

ANSWER

What you are observing here is a clash or overloading of notation. There are only a finite number of symbols that we have to work with, so it is common for this to occur. In this specific example, we have $$ x^{-1} = \frac{1}{x} $$ for real numbers, and $$ f^{-1} = \text{the inverse of $f$} $$ for functions $f$. If you are not overly familiar with the notation of functions (the latter notation), it is not unreasonable to assume that $$ \sin^{-1}(x) = \frac{1}{\sin(x)}. $$ This is not the standard notation, and you will be easily misunderstood if you write it. On the other hand, it is common to leave off the parentheses when working with trig functions, so the obvious "correct" notation for $\frac{1}{\sin(x)}$ (because $\csc(x)$ is silly) also runs the risk of ambiguity: is $$ \sin x^{-1} = \frac{1}{\sin x} \qquad\text{or}\qquad \sin x^{-1} = \sin\left( \frac{1}{x} \right)? $$ In other words, there are three different concepts that all have very similar notation. Because of this, we choose notational conventions, then attempt to apply them consistently. In this case:

  1. It is probably better to use $\arcsin$ for the inverse of the sine function. This is very common notation, and unlikely to be confused with anything else.
  2. It is probably best to use parentheses all the time, and always encapsulate the argument of $\sin$ (or any other function). Hence $ \sin(x)$ is superior to $\sin x$.
  3. It is probably best to write $\frac{1}{x}$ (or $\frac{1}{\sin(x)}$, or better yet in this case, $\csc(x)$), and not use the exponential notation.
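The gap between the inverse-function and reciprocal readings is easy to verify numerically. A quick sketch using Python's standard `math` module (where the inverse sine is spelled `asin`):

```python
import math

x = 0.5

inverse_function = math.asin(x)     # the angle whose sine is 0.5, i.e. pi/6
reciprocal       = 1 / math.sin(x)  # csc(0.5), an entirely different number

print(inverse_function)  # ≈ 0.5236
print(reciprocal)        # ≈ 2.0858
```

The two values are nowhere near each other, so whichever convention a reader assumes, writing $\sin^{-1}(x)$ for the wrong one changes the mathematics, not just the typography.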

With regard to $\sin^2(x)$, I agree that this is also ambiguous notation. Because I work with iterated function systems all day, I automatically think of $f^n$ as meaning the $n$-fold composition of $f$ with itself, i.e. $f\circ f\circ \dotsb \circ f$. That being said, it is extremely common to interpret $\sin^2(x) = [\sin(x)]^2$. Again, I would recommend

  1. Don't write $\sin^2(x)$ to mean $[\sin(x)]^2$—use more parentheses, and make things as unambiguous as possible.
  2. Don't write $\sin^2 = \sin\circ\sin$ without first very clearly indicating that this is what you mean.
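To see that the two conventions for $\sin^2$ really disagree, here is a small Python sketch (the `iterate` helper is my own, purely illustrative, not a standard library function):

```python
import math

def iterate(f, n):
    """Return the n-fold composition f ∘ f ∘ ... ∘ f (n copies of f)."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

x = 1.0
composed = iterate(math.sin, 2)(x)  # sin(sin(1.0))
squared  = math.sin(x) ** 2         # (sin 1.0)^2

print(composed)  # ≈ 0.7456
print(squared)   # ≈ 0.7081
```

Since the values differ, a paper that writes $\sin^2$ without fixing a convention is genuinely ambiguous, not merely informal.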

These are not universal recommendations, and failing to follow them is not likely to cause major problems (since context usually makes it clear what is going on), but it is not unreasonable to suggest that we strive for clear, unambiguous notation. Good communication is the goal, after all. :)

ANSWER

When you see $$f^{-1}$$ it means the inverse function.

Thus $$f^{-1}(x)$$ means the value of the inverse function at $x$.

When you see $$ f(x)^{-1}$$ it means the reciprocal of the number $f(x)$, that is, $1/f(x)$.

To avoid confusion, people sometimes write $$\arcsin(x)$$ instead of $$\sin^{-1}(x)$$.
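In Python's `math` module, for instance, the inverse-function reading is spelled `asin`, and composing it with `sin` returns the original value, while the reciprocal reading gives something else entirely (a small sketch):

```python
import math

# math.asin is Python's spelling of arcsin, i.e. the inverse function.
x = 0.3
angle = math.asin(x)          # the angle whose sine is 0.3
round_trip = math.sin(angle)  # applying sin undoes asin

print(round_trip)         # ≈ 0.3
print(1 / math.sin(x))    # ≈ 3.3839, the reciprocal reading
```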

We learn these conventions through practice, but there is room for confusion at first.