I've come across what is, to me, the most precise, beautiful and thorough definition of what we know as the angle between two vectors. I say this because most literature either skims over things and starts talking about angles all of a sudden, or uses a contrived definition like the unique $\theta\in[0,\pi]$ such that $\|u\|\|v\|\cos\theta = u\cdot v$. Yes, it works fine, but it leaves me quite unsatisfied; I should like to have an "intrinsic" definition of an angle first, and then let the cosine function be defined to tell me things about angles. Why, the other way around, we have to define cosine by some magical power series, and then it turns out that the above definition makes angles behave as they should! To me it seems disingenuous, but thoughts on this are welcome.
Anyway, the construction comes from a book called Àlgebra Lineal i Geometria (Castellet/Llerena), and I'd like to know if anyone has seen it, other than in this book of course. I'll post the beginning of the section (translated from Catalan, and paraphrased):
Let $(E,\langle\cdot,\cdot\rangle)$ be a $2$-dimensional Euclidean space. In the set of pairs of unit vectors, define the equivalence relation $$(u,u')\sim(v,v') \iff \exists f\in SO(2) : f(u)=v,\ f(u') = v'.$$ This condition is proven to be equivalent to $\exists g\in SO(2) : g(u)=u',\ g(v) = v'$. We define an angle as one of these equivalence classes, and denote the class represented by $(u,u')$ by $[(u,u')] = \widehat{uu'}$. This extends easily to pairs of nonzero vectors by defining $\widehat{uv}$ as the angle determined by $\frac{u}{\|u\|},\frac{v}{\|v\|}$. Call the set of angles $A = (E\times E)/{\sim}$ and, for any unit vector $u\in E$, define a map $$\begin{aligned} SO(2)&\longrightarrow A \\ f&\longmapsto \widehat{uf(u)}\end{aligned}$$ This is in fact a bijection, and it allows us to transport the operation in $SO(2)$ to $A$: given $\alpha,\beta\in A$ with preimages $f,g$ respectively, define the sum $\alpha+\beta$ as the image of $f\circ g$. In class notation: $$\left.\begin{aligned}&\alpha = [(u,f(u))] \\ &\beta = [(f(u),gf(u))]\end{aligned}\right\}\Rightarrow \alpha+\beta = [(u,gf(u))]$$ Naturally, this sum has the same properties in $A$ as the operation in $SO(2)$ does. $A$ is then an abelian group, whose identity is $0 = \widehat{uu}$. The inverse, or opposite angle, of $\widehat{uf(u)}$ is $\widehat{f(u)u}$.
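To make the construction concrete, here is a small Python sketch of my own (not from the book): an angle is modeled by the matrix of the rotation carrying the first vector of a representative pair onto the second, and the sum of angles is composition of rotations.

```python
# Hypothetical sketch: an angle is represented by the matrix of the unique
# f in SO(2) with f(u) = u' for a representative pair (u, u'). The sum of
# two angles is the composition (matrix product) of their rotations.

def rotation(c, s):
    """Matrix [[c, -s], [s, c]] of an element of SO(2); needs c^2 + s^2 = 1."""
    return [[c, -s], [s, c]]

def compose(f, g):
    """Matrix product f @ g, i.e. the map x -> f(g(x))."""
    return [[sum(f[i][k] * g[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two sample angles, built from a Pythagorean pair so the entries are exact.
alpha = rotation(0.6, 0.8)
beta = rotation(0.8, -0.6)

# The sum alpha + beta is the image of the composed rotation.
s1 = compose(alpha, beta)
s2 = compose(beta, alpha)

# SO(2) is abelian, so angle addition is commutative.
assert all(abs(s1[i][j] - s2[i][j]) < 1e-12 for i in range(2) for j in range(2))

# The opposite angle corresponds to f^{-1}, i.e. the transpose:
# alpha + (-alpha) = 0, the identity angle.
neg_alpha = [[alpha[j][i] for j in range(2)] for i in range(2)]
zero = compose(alpha, neg_alpha)
assert all(abs(zero[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```

The names `rotation` and `compose` are mine; the point is only that the group structure on angles is literally the group structure of $SO(2)$.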
Here comes the fun part.
By fixing an orientation on $E$, each $f\in SO(2)$ has a corresponding matrix $$\begin{pmatrix}a & -b \\ b & a\end{pmatrix}\text{ with } a^2+b^2 = 1.$$ Let $\alpha$ be the angle corresponding to $f$. We define the cosine, and the sine, of $\alpha$ by $$\cos\alpha = a\qquad \sin\alpha = b$$
Too clever. And it gets better:
We'll make a couple of observations now about this definition. First, changing the orientation of $E$ changes the sign of $\sin\alpha$, but not that of $\cos\alpha$.
One has $$\cos0 = 1\qquad \sin0 = 0$$ since the angle $0$ corresponds to $\mathrm{id}$.
The angle $-\alpha$ (the opposite wrt the sum) corresponds to $f^{-1}$, whose matrix is the transpose of $f$'s matrix; therefore: $$\cos(-\alpha) = \cos\alpha\qquad \sin(-\alpha) = -\sin(\alpha)$$
The angle $\alpha + \beta$ corresponds to the composition of their respective maps. Thus, matrix multiplication gives: $$\begin{aligned}\cos(\alpha+\beta) &= \cos\alpha\cos\beta-\sin\alpha\sin\beta \\ \sin(\alpha+\beta) &= \sin\alpha\cos\beta+\cos\alpha\sin\beta\end{aligned}$$
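These addition formulas are easy to check numerically; a throwaway sketch, assuming nothing beyond the matrix form above:

```python
# Multiply two matrices of the form [[a, -b], [b, a]] and check that the
# entries of the product obey the angle-addition formulas.

def mul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.6, 0.8            # cos(alpha), sin(alpha); note a^2 + b^2 = 1
c, d = 5 / 13, 12 / 13     # cos(beta),  sin(beta);  note c^2 + d^2 = 1

p = mul([[a, -b], [b, a]], [[c, -d], [d, c]])

assert abs(p[0][0] - (a * c - b * d)) < 1e-12        # cos(alpha + beta)
assert abs(p[1][0] - (a * d + b * c)) < 1e-12        # sin(alpha + beta)
assert abs(p[0][0] ** 2 + p[1][0] ** 2 - 1) < 1e-12  # product stays in SO(2)
```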
To finish I'll put some subsequent propositions without proofs.
There exists one, and only one, angle $\pi$ such that $\pi+\pi = 0$; it is the angle with $\cos\pi = -1$ and $\sin\pi = 0$.
There exist two, and only two, angles $\delta_1,\delta_2$ such that $\delta_i + \delta_i = \pi$. They are the angles with $\cos\delta_1 = \cos\delta_2 = 0$ and $\sin\delta_1 = -\sin\delta_2 = 1$. We call these right angles.
$\widehat{uv}$ is a right angle iff $\langle u,v\rangle = 0$.
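In matrix terms (my rephrasing, not the book's) these propositions become transparent: $\pi$ corresponds to $-I$, and the two right angles to $\pm J$ where $J$ has columns $(0,1)$ and $(-1,0)$. A tiny check, exact since all entries are integers:

```python
# pi is the angle of the rotation -I (cos = -1, sin = 0); the two right
# angles are those of J and -J (cos = 0, sin = +-1).

def mul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
half_turn = [[-1, 0], [0, -1]]          # the angle pi
d1 = [[0, -1], [1, 0]]                  # right angle with sin = +1 (this is J)
d2 = [[0, 1], [-1, 0]]                  # right angle with sin = -1 (this is -J)

assert mul(half_turn, half_turn) == I   # pi + pi = 0
assert mul(d1, d1) == half_turn         # delta_1 + delta_1 = pi
assert mul(d2, d2) == half_turn         # delta_2 + delta_2 = pi

# A right angle really does send u to a vector orthogonal to it:
u = [3, 4]
v = [d1[0][0] * u[0] + d1[0][1] * u[1], d1[1][0] * u[0] + d1[1][1] * u[1]]
assert u[0] * v[0] + u[1] * v[1] == 0   # <u, J u> = 0
```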
The text continues proving things like these. My question is whether anyone has seen this, or a similar extensive treatment. Also though, I'm interested in finding any text that formally links the primitive angles, sine and cosine from geometry to the sine and cosine we now all know and love from calculus, or even complex analysis, preferably with geometry as a starting point.
Notes:
- Obviously there are well-definedness issues to address. It seems these are left to the reader.
- Should I post this on MathOverflow? I've never used it, but something tells me a bibliographic inquiry like this one could fit.
- Unfortunately, angles are now abstract objects, and we haven't at all defined the sine or cosine of a real number, so I'm thinking of a way to map real numbers to angles. Any comment on this would be appreciated!
Yes. In fact, $\operatorname{SO}(2)$ is isomorphic to $S^1$ (the circle), viewed as a subset of $\mathbb{C}$. An amusing way of seeing this is to consider all $2\times 2$ matrices of the form $$Z:=\begin{bmatrix}a&-b\\b&a\end{bmatrix}$$ for any choice of $a,b\in\mathbb{R}$. Exercise: check that the set of all such matrices is closed under addition and matrix multiplication, and thus forms a so-called algebra, whose only singular element is the zero matrix ($a=b=0$). In fact this algebra is isomorphic to $\mathbb{C}$. Now, you will recall from basic complex arithmetic that any nonzero complex number $z=a+ib$ can be written as $z=\rho e^{i\theta}$, where $\rho>0$ with $\rho^2=a^2+b^2$, and $\theta\in[0,2\pi)$ with $\cos\theta=a/\rho$, $\sin\theta=b/\rho$, are uniquely determined; but there is always something upsetting about this approach. Think instead of the complex number $z$ as a matrix $Z$ of the above form; then $\rho^2=\det Z$ and $\exp(\theta J)=Z/\rho$, where the $\exp$ function of matrices is defined as a power series (of matrices) $$\exp(M):=I+M+\frac{M^2}2+\frac{M^3}{3!}+\dotsb$$ and $$J=\begin{bmatrix}0&-1\\1&0\end{bmatrix},\text{ hence } Z=\sqrt{\det Z}\exp(\theta J).$$ If you don't like that exponential, just think of it as $$Z=\sqrt{\det Z}\, E\text{ for some matrix }E.$$
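The algebra isomorphism is easy to exhibit; here is a small sketch (helper names mine) showing that $a+ib\mapsto Z$ turns complex products into matrix products, with $\det Z=|z|^2$:

```python
# The correspondence z = a + ib  <->  Z = [[a, -b], [b, a]] is an algebra
# isomorphism: products go to products, and det Z = |z|^2, so det Z = 1
# corresponds exactly to |z| = 1, i.e. the circle S^1.

def to_matrix(z):
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

z, w = 3 + 4j, 1 - 2j

# Products correspond to products.
lhs = to_matrix(z * w)
rhs = mul(to_matrix(z), to_matrix(w))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))

# det Z = |z|^2.
assert abs(det(to_matrix(z)) - abs(z) ** 2) < 1e-9
```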
Now thinking geometrically, a matrix $Z$ of the above form represents a linear transformation in the plane, and $\det Z$ is the corresponding transformation of area, so $\sqrt{\det Z}$ is the change in length, provided length changes uniformly in all directions (isotropically). But isotropic change occurs if and only if angles (or their cosines) are preserved by the linear transformation... and they are: take two vectors $u,v\in\mathbb R^2$ and check that $u^*Z^*Zv=\det Z\,u^*v$. So in fact, the matrices $Z$ are nothing but a stretching by $\sqrt{\det Z}$ combined with a rotation $E:=Z/\sqrt{\det{Z}}$. (At this point you see that this rotation $E$ corresponds to one of angle $\theta$, but you don't need to think of $\theta$ yet.) If you impose the extra constraint that $\det Z=1$, the transformations $Z$ describe all possible rotations in the plane. The set of matrices of the form of $Z$ with $\det Z=1$ is known as $\operatorname{SO}(2)$, and this shows that it is isomorphic to the circle $S^1$.
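The identity $u^*Z^*Zv=\det Z\,u^*v$ is quick to verify numerically; a throwaway sketch with arbitrary values:

```python
# For Z = [[a, -b], [b, a]] one has Z^T Z = (a^2 + b^2) I = (det Z) I, so
# inner products, and hence cosines of angles, scale uniformly by det Z:
# the transformation is conformal.

a, b = 2.0, -0.5
Z = [[a, -b], [b, a]]
det_Z = a * a + b * b

def mat_vec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

u, v = [1.0, 2.0], [-3.0, 0.5]

# <Zu, Zv> = det Z * <u, v>
assert abs(dot(mat_vec(Z, u), mat_vec(Z, v)) - det_Z * dot(u, v)) < 1e-12
```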
If you prefer complex analysis, you may introduce the so-called unitary group $\operatorname{U}(1)$ by taking all complex numbers $z$ of unit length, $|z|=1$. But as we've seen above, this is nothing but $\operatorname{SO}(2)$ viewed from a different angle (pun unintended).
I'm not sure this clarifies things, but I find it fascinating to introduce complex numbers in this geometric way, as orientation-preserving linear and conformal (OPLAC) mappings, i.e., as plane transformations (mappings) that preserve orientation, straight lines (linear) and angles (conformal), rather than the mechanical $i^2=-1$ algebraic way.
To finish, let's revisit the matrix exponential function. If $L$ is an OPLAC map, let's look at $\exp L$. First we may decompose \begin{equation} L=\begin{bmatrix}m&-n\\n&m\end{bmatrix} =\begin{bmatrix}m&0\\0&m\end{bmatrix}+\begin{bmatrix}0&-n\\n&0\end{bmatrix} =mI+nJ,\text{ with $J$ as above}. \end{equation} Since $I$ and $J$ commute (check), it follows that \begin{equation} \exp L=\exp(mI)\exp(nJ)=e^m\exp(nJ). \end{equation} Now, given another OPLAC map $Z$, one may ask whether it is possible to find $L$ such that $Z=\exp L$. If $Z=0$, no chance. But if $Z\neq0$ it is sufficient to take $m=\log\sqrt{\det Z}$ and $n$ such that $\exp(nJ)=Z/\sqrt{\det Z}$. Working out the exponential, we see Euler's formula in matrix form: \begin{equation} \begin{aligned} \exp(nJ) &= I+nJ+\frac{n^2J^2}2+\frac{n^3J^3}{3!}+\frac{n^4J^4}{4!}+\dotsb \\ &= I+nJ-\frac{n^2}2 I-\frac{n^3}{3!}J+\frac{n^4}{4!}I+\dotsb \\ &= (\cos n)I+(\sin n)J. \end{aligned} \end{equation} Again this connects to the rotation and the trigonometry, but by using the cosine and sine as power series.
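One can also watch the truncated power series converge to $(\cos n)I+(\sin n)J$ numerically; a rough sketch with a hand-rolled `expm` (my own naive implementation, not an optimized one):

```python
import math

# Sum the series exp(M) = I + M + M^2/2! + ... for M = nJ and compare with
# the closed form (cos n) I + (sin n) J.

def mul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(m1, m2):
    return [[m1[i][j] + m2[i][j] for j in range(2)] for i in range(2)]

def scale(t, m):
    return [[t * m[i][j] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
J = [[0.0, -1.0], [1.0, 0.0]]

def expm(M, terms=30):
    """Truncated matrix exponential; 30 terms is plenty for small M."""
    result, term = I, I
    for k in range(1, terms):
        term = scale(1.0 / k, mul(term, M))
        result = add(result, term)
    return result

n = 0.75
lhs = expm(scale(n, J))
rhs = add(scale(math.cos(n), I), scale(math.sin(n), J))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
```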