I have seen a few rigorous constructions of sine and cosine in real analysis courses. They are usually done via power series, via their unique characterization through a differential equation, or via the complex exponential function, as in Rudin. These approaches are favored because they lend themselves directly to analytic study. However, I was wondering how one would carry out a fully detailed construction by means of analytic geometry.
Because the ratios of the sides of similar triangles are invariant, as is everything under translations of the plane, it suffices to consider only right triangles embedded in the unit circle $S^1 = \{(x,y) \in \mathbb{R}^2 \mid x^2 + y^2 = 1\}$. It also suffices, for the moment, to define sine and cosine on $[0, 2\pi)$, since we can then extend them periodically. So we wish to define a bijective map $f:[0,2\pi) \rightarrow S^1$ with $f(\theta) = (c(\theta), s(\theta))$ for all $\theta \in [0,2\pi)$, where $c$ and $s$ will be the desired principal cosine and sine functions. How exactly can one define $f$? I have seen some constructions that mention angles, but they don't rigorously define what an angle is in analytic geometry.
Fix $\theta\in[0,2\pi)$ and let $r_\theta$ be the ray with origin at $(0,0)$ and forming an angle of $\theta$ with respect to $OX^+$, that is, the positive $x$-axis.
Then $f(\theta)$ will be the unique intersection of $r_\theta$ and $S^1$.
Alternatively, you can define $f(\theta)$ as the (unique) point of $S^1$ for which the length of the arc of $S^1$ traversed counterclockwise from $(1,0)$ to $f(\theta)$ equals $\theta$. This avoids any prior notion of angle: arc length can itself be defined analytically, e.g. for a point $(x, \sqrt{1-x^2})$ on the upper semicircle the arc length from $(1,0)$ is $\int_x^1 \frac{dt}{\sqrt{1-t^2}}$.
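As a sanity check, the arc-length definition can be made computational: the arc length from $(1,0)$ to $(x, \sqrt{1-x^2})$ is $\int_x^1 dt/\sqrt{1-t^2}$, which is strictly decreasing in $x$, so inverting it by bisection recovers $c(\theta)$ and $s(\theta)$ without ever calling a trig function. Below is a minimal Python sketch of this idea (the names `arc_length` and `cos_sin`, the substitution used to tame the integrand, and the numerical parameters are my own choices, not part of the original argument):

```python
import math  # used only for sqrt, and to compare against the library cos/sin


def arc_length(x, n=20000):
    """Arc length of the unit circle from (1, 0) counterclockwise to
    (x, sqrt(1 - x^2)), i.e. the integral of dt/sqrt(1 - t^2) from x to 1.

    The substitution t = 1 - u^2 removes the singularity at t = 1 and
    turns this into the integral of 2/sqrt(2 - u^2) du over
    [0, sqrt(1 - x)], which we evaluate with the midpoint rule.
    """
    b = math.sqrt(1.0 - x)
    h = b / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += 2.0 / math.sqrt(2.0 - u * u)
    return total * h


def cos_sin(theta):
    """For theta in (0, pi), find by bisection the unique x in (-1, 1)
    with arc_length(x) = theta (arc_length is strictly decreasing),
    and return (c(theta), s(theta)) = (x, sqrt(1 - x^2))."""
    lo, hi = -1.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if arc_length(mid) > theta:
            lo = mid  # target point lies further to the right
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return x, math.sqrt(max(0.0, 1.0 - x * x))


# The result should agree with the library functions to high accuracy.
c, s = cos_sin(1.0)
```

This only covers $\theta \in (0, \pi)$, i.e. the upper semicircle; the lower semicircle is handled symmetrically by reflecting the sign of $s$, exactly as one would in the formal construction.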