I am almost sure this question has been asked before, but I had a long look and it's possible I lack the language to describe my question to the search box properly.
Assume we are working with real numbers. Call a "simple" rotation one represented by a matrix $R$ that is the identity matrix except for the 4 entries determined by a pair of indices $x$ and $y$, where those entries are given by: $$ r_{xx}=r_{yy}=\cos(\theta) $$ $$ r_{xy}=-\sin(\theta) $$ $$ r_{yx}=\sin(\theta) $$
For example this matrix: $$ \begin{bmatrix} 1& 0& 0& 0& 0\\ 0& \cos(\theta)& 0& -\sin(\theta)& 0\\ 0& 0& 1& 0& 0\\ 0& \sin(\theta)& 0& \cos(\theta)& 0\\ 0& 0& 0& 0& 1 \end{bmatrix} $$
This rotates the plane spanned by $e_2$ and $e_4$ by $\theta$.
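(For concreteness, here is a small pure-Python sketch of such a "simple" rotation matrix; the function name `givens` and the 0-based indexing are my own choices, so the question's $e_2,e_4$ plane corresponds to indices 1 and 3.)

```python
import math

def givens(n, x, y, theta):
    """Return the n x n identity matrix modified in the 4 entries that
    rotate the (e_x, e_y) coordinate plane by theta (0-based indices)."""
    g = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    c, s = math.cos(theta), math.sin(theta)
    g[x][x] = g[y][y] = c   # r_xx = r_yy = cos(theta)
    g[x][y] = -s            # r_xy = -sin(theta)
    g[y][x] = s             # r_yx = sin(theta)
    return g

# The 5 x 5 example matrix from above, rotating the e_2, e_4 plane:
R = givens(5, 1, 3, math.pi / 6)
```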
I have two questions:
Does this notion of "simple rotations" have a proper name?
My main question: if one has a rotation in a single arbitrary plane in $n$ dimensions, spanned by non-basis vectors, is it possible (and, more importantly, always possible) to decompose it as a combination of these simple rotations? If so, is there an algorithmic way to do this, and does it have a name?
For bonus points, if there's anything I should know about how complex coordinates or a complex $\theta$ behave in this context, I would be happy to hear about it.
These rotations are called Givens rotations, and every rotation can be decomposed into Givens rotations. Think of an $n \times n$ orthogonal matrix in terms of its columns $v_1, \dots, v_n$, which form an orthonormal basis. Multiplying such an orthogonal matrix by a Givens rotation on the left has the effect of applying that rotation to each of the vectors $v_i$. Our goal will be to "straighten out" this basis by repeatedly applying Givens rotations until it's the standard basis $e_1, \dots, e_n$ of $\mathbb{R}^n$.
A Givens rotation allows us to rotate in any coordinate plane, so we can argue as follows. Write $v_1 = (v_{11}, v_{12}, \dots)$. First, by rotating $90^{\circ}$ in a coordinate plane we can swap any two entries up to sign, $(x, y) \mapsto (-y, x)$. So swap any nonzero entry into the first coordinate, so that $v_{11} \neq 0$. Next, by an appropriate rotation in the $e_i, e_j$-coordinate plane, if $v_{1i}, v_{1j}$ are both nonzero we can rotate so that $v_{1j} = 0$. So rotate in the $e_1, e_j$-coordinate plane for any $j$ such that $v_{1j}$ is nonzero until all entries other than $v_{11}$ are equal to zero. At the end of this process we have $v_1 = \pm e_1$ (and if $v_1 = -e_1$ we can arrange $v_1 = e_1$ by a final $180^{\circ}$ rotation), and $v_2, \dots, v_n$ must be orthogonal to it, so they are contained in the copy of $\mathbb{R}^{n-1}$ spanned by $e_2, \dots, e_n$ (in matrix terms, our original orthogonal matrix is now a block matrix). Now we can induct on $n$.
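This straightening procedure can be sketched in code; it is essentially the Givens-rotation version of QR factorization applied to an orthogonal matrix. The sketch below (pure Python, 0-based indices, helper names `givens`, `matmul`, `straighten` are my own, and no attention is paid to numerical robustness) zeroes out the below-diagonal entries of each column one at a time, choosing each angle with `atan2` so the targeted entry becomes $0$:

```python
import math

def givens(n, x, y, theta):
    """n x n identity except for a rotation by theta in the (e_x, e_y) plane."""
    g = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    c, s = math.cos(theta), math.sin(theta)
    g[x][x] = g[y][y] = c
    g[x][y], g[y][x] = -s, s
    return g

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def straighten(q):
    """Left-multiply Givens rotations onto the orthogonal matrix q until it
    is (numerically) the identity, up to the sign of the last diagonal
    entry.  Returns the list of (x, y, theta) rotations applied, in order,
    together with the reduced matrix."""
    n = len(q)
    m = [row[:] for row in q]
    steps = []
    for col in range(n - 1):
        for row in range(col + 1, n):
            a, b = m[col][col], m[row][col]
            if abs(b) < 1e-14:
                continue  # entry already zero, nothing to do
            theta = math.atan2(-b, a)  # chosen so the new (row, col) entry is 0
            g = givens(n, col, row, theta)
            m = matmul(g, m)
            steps.append((col, row, theta))
    return steps, m
```

Undoing the recorded rotations in order (i.e. multiplying the inverse rotations `givens(n, x, y, -theta)` together) reconstructs `q`, which is the explicit decomposition into simple rotations. If `q` is a rotation, `straighten` reduces it all the way to the identity; if it is a reflection, a $-1$ survives in the last diagonal entry, matching the remark below.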
At the very last step we may get $v_n = -e_n$ rather than $v_n = e_n$ but this could only happen if our original matrix was a reflection rather than a rotation.