From Serge Lang's Linear Algebra:
Define a rotation of $V$ to be a real unitary map $A$ of $V$ whose determinant is $1$. Show that the matrix of $A$ relative to an orthogonal basis of $V$ is of type:
$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$$
for some real numbers $a, b$ such that $a^2 + b^2 = 1$.
...
My first thought about the problem was the very simple formula for the determinant of any $2 \times 2$ $abcd$ matrix:
$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$
If $ad-bc=1$, then one could use $ad=1+bc$ and $bc=ad - 1$ for the proof.
From the question it seems that $A$ is a unitary operator $A: V \rightarrow V$, where $V$ is a vector space over $\mathbb{R}$. If $\{v_1, v_2\}$ is an orthogonal basis of $V$, then clearly $\langle Av_i, Av_j \rangle = \langle A^TAv_i, v_j \rangle = \langle v_i, v_j \rangle$ for $i, j \in \{1, 2\}$.
Solution
After these very basic deductions, I did not know how to proceed, thus I referred to Solutions Manual for Lang's Linear Algebra by Rami Shakarchi:
Let $\langle v_1, v_2 \rangle$ be an orthogonal basis for $V$. Let $w_i = Av_i$ and
$$w_1 = av_1 + bv_2$$
$$w_2 = cv_1 + dv_2$$
The matrix representing $V$ in the chosen basis is:
$$\begin{pmatrix} a & c \\ b & d \end{pmatrix}$$
Then, since $\langle Av_i, Av_i \rangle = \langle v_i, v_i \rangle$, we have equations (*):
$$(a^2 - 1)\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle = 0\\ c^2\langle v_1, v_1 \rangle + (d^2-1) \langle v_2, v_2 \rangle = 0$$
But $dw_1 - bw_2 = (ad-bc)v_1 = v_1$, so:
$$\langle v_1, v_1 \rangle = \langle A(dv_1 - bv_2), A(dv_1 - bv_2) \rangle = d^2\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle$$
thus (*) implies $a^2 = d^2$ and $b^2 = c^2$. Moreover,
$$ 0 =\langle v_1, v_2 \rangle = \langle Av_1, Av_2 \rangle = ac\langle v_1, v_1 \rangle + bd \langle v_2, v_2 \rangle$$
so $ac$ and $bd$ are of opposite signs and therefore the matrix of $A$ has the desired form.
Concerns about the proof above:
The proof above makes some of the same initial deductions that I made, but some parts remain obscure to me:
How were the equations for $w_1$ and $w_2$ derived? The author of the proof mentions that $w_i = Av_i$, so it seems the equations come from multiplying the matrix of $A$ with $v_i$, but the equations make it look as though $v_1$ and $v_2$ were vector components rather than vectors themselves (and they should be vectors, since they belong to the basis).
The author of the proof mentions the "matrix representing $V$ in the chosen basis" and shows a certain $acbd$ matrix. What exactly is this matrix? Does it contain the coordinate vector associated with each basis image as a column, or is it an arbitrary representation of a rotation matrix that should be shown to have a certain form?
The author deduces the two homogeneous equations (*) from the fact that $\langle Av_i, Av_i \rangle = \langle v_i, v_i \rangle$. How exactly are these equations deduced? I understand, for example, that if $a^2 + b^2 = 1$ then $-b^2 = a^2 - 1$, but why exactly is $(a^2 - 1)\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle = b^2 \langle v_2, v_2 \rangle - b^2\langle v_1, v_1 \rangle = 0$?
Question
Is there any explicit proof showing that a real unitary map with determinant $1$ has the matrix form above relative to an orthogonal basis?
If the solution above is a "standard" rigorous proof, is it possible to expand it more comprehensively, addressing the obscurities mentioned above?
Thank you!
The proof is rigorous and sound. I'll try to explain it a bit better.
$v_1, v_2$ is a generic orthogonal basis for $V$. Using the $\langle\,\rangle$ brackets for it here is misleading, since later they are used to denote the scalar product. In fact, the orthogonality condition can be written as $$\langle v_1, v_2 \rangle = 0$$
Here $w_i$ are defined starting from $v_i$.
Here the author is using the fact that $v_1, v_2$ is a basis, so there exist coefficients $a, b, c, d$ such that $w_1$ is a linear combination of $v_1, v_2$, and the same holds for $w_2$.
Since you know how $A$ acts on a basis, you can write down its entries. In particular, if $M=(v_1|v_2)$ and $N=(w_1|w_2)=(Av_1|Av_2)$, writing $A$ in the chosen basis means finding the 4 entries of $A$ such that $$N = MA$$
Here the author is using the linearity of scalar product, and the orthogonality of the basis. In fact $$ \langle Av_1, Av_1 \rangle = \langle w_1, w_1 \rangle = \langle av_1 + bv_2, av_1 + bv_2 \rangle =$$ $$ a^2\langle v_1, v_1 \rangle + ab \langle v_1, v_2 \rangle + ab \langle v_2, v_1 \rangle + b^2 \langle v_2, v_2 \rangle = a^2\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle $$ so $$ \langle Av_1, Av_1 \rangle = \langle v_1, v_1 \rangle \implies a^2\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle = \langle v_1, v_1 \rangle $$ $$ \implies (a^2-1)\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle =0 $$ and analogously for the second relation.
Here he is just substituting $$w_1 = av_1 + bv_2$$ $$w_2 = cv_1 + dv_2$$ and using that the determinant of $A$ is $1=ad-bc$
Again, he is using linearity of the scalar product, orthogonality of the basis, and $\langle Av, Aw \rangle = \langle v, w \rangle$ for all vectors $v, w$ (this is the defining property of a real unitary, i.e. orthogonal, map). Here are the missing steps: $$\langle v_1, v_1 \rangle = \langle dw_1 - bw_2,dw_1 - bw_2\rangle = \langle dAv_1 - bAv_2,dAv_1 - bAv_2\rangle =$$ $$ \langle A(dv_1 - bv_2), A(dv_1 - bv_2) \rangle = \langle dv_1 - bv_2, dv_1 - bv_2 \rangle = $$ $$ d^2\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle+ db \langle v_1, v_2 \rangle + db \langle v_2, v_1 \rangle = d^2\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle $$
$$0 =(a^2 - 1)\langle v_1, v_1 \rangle + b^2 \langle v_2, v_2 \rangle = (a^2 - 1)\langle v_1, v_1 \rangle + \langle v_1, v_1 \rangle - d^2\langle v_1, v_1 \rangle =a^2 \langle v_1, v_1 \rangle - d^2\langle v_1, v_1 \rangle \implies a^2=d^2 $$ and similarly for the other relation.
Again, linearity and orthogonality.
Using that $\langle v_i, v_i \rangle > 0$, by the positive-definiteness of the scalar product.