An orthogonal $n \times n$ matrix $A$ is called elementary if the corresponding linear transformation $L_A:\mathbb{R}^n \to \mathbb{R}^n$ fixes an $(n-2)$-dimensional subspace. Prove that every orthogonal matrix $M$ is a product of at most $(n-1)$ elementary orthogonal matrices.
I see what the question is getting at, but I think I've come up with a rather trivial counterexample, as follows. Let $n = 3$ and let $I$ be the identity matrix, which is orthogonal. Define $$A = \begin{bmatrix} 1 &0 &0\\ 0 &0 &1\\ 0 &1 &0\\ \end{bmatrix}, \quad B = \begin{bmatrix} 0 &0 &1\\ 0 &1 &0\\ 1 &0 &0\\ \end{bmatrix}, \quad C = \begin{bmatrix} 0 &1 &0\\ 1 &0 &0\\ 0 &0 &1\\ \end{bmatrix}.$$ Each of these is an elementary orthogonal matrix: each fixes a $1$-dimensional (that is, $(n-2)$-dimensional) subspace.
Then $$ABCB = I,$$ showing that $I$ is a product of four elementary matrices. More trivially, writing $I = IABCB$ exhibits it as a product of five elementary orthogonal matrices.
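Just to double-check the arithmetic in the example above, here is a quick NumPy verification (nothing deep, only the matrix products):

```python
import numpy as np

A = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
B = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
C = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])

# Each factor is orthogonal: P @ P.T is the identity.
for P in (A, B, C):
    assert np.array_equal(P @ P.T, np.eye(3, dtype=int))

# The four-factor product A B C B really is the identity.
print(np.array_equal(A @ B @ C @ B, np.eye(3, dtype=int)))  # True
```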
I must be misunderstanding the question, but that is the exact wording (it appeared on a qualifying exam). Am I misunderstanding something? A clarification, and ideally a proof, would be appreciated. Thanks in advance.
I'm not sure this is the argument they're looking for, but if nothing else it's a start.
First, since $A$ is an orthogonal matrix, every eigenvalue of $A$ has modulus $1$, and since $p_A$ has real coefficients, the non-real eigenvalues come in conjugate pairs; one can therefore check that the characteristic polynomial of $A$ factors over $\mathbb{C}$ as $$ p_A(x) = (-1)^n(x-1)^{m_+}(x+1)^{m_-}\prod_{j=1}^M(x-e^{i\theta_j})^{m_j}(x-e^{-i\theta_j})^{m_j} $$ for distinct $\theta_1,\dotsc,\theta_M \in (0,\pi)$ and suitable multiplicities $m_+,m_-,m_1,\dotsc,m_M \in \mathbb{N} \cup \{0\}$.
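As a numerical illustration of this spectral picture, one can draw a random orthogonal matrix (a sketch using NumPy; the matrix is generated as the $Q$ factor of a QR decomposition of a Gaussian matrix) and check that all eigenvalues lie on the unit circle and that the spectrum is closed under conjugation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 5x5 orthogonal matrix: the Q factor of a Gaussian matrix.
A, _ = np.linalg.qr(rng.standard_normal((5, 5)))
eigvals = np.linalg.eigvals(A)

# Every eigenvalue lies on the unit circle, i.e. is 1, -1, or e^{±iθ}...
assert np.allclose(np.abs(eigvals), 1.0)

# ...and since A is real, the spectrum is closed under conjugation,
# so the non-real eigenvalues pair up as e^{iθ}, e^{-iθ}.
assert np.allclose(np.sort_complex(eigvals), np.sort_complex(eigvals.conj()))
print("all eigenvalues on the unit circle, in conjugate pairs")
```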
Next, since $A$, viewed as a complex matrix, is unitary (and in particular normal), it is unitarily diagonalisable, i.e., $\mathbb{C}^n$ decomposes as a direct sum $$ \mathbb{C}^n = N_{\mathbb{C}}(A-I) \oplus N_{\mathbb{C}}(A+I) \oplus \bigoplus_{j=1}^M \left(N_{\mathbb{C}}(A-e^{i\theta_j}I) \oplus N_{\mathbb{C}}(A-e^{-i\theta_j}I)\right) $$ of pairwise orthogonal distinct eigenspaces, where $N_{\mathbb{C}}(B)$ denotes the nullspace in $\mathbb{C}^n$ of an $n \times n$ complex matrix $B$, and where $$ \dim_\mathbb{C}N_{\mathbb{C}}(A-I) = m_+, \quad \dim_\mathbb{C}N_{\mathbb{C}}(A+I) = m_-, \\ \dim_\mathbb{C}N_{\mathbb{C}}(A-e^{i\theta_j}I) = \dim_\mathbb{C}N_{\mathbb{C}}(A-e^{-i\theta_j}I) = m_j. $$
Next, since $A$ is a real matrix, we have that $N_{\mathbb{C}}(A-e^{-i\theta_j}I) = \overline{N_{\mathbb{C}}(A-e^{i\theta_j}I)}$ for each $j$, which you can use to check that $\mathbb{R}^n$ decomposes as a direct sum $$ \mathbb{R}^n = N_{\mathbb{R}}(A-I) \oplus N_\mathbb{R}(A+I) \oplus \bigoplus_{j=1}^M V_j, $$ where $N_\mathbb{R}(C)$ denotes the nullspace in $\mathbb{R}^n$ of an $n\times n$ real matrix $C$ and where $$ N_\mathbb{R}(A-I) = N_\mathbb{C}(A-I) \cap \mathbb{R}^n, \quad N_\mathbb{R}(A+I) = N_\mathbb{C}(A+I) \cap \mathbb{R}^n,\\ V_j := \left(N_\mathbb{C}(A-e^{i\theta_j}I) \oplus N_\mathbb{C}(A-e^{-i\theta_j}I)\right) \cap \mathbb{R}^n, $$ with $$ \dim_\mathbb{R}N_\mathbb{R}(A-I) = m_+, \quad \dim_\mathbb{R}N_\mathbb{R}(A+I) = m_-,\quad \dim_\mathbb{R}V_j = 2m_j. $$
Now, let $\{e_{+,1},\dotsc,e_{+,m_+}\}$ be your favourite orthonormal ordered basis for $N_\mathbb{R}(A-I)$ and let $\{e_{-,1},\dotsc,e_{-,m_-}\}$ be your favourite orthonormal ordered basis for $N_\mathbb{R}(A+I)$. For each $j$, you can use an orthonormal basis for the complex subspace $N_\mathbb{C}(A-e^{i\theta_j}I)$ of $\mathbb{C}^n$ to construct an orthonormal ordered basis $\{e_{j,1},e_{j,1}^\prime,\dotsc,e_{j,m_j},e_{j,m_j}^\prime\}$ for the real subspace $V_j$ of $\mathbb{R}^n$ with the following property: $$ Ae_{j,k} = \cos(\theta_j)e_{j,k} + \sin(\theta_j)e_{j,k}^\prime, \quad Ae_{j,k}^\prime = -\sin(\theta_j)e_{j,k} + \cos(\theta_j)e_{j,k}^\prime. $$ At last, we get an orthonormal ordered basis $$ \gamma := \{e_{+,1},\dotsc,e_{+,m_+}\} \cup \{e_{-,1},\dotsc,e_{-,m_-}\} \cup \bigcup_{j=1}^M \{e_{j,1},e_{j,1}^\prime,\dotsc,e_{j,m_j},e_{j,m_j}^\prime\} $$ for $\mathbb{R}^n$ such that if $S$ is the orthogonal change of coordinates matrix from $\gamma$ to the standard ordered basis of $\mathbb{R}^n$, i.e., $S$ is the $n \times n$ orthogonal matrix whose columns are the vectors in $\gamma$ in their given order, then $S^{-1}AS$ is the block-diagonal matrix $$ S^{-1}AS = I_{m_+} \oplus (-I_{m_-}) \oplus \bigoplus_{j=1}^M \begin{pmatrix}\cos(\theta_j)&-\sin(\theta_j)\\ \sin(\theta_j) & \cos(\theta_j) \end{pmatrix}^{\oplus m_j}. $$ In other words, if we write $m_+ = 2q_+ + r_+$ and $m_- = 2q_- + r_-$, where $q_+,q_- \in \mathbb{N} \cup \{0\}$ and where $r_+, r_- \in \{0,1\}$, then $S^{-1}AS$ is a block-diagonal matrix formed from $2 \times 2$ and $1 \times 1$ orthogonal blocks along the diagonal: $$ S^{-1}AS = I_2^{\oplus q_+} \oplus I_{r_+} \oplus (-I_2)^{\oplus q_-} \oplus (-I_{r_-}) \oplus \bigoplus_{j=1}^M \begin{pmatrix}\cos(\theta_j)&-\sin(\theta_j)\\ \sin(\theta_j) & \cos(\theta_j) \end{pmatrix}^{\oplus m_j}, $$ a total of $\mu := q_+ + r_+ + q_- + r_- + \sum_{j=1}^M m_j$ blocks (where $I_{r_\pm}$ is an empty block if $r_\pm = 0$). Since $n = 2q_+ + r_+ + 2q_- + r_- + 2\sum_{j=1}^M m_j$, we get $n - \mu = q_+ + q_- + \sum_{j=1}^M m_j$, so $\mu \leq n-1$ unless $q_+ = q_- = M = 0$, which forces $n \leq 2$; in that degenerate case an $(n-2)$-dimensional subspace is trivial, so $A$ is itself elementary and there is nothing to prove. We may therefore assume $\mu \leq n-1$.
Finally, if $B_k$ is the $k$th of these $2 \times 2$ or $1 \times 1$ blocks ordered from top to bottom, if $C_k$ is the matrix obtained from $S^{-1}AS$ by keeping the $k$th block intact and replacing the rest by the appropriately sized identity matrix, and if $A_k := SC_kS^{-1}$, then $$ A = \prod_{k=1}^\mu A_k, $$ where $A_k$ is an $n \times n$ orthogonal matrix fixing an $(n-2)$-dimensional subspace of $\mathbb{R}^n$ (if $B_k$ is $2 \times 2$) or an $(n-1)$-dimensional subspace of $\mathbb{R}^n$ (if $B_k$ is $1 \times 1$; a fortiori it then fixes an $(n-2)$-dimensional subspace), i.e., an elementary orthogonal matrix.
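For what it's worth, the whole construction is easy to check numerically. The sketch below uses NumPy/SciPy: the real Schur form plays the role of $S^{-1}AS$, since for a real normal matrix (such as an orthogonal one) the real Schur form is block diagonal with exactly the $1 \times 1$ and $2 \times 2$ blocks described above. This simple version does not bother grouping the $\pm 1$ entries into $2 \times 2$ blocks, so it may produce more than $\mu$ factors, but each factor is still elementary and their product recovers $A$:

```python
import numpy as np
from functools import reduce
from scipy.linalg import schur

rng = np.random.default_rng(1)
n = 6

# A random n x n orthogonal matrix.
A, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Real Schur form A = S T S^T with S orthogonal; since A is orthogonal
# (hence normal), T is block diagonal: 1x1 blocks (+-1) and 2x2 rotations.
T, S = schur(A, output='real')

# Cut T into its diagonal blocks; each block yields one elementary
# factor A_k = S C_k S^T, where C_k keeps the k-th block of T and is
# the identity elsewhere.
factors = []
i = 0
while i < n:
    size = 2 if i + 1 < n and abs(T[i + 1, i]) > 1e-10 else 1
    C = np.eye(n)
    C[i:i + size, i:i + size] = T[i:i + size, i:i + size]
    factors.append(S @ C @ S.T)
    i += size

# Each factor is orthogonal and fixes a subspace of dimension >= n - 2,
# and the product of the factors recovers A.
for F in factors:
    assert np.allclose(F @ F.T, np.eye(n))
assert np.allclose(reduce(np.matmul, factors), A)
print(f"A recovered as a product of {len(factors)} elementary factors")
```

The key point the code exploits is that the $C_k$ have disjoint nontrivial blocks, so $\prod_k C_k = T$ in any order, and hence $\prod_k SC_kS^{T} = STS^{T} = A$.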