The subgroup $O_n=\{M\in GL_n(\mathbb{R}) \mid {}^tM M = I_n\}$ is closed in $GL_n(\mathbb{R})$ because it is the inverse image of the closed set $\{I_n\}$ under the continuous map $X\mapsto {}^tX X$. $O_n$ is also bounded: with the norm $\|X\| = \sqrt{\operatorname{tr}({}^tX X)}$, every $M\in O_n$ satisfies $\|M\| = \sqrt{\operatorname{tr}(I_n)} = \sqrt{n}$. So $O_n$ is a compact subgroup of $GL_n(\mathbb{R})$.
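(As a quick numerical sanity check of the boundedness claim — the example below is my own illustration, not part of the argument — ${\operatorname{tr}({}^tM M)} = \operatorname{tr}(I_n) = n$ forces the Frobenius norm of every orthogonal matrix to be exactly $\sqrt{n}$:)

```python
import numpy as np

# Illustration: for M in O_n, tr(^tM M) = tr(I_n) = n, so the Frobenius
# norm of every orthogonal matrix is exactly sqrt(n).
n = 4
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthogonal matrix
assert np.allclose(Q.T @ Q, np.eye(n))            # ^tQ Q = I_n
assert np.isclose(np.linalg.norm(Q), np.sqrt(n))  # ||Q|| = sqrt(n)
```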
I see it claimed without proof in many places that $O_n$ is a maximal compact subgroup. How can we see this?
Recall the $QR$ decomposition - any invertible $n\times n$ matrix $A$ can be decomposed uniquely as $A=QR$ with $Q$ orthogonal, and $R$ upper triangular with positive entries on the diagonal.
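(If you want to experiment numerically — a sketch with my own helper name `qr_positive` — note that `numpy.linalg.qr` returns *a* QR factorization, but the diagonal of $R$ need not be positive; flipping signs with $S=\operatorname{diag}(\pm 1)$ gives $A = (QS)(SR)$, recovering the unique factorization with positive diagonal used here:)

```python
import numpy as np

# Sketch (helper name mine): normalize numpy's QR so that R has a
# positive diagonal, as in the uniqueness statement above.
def qr_positive(A):
    Q, R = np.linalg.qr(A)
    s = np.where(np.diag(R) < 0, -1.0, 1.0)
    # Flip the columns of Q and the rows of R together: A = (QS)(SR).
    return Q * s, s[:, None] * R

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))          # invertible with probability 1
Q, R = qr_positive(A)
assert np.allclose(Q @ R, A)             # A = QR
assert np.allclose(Q.T @ Q, np.eye(3))   # Q orthogonal
assert np.allclose(R, np.triu(R))        # R upper triangular
assert np.all(np.diag(R) > 0)            # positive diagonal
```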
Now, pick any $A\in GL_n(\mathbb{R})$ which is not orthogonal and consider the group $G$ generated by $O_n$ and $A$. We will show this group is not compact.
First, write $A = QR$. Since $Q\in O_n\subseteq G$, we get $R = Q^{-1}A\in G$. Moreover $R\notin O_n$: otherwise $A = QR$ would be a product of orthogonal matrices, hence orthogonal.
Now, look at the diagonal entries of $R$. Since $R$ is upper triangular, the diagonal of $R^m$ is $(R_{11}^m,\dots,R_{nn}^m)$. If some diagonal entry is larger than $1$, then in successive powers of $R$ that entry goes to infinity, so the powers of $R$ have no convergent subsequence and $G$ is not compact. Hence we may assume every diagonal entry is at most $1$ (recall they are positive). But $R^{-1}\in G$ is also upper triangular, with diagonal entries $1/R_{11},\dots,1/R_{nn}$, so making the same argument on $R^{-1}$ shows that every diagonal entry of $R$ must equal $1$.
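(The diagonal fact is easy to check numerically — the example matrix below is my own illustration: powers of an upper triangular matrix have diagonal $(R_{11}^m,\dots,R_{nn}^m)$, so a diagonal entry greater than $1$ makes the powers unbounded.)

```python
import numpy as np

# Illustration: the diagonal of R^m is the m-th power of the diagonal of R,
# so the (0,0) entry 2 > 1 blows up under powers.
R = np.array([[2.0, 1.0],
              [0.0, 1.0]])
for m in range(1, 6):
    Rm = np.linalg.matrix_power(R, m)
    assert np.isclose(Rm[0, 0], 2.0 ** m)  # grows without bound
    assert np.isclose(Rm[1, 1], 1.0)
```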
Because $R\notin O_n$, there must be some off-diagonal entry $R_{ij}$ (with $i < j$) which is nonzero. Among these, choose $R_{ij}$ with $i+j$ as small as possible. More precisely, choose $R_{ij}$ with the property that $R_{ij}\neq 0$ and $R_{kl} = 0$ whenever $k\neq l$ and $k+l < i+j$.
I claim that $(R^m)_{ij} = mR_{ij}$ (writing $m$ for the exponent, since $n$ already denotes the size of the matrix), which tends to $\pm \infty$ because $R_{ij}\neq 0$, so the powers of $R$ have no convergent subsequence. If we can verify the claim, it then follows that $G$ is not compact.
The proof is by induction on $m$. The case $m=1$ is immediate. Now, assume $(R^{m-1})_{ij} = (m-1)R_{ij}$ and $(R^{m-1})_{kl} = 0$ whenever $k\neq l$ and $k+l < i+j$.
Then \begin{align*} (R^{m-1} R)_{ij} &= \sum_k (R^{m-1})_{ik} R_{kj} \\ &= \sum_{i\leq k\leq j} (R^{m-1})_{ik}R_{kj}\\ &= (R^{m-1})_{ii}R_{ij} + (R^{m-1})_{ij}R_{jj} + \sum_{i < k < j}(R^{m-1})_{ik}R_{kj}\\ &= mR_{ij}. \end{align*}
The first equality is the definition of matrix multiplication, and the second uses the fact that powers of an upper triangular matrix remain upper triangular, so only indices $i\leq k\leq j$ contribute. The third just splits off the endpoint terms. For the fourth: the diagonal entries of $R$, hence of $R^{m-1}$, are all $1$, so the first two terms give $R_{ij} + (m-1)R_{ij}$ by the inductive hypothesis, while each term of the remaining sum vanishes by the inductive hypothesis, since $i+k < i+j$ when $k < j$.
To finish off the induction, we need only prove that $(R^m)_{kl}=0$ whenever $k\neq l$ and $k+l < i + j$. This follows by repeating the same calculation as above.
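(A quick numerical check of the claim — the example matrix is mine: $R$ below is unitriangular, its nonzero off-diagonal entry with $i+j$ smallest sits at $(0,1)$, and that entry of the $m$-th power grows linearly in $m$, so the powers of $R$ cannot have a convergent subsequence.)

```python
import numpy as np

# Illustration of the linear-growth claim: R is upper triangular with 1s
# on the diagonal; R[0,1] = 0.5 is the nonzero off-diagonal entry with
# i + j minimal, and (R^m)[0,1] = m * R[0,1].
R = np.array([[1.0, 0.5, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
i, j = 0, 1
for m in range(1, 10):
    Rm = np.linalg.matrix_power(R, m)
    assert np.isclose(Rm[i, j], m * R[i, j])  # grows linearly, hence unbounded
```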