Assume $V$ is a finite-dimensional vector space over $\mathbb{R}$, and that $T: V \to V$ is a (linear) isomorphism.
When is it possible to construct an inner product on $V$ making $T$ an isometry?
(Ideally, I am looking for necessary and sufficient conditions on $T$, i.e. a full characterization of the situation.)
What I have so far:
A necessary condition: all the real eigenvalues of $T$ are of absolute value $1$. (Since $ T(v)=\lambda v \Rightarrow \langle v,v\rangle=\langle Tv,Tv\rangle = \langle \lambda v, \lambda v\rangle = \lambda^2\langle v, v\rangle$ and an eigenvector $v$ must be nonzero.)
This condition is certainly not sufficient:
For example, consider $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}: \mathbb{R}^2 \to \mathbb{R}^2$. It is an automorphism whose only eigenvalue is $\lambda = 1$. However, $A\begin{pmatrix} x \\ y \end{pmatrix}= \begin{pmatrix} x+y \\ y \end{pmatrix}$, hence $A^n\begin{pmatrix} x \\ y \end{pmatrix}= \begin{pmatrix} x+ny \\ y \end{pmatrix}$, and the requirement that $A:(\mathbb{R}^2,\langle \ , \rangle) \to (\mathbb{R}^2,\langle \ , \rangle)$ be an isometry for some inner product $\langle \ , \rangle$ implies $\lVert \begin{pmatrix} x \\ y \end{pmatrix}\rVert^2=\lVert A^n\begin{pmatrix} x \\ y \end{pmatrix}\rVert^2$, i.e.
$$x^2 \lVert e_1\rVert^2+y^2 \lVert e_2\rVert^2+2xy\langle e_1,e_2\rangle = (x+ny)^2 \lVert e_1\rVert^2+y^2 \lVert e_2\rVert^2+2y(x+ny) \langle e_1,e_2\rangle,$$
which simplifies to $0=(2nxy+n^2y^2)\lVert e_1\rVert^2+2ny^2 \langle e_1,e_2\rangle$.
Dividing by $n$, we get $0=(2xy+ny^2)\lVert e_1\rVert^2+2y^2 \langle e_1,e_2\rangle$ for all $x,y\in \mathbb{R}$ and $n\in \mathbb{N}$. Taking $y=1$ and letting $n$ vary forces $\lVert e_1 \rVert = 0$, which is a contradiction since $\lVert e_1 \rVert > 0$.
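The growth of $A^n$ can also be illustrated numerically. Here is a minimal Python sketch (the helper names `apply_shear_n` and `norm` are mine, not part of the argument): since all norms on $\mathbb{R}^2$ are equivalent, the unbounded growth of $\lVert A^n e_2\rVert$ in the Euclidean norm already rules out any norm satisfying $\lVert A^n v\rVert = \lVert v\rVert$ for all $n$.

```python
# Sketch: for the shear A = [[1, 1], [0, 1]], the Euclidean norm of
# A^n e2 grows linearly in n.  Since all norms on R^2 are equivalent,
# no norm (in particular none induced by an inner product) can be
# preserved by A.
import math

def apply_shear_n(v, n):
    """Apply A^n to v = (x, y): A^n (x, y) = (x + n*y, y)."""
    x, y = v
    return (x + n * y, y)

def norm(v):
    """Euclidean norm on R^2."""
    return math.hypot(v[0], v[1])

v = (0.0, 1.0)  # e2
norms = [norm(apply_shear_n(v, n)) for n in (0, 1, 10, 100)]
print(norms)  # sqrt(n^2 + 1): grows without bound
```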
Some sufficient conditions:
1) If $T$ is diagonalizable over $\mathbb{R}$ (necessarily with all eigenvalues $1$ or $-1$, by our necessary condition), let $(v_1,\dots,v_n)$ be a basis of eigenvectors of $T$ and define $\langle v_i,v_j\rangle = \delta_{ij}$. Then $T$ is an isometry.
This condition is certainly not necessary: just take a rotation (say by $90^{\circ}$) in the plane, and note that it is diagonalizable over $\mathbb{C}$. My guess is that if our transformation is diagonalizable over $\mathbb{C}$ (with all eigenvalues of absolute value $1$), a construction similar to the above will work. One problem I see with this approach is that an odd-dimensional $\mathbb{R}$-vector space cannot even be considered as a $\mathbb{C}$-vector space. (Though we can always complexify...)
2) $T$ is of finite order $n$. (Then we just start with any inner product on $V$ and construct a new, $T$-invariant one by summing over the iterates of $T$, i.e.: $\langle v,w \rangle' = \sum_{i=0}^{n-1} \langle T^iv,T^iw \rangle$.) Note that (as explained for instance here) this implies $T$ is diagonalizable over $\mathbb{C}$, but of course not necessarily over $\mathbb{R}$. (Think about our rotation again.)
Actually, I have now understood that condition (1) implies $T^2=Id$, i.e. $T$ is of order at most $2$. (The reverse implication also holds: $T^2=Id$ implies $T$ is diagonalizable, since its minimal polynomial divides $x^2-1$, which splits into distinct linear factors.) So condition (1) is a particular case of (2).
However, (2) is not necessary, since a rotation by an irrational multiple of $2\pi$ is an isometry w.r.t. the standard inner product, but of infinite order.
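The averaging construction in (2) can be checked on a concrete example. The following sketch uses a finite-order $T$ that is *not* orthogonal (my own choice, not from the text): $T = SRS^{-1}$ with $R$ a rotation by $90^{\circ}$ and $S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, so $T^4 = I$.

```python
# Averaging construction from (2): for T of finite order n, the form
# <u, v>' = sum_{i=0}^{n-1} <T^i u, T^i v>  (Euclidean on the right)
# is a T-invariant inner product.  Here T = S R S^{-1} with R a 90-degree
# rotation, so T = [[1, -2], [1, -1]], T^2 = -I, and T has order 4.

def mat_vec(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

T = ((1, -2), (1, -1))  # order 4, not orthogonal

def averaged_inner(u, v, order=4):
    """Sum of Euclidean products of T^i u and T^i v for i = 0..order-1."""
    total = 0
    for _ in range(order):
        total += dot(u, v)
        u, v = mat_vec(T, u), mat_vec(T, v)
    return total

u, v = (3, 1), (-2, 5)
Tu, Tv = mat_vec(T, u), mat_vec(T, v)
print(averaged_inner(Tu, Tv) == averaged_inner(u, v))  # True: T-invariant
print(dot(Tu, Tv) == dot(u, v))                        # False: T not orthogonal
```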
I somehow think the right way to handle this question is to think over $\mathbb{C}$, but I am not sure how to do this.
Here is another answer, which I think is in some sense much better, and in some sense not.
Summary: we reduce to the usual case of finding a 'nice' form for orthogonal matrices.
Assume there exists an inner product $\langle \ , \rangle$ on $V$ making $T$ an isometry. Then there exists an orthonormal basis $B=(v_1,\dots,v_n)$. Now look at the representing matrix of our automorphism $T$ w.r.t. $B$: $A=[T]_B$. Then $[Tv]_B=[T]_B[v]_B$, where $[u]_B$ is the coordinate vector of $u\in V$ w.r.t. $B$.
Clearly $\langle Tv_i,Tv_j \rangle = \langle v_i,v_j \rangle= \delta_{ij}$. Now note that since $B$ is orthonormal, bilinearity of the inner product gives $\langle u,w \rangle = \langle [u]_B,[w]_B \rangle_{Euclidean}$, hence: $\langle Tv_i,Tv_j \rangle = \langle [Tv_i]_B,[Tv_j]_B \rangle_{Euclidean}= \langle [T]_B[v_i]_B,[T]_B[v_j]_B \rangle_{Euclidean} = \langle Ae_i,Ae_j \rangle_{Euclidean} = \langle A_{i\downarrow} ,A_{j\downarrow} \rangle_{Euclidean}$.
So finally we get: $\langle A_{i\downarrow} ,A_{j\downarrow} \rangle_{Euclidean}=\delta_{ij}$, so the columns of $A$ form an orthonormal basis for $\mathbb{R}^n$, hence $A=[T]_B$ is an orthogonal matrix.
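This reduction can be verified numerically. A sketch under my own example (not from the text): take $T = SRS^{-1}$ with $R$ a $90^{\circ}$ rotation and $S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$; then $T$ preserves $\langle u,w \rangle = (S^{-1}u)\cdot(S^{-1}w)$, the columns of $S$ form an orthonormal basis $B$ for that product, and the matrix of $T$ in $B$ should come out orthogonal.

```python
# Reduction check: with the T-invariant inner product
# <u, w> = (S^{-1} u) . (S^{-1} w), the columns of S are an orthonormal
# basis B, and the matrix of T in B is S^{-1} T S.  We verify it is
# orthogonal, i.e. A^T A = I.

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

S     = ((1, 1), (0, 1))
S_inv = ((1, -1), (0, 1))
T     = ((1, -2), (1, -1))   # = S R S^{-1}, R = rotation by 90 degrees

A = mat_mul(S_inv, mat_mul(T, S))   # matrix of T in the basis B
print(A)  # recovers the rotation R

At = tuple(zip(*A))                 # transpose
print(mat_mul(At, A))               # identity: A is orthogonal
```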
Now, by the real canonical form, an orthogonal matrix $A$ is similar over $\mathbb{R}$ to a block-diagonal matrix with simple real Jordan blocks, as required; equivalently, it is diagonalizable over $\mathbb{C}$. That means there exists a basis with respect to which $T$ has the canonical form.
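For reference, the real canonical form being invoked can be written out explicitly (a standard statement; $R(\theta)$ denotes a $2\times 2$ rotation block, and $p$, $q$ count the eigenvalues $1$ and $-1$):

```latex
% Real canonical form of an orthogonal matrix: block-diagonal with
% entries +1, -1 and 2x2 rotation blocks; equivalently, diagonalizable
% over C with all eigenvalues of absolute value 1.
\[
A \sim
\begin{pmatrix}
I_p & & & \\
 & -I_q & & \\
 & & R(\theta_1) & \\
 & & & \ddots
\end{pmatrix},
\qquad
R(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix}.
\]
```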
I would still like to find a more geometric/conceptual reason why non-simple Jordan blocks can never preserve an inner product. (My direct proof was quite computational.) Of course, we can resort to the uniqueness argument for the Jordan form (we showed any admissible transformation has a simple Jordan form, and the Jordan form is unique up to the order of the blocks, and that's it).