Let $V = \mathbb{R}^n$ and $A\in\mathbb{M}_{n\times n}(\mathbb{R})$. Show that $\left \langle A^tx, y \right \rangle = \left \langle x, Ay \right \rangle$.
I managed to prove this in the case of the standard inner product, but I don't know how to do it for an arbitrary inner product.
Is there an easy way to do it without using adjoint transformations?
This fails in general if the inner product is not the standard one. For example, take $\langle x,y\rangle := x_1y_1 + 2x_2y_2$ on $\Bbb R^2$ and take $$ A := \begin{bmatrix} 0 & 2 \\ 1 & 0\end{bmatrix}. $$ Then $\langle A^Te_1,e_2\rangle = \langle 2e_2,e_2\rangle = 4$ but $\langle e_1,Ae_2\rangle = \langle e_1,2e_1\rangle = 2$, so the identity fails.
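If it helps, you can check this counterexample numerically. The sketch below (using NumPy) writes the weighted inner product as $\langle x,y\rangle = x^T M y$ with $M = \operatorname{diag}(1,2)$ and evaluates both sides:

```python
import numpy as np

# Weighted inner product <x, y> = x1*y1 + 2*x2*y2, i.e. <x, y> = x^T M y
M = np.diag([1.0, 2.0])
inner = lambda x, y: x @ M @ y

A = np.array([[0.0, 2.0],
              [1.0, 0.0]])
e1, e2 = np.eye(2)

print(inner(A.T @ e1, e2))  # 4.0
print(inner(e1, A @ e2))    # 2.0
```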
Edit: Just to clarify what you might have been looking for instead: given a Hilbert space $H$ (which for these purposes is a nice inner product space, such as $\Bbb R^n$ with the standard inner product) and a bounded linear operator $T:H\to H$ (in the finite-dimensional setting, any linear transformation), its adjoint is the linear operator $T^*:H\to H$ characterised by the property that $$ \langle Tx,y\rangle = \langle x,T^*y\rangle $$ for all $x,y\in H$. This is just the definition of the adjoint operator; when $T$ is bounded, the adjoint always exists and is unique. In $\Bbb R^n$ with the standard inner product, what you have shown is that the adjoint is given by the transpose matrix, but as the example above shows, the adjoint necessarily depends on the inner product in question as well.
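To make this concrete: every inner product on $\Bbb R^n$ can be written as $\langle x,y\rangle = x^T M y$ for some symmetric positive-definite matrix $M$, and from $\langle Ax,y\rangle = x^T A^T M y = x^T M(A^*y)$ one reads off $A^* = M^{-1} A^T M$ (so $A^* = A^T$ exactly when $M = I$, the standard inner product). A quick numerical sketch, with a hypothetical choice of $M$ and a random $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical symmetric positive-definite Gram matrix M
# defining the inner product <x, y> = x^T M y.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
inner = lambda x, y: x @ M @ y

A = rng.standard_normal((2, 2))

# From <Ax, y> = x^T A^T M y = x^T M (A* y), the adjoint is A* = M^{-1} A^T M.
A_star = np.linalg.solve(M, A.T @ M)

# Verify <Ax, y> = <x, A* y> on random vectors.
x, y = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(inner(A @ x, y), inner(x, A_star @ y)))  # True
```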