For any linear operator on a finite dimensional inner product space, we can obtain an orthonormal basis via the Gram–Schmidt process.
But why is it necessary to define the adjoint of the operator using an orthonormal basis?
Probably it helps with computation. Why do we define it that way?
The adjoint of an operator depends on the inner product you use; it is not a purely linear-algebraic concept: $(Ax,y)=(x,A^{\star}y)$. If you represent a linear operator $A$ on a finite dimensional inner product space $X$ with respect to an orthonormal basis $\mathscr{B}=\{ e_1,e_2,\cdots,e_N \}$, then the adjoint operator $A^{\star}$ has a matrix representation equal to the conjugate transpose of the representing matrix for $A$. That is, $$ [A^{\star}]_{\mathscr{B}} = \overline{([A]_{\mathscr{B}})}^{\,T}=([A]_{\mathscr{B}})^{\star}. $$ If you do not use an orthonormal basis, then the matrix of the adjoint $A^{\star}$ is not so easily expressed in terms of $[A]_{\mathscr{B}}$.
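To make the dependence on the basis explicit, here is a short computation (a sketch, not part of the original answer), assuming the inner product is linear in its first argument and writing $G$ for the Gram matrix of the basis, a symbol introduced only for this sketch. For an arbitrary basis $\mathscr{B}=\{e_1,\dots,e_N\}$ with $G_{jk}=(e_k,e_j)$, the inner product in coordinates is $(x,y)=[y]_{\mathscr{B}}^{\star}\,G\,[x]_{\mathscr{B}}$, so
$$ (Ax,y)=[y]_{\mathscr{B}}^{\star}\,G\,[A]_{\mathscr{B}}\,[x]_{\mathscr{B}}, \qquad (x,A^{\star}y)=[y]_{\mathscr{B}}^{\star}\,\bigl([A^{\star}]_{\mathscr{B}}\bigr)^{\star}\,G\,[x]_{\mathscr{B}}. $$
Since these agree for all $x,y$, and $G^{\star}=G$ is invertible, we get
$$ [A^{\star}]_{\mathscr{B}} = G^{-1}\,\bigl([A]_{\mathscr{B}}\bigr)^{\star}\,G. $$
When $\mathscr{B}$ is orthonormal, $G=I$ and this collapses to the conjugate-transpose formula above; otherwise the Gram matrix and its inverse enter the formula, which is why orthonormal bases make the adjoint so convenient to compute.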