Please confirm/correct my reasoning:
Given an inner product defined on a vector space, the inner product can be computed directly from the vectors themselves, whatever they are: functions, polynomials, vectors in R^n. Depending on the vector space, the inner product might be an integral, a dot product, etc.
Inner products can also be computed from the coordinates of the vectors (with respect to some basis) through the Gram matrix. Either method of computation gives the same answer.
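To check that claim concretely, here is a small sketch (my own example, with assumed values) comparing the two computations for polynomials with the inner product ⟨p, q⟩ = ∫₀¹ p(t)q(t) dt, using the basis {1, t}:

```python
import numpy as np

# <p, q> = integral_0^1 p(t) q(t) dt on polynomials of degree <= 1, basis {1, t}.
# Gram matrix: G[i][j] = <t^i, t^j> = integral_0^1 t^(i+j) dt = 1/(i+j+1).
G = np.array([[1 / (i + j + 1) for j in range(2)] for i in range(2)])

p = np.array([1.0, 2.0])   # p(t) = 1 + 2t, coordinates [1, 2]
q = np.array([3.0, -1.0])  # q(t) = 3 - t,  coordinates [3, -1]

# Method 1: integrate the product p(t)q(t) directly, term by term.
prod = np.polynomial.polynomial.polymul(p, q)          # coefficients of p*q
direct = sum(c / (k + 1) for k, c in enumerate(prod))  # integral over [0, 1]

# Method 2: plug the coordinate vectors into the Gram matrix.
via_gram = p @ G @ q

print(direct, via_gram)  # both 29/6 = 4.8333...
```

Both routes produce the same number, as expected.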
If the basis for the vector space is orthonormal, then any inner product of vectors can be computed as the plain dot product of the vectors' coordinate vectors.
The usual dot-product formula in R^n is valid only for coordinates taken with respect to an orthonormal basis; if the basis is not orthonormal, the Gram matrix must be used, since it no longer equals the identity matrix.
A similar remark applies to the Euclidean norm: the standard formula works only for coordinates in an orthonormal basis, not a non-orthonormal one. I am not sure how to calculate the norm in the latter case.
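My guess at the norm question, since ‖v‖ = √⟨v, v⟩: the same Gram-matrix recipe should apply, i.e. ‖v‖ = √(cᵀ G c) where c is the coordinate vector. A quick check with an assumed non-orthonormal basis of R^2:

```python
import numpy as np

# Columns of B form a non-orthonormal basis of R^2 (assumed example values).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
G = B.T @ B          # Gram matrix: G[i][j] = <b_i, b_j> (standard dot product)

v = np.array([2.0, 3.0])    # a vector, given in standard coordinates
c = np.linalg.solve(B, v)   # coordinates of v with respect to the basis B

# Naive Euclidean norm of the coordinate vector: wrong unless B is orthonormal.
naive = np.sqrt(c @ c)                # sqrt(10), not the norm of v
# Norm through the Gram matrix: ||v|| = sqrt(c^T G c).
via_gram = np.sqrt(c @ G @ c)         # sqrt(13)

print(naive, via_gram, np.linalg.norm(v))  # via_gram matches np.linalg.norm(v)
```

The Gram-matrix version agrees with the norm computed in the standard basis; the naive version does not.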
Reason for asking:
I get tripped up while using the usual definition of dot product in R^n.
Usual definition: [x1 ... xn] * [y1 ... yn]', where the xi, yi are the coordinates of the vectors x, y.
Using orthonormal basis vs non-orthonormal basis yields different answers.
It seems that the above definition works only if you know what x and y are in the absolute sense (i.e., in the standard basis).
For example, if I know x = [1 2 3] and y = [1 1 1] then dot product = 6. Then if I represent the vectors in the standard (unit vector) basis then dot product = 6. Lastly if I represent the vectors in a non-orthonormal basis and use the Gram matrix correctly I get dot product = 6.
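The x = [1 2 3], y = [1 1 1] example above can be reproduced numerically; the non-orthonormal basis below is my own arbitrary choice:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 1.0])
print(x @ y)  # 6.0 in the standard (orthonormal) basis

# An arbitrary (assumed) non-orthonormal basis of R^3, as the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
G = B.T @ B                   # Gram matrix of this basis

xc = np.linalg.solve(B, x)    # coordinates of x in basis B: [2, -1, 3]
yc = np.linalg.solve(B, y)    # coordinates of y in basis B: [1, 0, 1]

print(xc @ yc)      # 5.0: naive dot product of coordinates, NOT the inner product
print(xc @ G @ yc)  # 6.0: Gram-matrix computation recovers the right answer
```

This is exactly the mismatch described above: the naive coordinate dot product changes with the basis, while the Gram-matrix computation stays at 6.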
Basically, definitions of inner products and norms such as the dot product and the Euclidean norm are given in terms of the coordinates of vectors, and the answers sometimes differ depending on the basis of the vector space. I am trying to rationalize why.