I am trying to grasp the concept of the Gram-Schmidt process and I have encountered the following logical difficulty:
Given a set of $n$ independent vectors, applying the GS algorithm to this set produces an orthonormal basis that spans the same vector space as the original $n$ vectors - I understand this.
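(For reference, the recursion I mean is the standard one; the notation here is mine, not from a particular textbook:)

$$u_1 = v_1, \qquad u_k = v_k - \sum_{j=1}^{k-1} \frac{\langle v_k, u_j \rangle}{\langle u_j, u_j \rangle}\, u_j, \qquad e_k = \frac{u_k}{\lVert u_k \rVert},$$

so each $u_k$ is $v_k$ minus its projections onto the previously built vectors, and the $e_k$ are the orthonormal output.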
It seems to me that if we take a full-rank $n \times n$ matrix, then its column vectors span all of $\mathbb{R}^n$. In that case I was thinking that using Gram-Schmidt is not helpful, since there is already a trivial orthonormal basis for that space: the columns (equivalently, rows) of the $n \times n$ identity matrix, i.e. the standard basis.
I couldn't find a justification for this anywhere online, so my guess is that I am missing something. In addition, I found examples of solved exercises in which 3 independent vectors in $\mathbb{R}^3$ were given and the entire GS algorithm was used to find an orthonormal basis - and that contradicts the reasoning I explained in the previous paragraph.
What am I missing?
Thanks!
Gram-Schmidt does more than supply *some* orthonormal basis: the basis it produces is adapted to the input vectors. The first output vector points along the first input vector, and more generally the first $k$ output vectors span the same subspace as the first $k$ input vectors. The standard basis is indeed orthonormal, but it carries no relation to your original vectors. With regard to many linear transformations (e.g. rotations, projections), an orthonormal basis aligned with particular vectors or subspaces is much nicer to work with than an arbitrary one.
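To see this concretely, here is a minimal sketch of classical Gram-Schmidt in Python/NumPy (the input matrix `V` is my own example, chosen to have independent columns). Note how the first output column is just the normalized first input column - the algorithm preserves the direction of $v_1$, which the standard basis would not:

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt on the columns of V (assumed independent).
    Returns Q with orthonormal columns spanning the same space."""
    V = V.astype(float)
    Q = np.zeros_like(V)
    for k in range(V.shape[1]):
        u = V[:, k].copy()
        for j in range(k):
            # subtract the projection of v_k onto the already-built q_j
            u -= (Q[:, j] @ V[:, k]) * Q[:, j]
        Q[:, k] = u / np.linalg.norm(u)
    return Q

# Three independent vectors in R^3 (full-rank 3x3 matrix)
V = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
Q = gram_schmidt(V)

# Q is orthonormal: Q.T @ Q is (numerically) the identity
print(np.allclose(Q.T @ Q, np.eye(3)))            # True
# The first basis vector is v_1 normalized, not a standard basis vector
print(np.allclose(Q[:, 0], V[:, 0] / np.linalg.norm(V[:, 0])))  # True
```

So although the columns of `Q` and the standard basis both span $\mathbb{R}^3$, only `Q` respects the nested subspaces $\operatorname{span}(v_1) \subset \operatorname{span}(v_1, v_2) \subset \mathbb{R}^3$.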