I am a computer science research student working on applying Machine Learning to Computer Vision problems.
Since a lot of linear algebra (eigenvalues, SVD, etc.) comes up when reading Machine Learning/Vision literature, I decided to take a linear algebra course this semester.
Much to my surprise, the course looked nothing like Gilbert Strang's Applied Linear Algebra (on OCW), which I had started earlier. The course textbook is Linear Algebra by Hoffman and Kunze. We started with concepts from abstract algebra such as groups, fields, rings, isomorphisms, and quotient groups, and then moved on to "theoretical" linear algebra over finite fields, where we cover proofs of the important theorems/lemmas in the following topics:
Vector spaces, linear span, linear independence, existence of a basis. Linear transformations. Solutions of linear equations, row reduced echelon form, complete echelon form, rank. Minimal polynomial of a linear transformation. Jordan canonical form. Determinants. Characteristic polynomial, eigenvalues and eigenvectors. Inner product spaces. Gram-Schmidt orthogonalization. Unitary and Hermitian transformations. Diagonalization of Hermitian transformations.
I wanted to understand whether there is any significance/application of understanding these proofs for machine learning/computer vision research, or whether I would be better off focusing on applied linear algebra.
From my personal experience, I think the most important topics are probability, statistics, and matrix algebra. Of course, the basics of linear algebra are also required, but I guess that goes without saying.
The topics you mentioned from the linear algebra course can be good to know. For example, there are many open problems in current machine learning methods; with a strong foundation in linear algebra, you might be able to come up with solutions to some of them.
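Just to make that concrete with an illustration of my own (not something from your course or question): the diagonalization of symmetric/Hermitian matrices and the SVD on your syllabus are exactly the machinery behind PCA, which turns up constantly in vision pipelines. Here is a minimal NumPy sketch on made-up data (shapes are hypothetical), showing that the eigendecomposition of the covariance matrix and the SVD of the centered data matrix give the same principal directions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))          # hypothetical data: 500 flattened 8x8 image patches
X = X - X.mean(axis=0)                  # center the data

# Route 1: eigendecomposition of the (real symmetric) covariance matrix.
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh exploits symmetry; eigenvalues come back ascending
order = np.argsort(eigvals)[::-1]
components_eig = eigvecs[:, order[:10]] # top-10 principal directions

# Route 2: SVD of the centered data matrix spans the same subspace.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
components_svd = Vt[:10].T

# The two sets of directions agree up to sign, as the spectral theorem predicts.
print(np.allclose(np.abs(components_eig.T @ components_svd), np.eye(10), atol=1e-6))
```

Knowing the spectral theorem is what tells you in advance why the two routes must agree, and why `eigh` (which assumes a symmetric/Hermitian matrix) is the right call here; that is the kind of payoff the "theoretical" material gives you even in applied work.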
Hope this helps.