It's virtually impossible to complete an undergraduate degree these days without studying finite-dimensional vector spaces in quite some detail. So, like most of us, I've done all that; however, just for the sake of completeness, I'd like to consider arbitrary vector spaces for once (not just the finite-dimensional ones). Now when I say arbitrary vector spaces, I don't mean Hilbert spaces, Banach spaces, or even topological vector spaces; I really do mean vector spaces, plain and simple.
Is there a good book or article anyone can recommend that deals with arbitrary (i.e. not necessarily finite-dimensional) vector spaces, as well as whatever remnant of linear algebra still makes sense in this context?
I don't know of any particular reference that does what you want. The reason is that it's not that hard to figure these things out yourself (at least with enough experience). Here are a few of the differences:
The proof that dimension is well defined (i.e. that any two bases have the same cardinality) is considerably more involved for infinite-dimensional vector spaces. It is also hard, and often impossible, to exhibit an actual basis; consequently, the use of bases is not that common. Relatedly, representing a linear transformation by means of matrices (of course we mean here $\kappa \times \lambda$ matrices for arbitrary cardinals $\kappa, \lambda$) loses much of its utility. The problem is not with the space of such matrices; it's just that it is rather pointless to care about matrix representations when you can't actually find a basis. In most cases you won't be able to concretely represent any given linear transformation.
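A standard illustration of how inaccessible bases become (my addition, not something you asked about specifically): consider $\mathbb{R}$ as a vector space over $\mathbb{Q}$.

```latex
\dim_{\mathbb{Q}}(\mathbb{R}) = 2^{\aleph_0},
\qquad \text{yet no basis can be exhibited explicitly.}
```

A (Hamel) basis exists by Zorn's lemma, but the proof is thoroughly non-constructive; indeed, in some models of ZF without choice, no such basis exists at all.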
In finite-dimensional vector spaces, a linear transformation $T:V\to W$ between spaces of equal dimension is injective iff surjective iff bijective. This is not the case in the infinite-dimensional setting. Relatedly, an endomorphism of an infinite-dimensional vector space can have a left (resp. right) inverse without being invertible, which is impossible in the finite-dimensional case.
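To make the one-sided-inverse point concrete, here is a small sketch (my own illustration, not from any reference) using the space of finitely supported sequences $(a_0, a_1, a_2, \ldots)$, modeled as Python lists of coefficients:

```python
# Vectors in the space of finitely supported sequences (a0, a1, a2, ...),
# modeled as Python lists; trailing zeros are trimmed so representations are unique.

def trim(v):
    """Canonical form: drop trailing zeros."""
    while v and v[-1] == 0:
        v = v[:-1]
    return v

def right_shift(v):
    """R(a0, a1, ...) = (0, a0, a1, ...): injective, but not surjective."""
    return trim([0] + v)

def left_shift(v):
    """L(a0, a1, ...) = (a1, a2, ...): surjective, but not injective."""
    return trim(v[1:])

v = [3, 1, 4]

# L is a left inverse of R: L(R(v)) = v for every v ...
assert left_shift(right_shift(v)) == v

# ... but R is not invertible: R(L(w)) != w when w has nonzero constant term.
w = [5, 2]
assert right_shift(left_shift(w)) != w   # (0, 2) != (5, 2)

# L kills the vector (1, 0, 0, ...): nontrivial kernel, so L is not injective.
assert left_shift([1]) == []
```

So $L$ has a right inverse ($R$) and $R$ has a left inverse ($L$), yet neither is invertible; in finite dimension either one-sided inverse would force invertibility.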
The familiar property that a subspace $V \subseteq W$ of the same dimension as $W$ must equal $W$, which holds for finite-dimensional spaces, fails for infinite-dimensional ones.
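As a hedged illustration (my example): inside the space of finitely supported sequences, the subspace of sequences supported on the even indices has the same countably infinite dimension as the whole space, yet it is a proper subspace. The "spreading" map below embeds the whole space onto that subspace:

```python
def spread(v):
    """Linear embedding (a0, a1, a2, ...) -> (a0, 0, a1, 0, a2, ...).

    The basis vector e_n maps to e_{2n}, so the image has basis
    {e_0, e_2, e_4, ...}, of the same cardinality as {e_0, e_1, e_2, ...}:
    equal dimension, even though the image is a proper subspace.
    """
    out = []
    for a in v:
        out.extend([a, 0])
    # Canonical form: drop trailing zeros.
    while out and out[-1] == 0:
        out.pop()
    return out

assert spread([1, 2, 3]) == [1, 0, 2, 0, 3]

# e_1 = (0, 1) is not in the image: every image vector vanishes at odd indices.
image_vec = spread([7, 7])
assert all(image_vec[i] == 0 for i in range(1, len(image_vec), 2))
```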
The theory of eigenvalues becomes much more complicated in the infinite-dimensional case, if only because a single linear transformation may have infinitely many eigenvalues and infinitely many linearly independent eigenvectors.
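For instance (again my illustration): on the space of all sequences, the left-shift operator has every scalar $\lambda$ as an eigenvalue, with geometric eigenvector $(1, \lambda, \lambda^2, \ldots)$. A finite truncation shows the computation:

```python
def geometric(lam, n):
    """First n terms of the eigenvector (1, lam, lam**2, ...)."""
    return [lam ** k for k in range(n)]

def shift(v):
    """Left shift L(a0, a1, a2, ...) = (a1, a2, a3, ...), truncated."""
    return v[1:]

# For every scalar lam, L(1, lam, lam^2, ...) = lam * (1, lam, lam^2, ...):
# infinitely many eigenvalues at once, which is impossible in finite
# dimension, where an operator has at most dim(V) distinct eigenvalues.
for lam in [2, -3, 0.5]:
    v = geometric(lam, 6)
    assert shift(v) == [lam * a for a in v[:-1]]
```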
I hope this helps you orient yourself a bit better.