Arbitrary (i.e. not necessarily finite-dimensional) vector spaces; reference request.


It's virtually impossible to complete an undergraduate degree these days without studying finite-dimensional vector spaces in quite some detail. So like most of us, I've done all that; however, just for the sake of completeness, I'd like to consider arbitrary vector spaces for once (not just the finite-dimensional ones). Now when I say arbitrary vector spaces, I don't mean Hilbert spaces, or Banach spaces, nor even topological vector spaces; I really do mean vector spaces, plain and simple.

Is there a good book or article anyone can recommend that deals with arbitrary (i.e. not necessarily finite-dimensional) vector spaces, as well as whatever remnant of linear algebra still makes sense in this context?

There are 2 answers below.

BEST ANSWER

I don't know of any particular reference which does what you want. The reason is that it's not that hard to figure these things out yourself (at least once you have enough experience). Here are a few of the differences:

The proof of the existence of dimension is considerably more involved for non-finite-dimensional vector spaces. It is also hard, and often impossible, to exhibit an actual basis; consequently, the use of bases is much less common. Relatedly, representing a linear transformation by a matrix (of course we mean here a $\kappa \times \lambda$ matrix for arbitrary cardinalities $\kappa, \lambda$) loses much of its value. The problem is not with the space of matrices; it's just that it is rather pointless to care about matrix representations when you can't actually find a basis. In most cases you won't be able to represent any given linear transformation.

In finite-dimensional vector spaces, a linear transformation $T:V\to W$ between spaces of equal dimension is injective iff surjective iff bijective. This is not the case in the infinite-dimensional setting. Relatedly, an endomorphism of an infinite-dimensional vector space can have a left (resp. right) inverse without being invertible (which is impossible in the finite-dimensional case).
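To make this concrete, here is a small sketch (all names illustrative) that models the infinite-dimensional space $K^{(\mathbb{N})}$ of finitely supported sequences as Python dicts, and exhibits the classical shift operators: the right shift is injective but not surjective, has a left inverse, yet is not invertible.

```python
# Sketch: vectors in K^(N) (finitely supported sequences) modelled as
# dicts {index: nonzero coefficient}. Missing keys mean coefficient 0.

def right_shift(v):
    """R(x_0, x_1, ...) = (0, x_0, x_1, ...): injective, not surjective."""
    return {i + 1: c for i, c in v.items()}

def left_shift(v):
    """L(x_0, x_1, ...) = (x_1, x_2, ...): surjective, not injective."""
    return {i - 1: c for i, c in v.items() if i >= 1}

e0 = {0: 1}
# L(R(v)) = v for every v, so R has a left inverse ...
assert left_shift(right_shift(e0)) == e0
# ... yet R is not invertible: e0 is not in the image of R,
# and L sends e0 to 0, so L is not injective.
assert left_shift(e0) == {}
```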

The familiar property that if $V$ is a subspace of $W$ and they have the same dimension, then they are equal, which holds for finite dimensional spaces, fails for infinite dimensional ones.

The theory of eigenvalues becomes much more complicated in the infinite-dimensional case, if only because a single linear transformation may very well have infinitely many eigenvalues and infinitely many independent eigenvectors.
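For instance (a hedged sketch, with illustrative names): on the space of all real sequences, represented here as Python functions $n \mapsto x_n$, the left shift has every real number $\lambda$ as an eigenvalue, witnessed by the geometric sequence $(1, \lambda, \lambda^2, \ldots)$.

```python
# Sketch: the left shift on the space of ALL real sequences,
# each sequence modelled as a function n -> x_n.

def left_shift(x):
    return lambda n: x(n + 1)

def geometric(lam):
    return lambda n: lam ** n  # candidate eigenvector (1, lam, lam^2, ...)

for lam in (0.5, 2.0, -3.0):
    x, Lx = geometric(lam), left_shift(geometric(lam))
    # (L x)_n = x_{n+1} = lam^{n+1} = lam * x_n, checked on a few coordinates
    assert all(Lx(n) == lam * x(n) for n in range(10))
```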

I hope this helps you orient yourself a bit better.

ANSWER

Some of the comments seem to imply that the concept of a basis is somehow hard to define for infinite-dimensional spaces. That's not the case. The definition

If $V$ is a vector space over the scalar field $K$, then a family $B=(b_i)_{i\in d}$ of vectors is a basis of $V$ exactly if for every vector $x$ there is a unique family $(x_i)_{i\in d}$ of scalars with $x_i \neq 0$ for only finitely many $i$, such that $x = \sum_{i \in d} x_ib_i$. The family $(x_i)_{i\in d} =: \mathfrak{C}_B(x)$ is called the coordinatization of $x$ in basis $B$.

works for vector spaces of arbitrary dimension. Note that the crucial point is that every vector must be representable by a finite linear combination of basis vectors. Most properties carry over from the finite-dimensional case; in particular, you still have that

If $T \,:\, V \to V$ is a linear map, and $(b_i)_{i\in d}$ a basis of $V$, then $T$ is fully determined by the images of the $b_i$ under $T$, i.e. by $(Tb_i)_{i\in d}$.
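A small sketch of this principle (names illustrative): take $V = K[x]$ with the monomial basis $(x^i)_{i\in\mathbb{N}}$, coordinatizations modelled as finitely supported dicts, and extend a map given only on basis vectors to all of $V$ by linearity.

```python
# Sketch: K[x] with basis (x^i); a polynomial's coordinatization is the
# finitely supported family of its coefficients, modelled as a dict.

def extend(T_on_basis):
    """Extend a map defined on the basis vectors b_i to all of V by
    linearity: the linear map is fully determined by the images T(b_i)."""
    def T(coords):
        out = {}
        for i, c in coords.items():
            for j, a in T_on_basis(i).items():
                out[j] = out.get(j, 0) + c * a
        return {j: v for j, v in out.items() if v != 0}
    return T

# the derivative, given only on the basis: d/dx (x^i) = i * x^(i-1)
diff = extend(lambda i: {i - 1: i} if i >= 1 else {})

p = {0: 5, 2: 3}            # the polynomial 5 + 3x^2
assert diff(p) == {1: 6}    # its derivative is 6x
```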

It follows that you can also generalize matrices, by allowing arbitrarily large index sets, and (just as in the finite-dimensional case), require the "columns" to be the images of the basis vectors.

If $T \,:\, V \to V$ is a linear map, and $B$ a basis of $V$, then the family of scalars $(a_{ij})_{i,j\in d} =: \mathfrak{C}_B(T)$ where $$ (a_{ij})_{i \in d} = \mathfrak{C}_B(Tb_j) \text{ for all $j \in d$,} $$ i.e. where $(a_{ij})_{i\in d}$ for a fixed $j$ is the coordinatization of the image of $b_j$ under $T$, is called the coordinatization of $T$ in $B$ or matrix of $T$ in $B$. One may call the family $(a_{ij})_{i \in d}$ for a fixed $j$ the $j$-th column (of $(a_{ij})_{i,j\in d}$), and the family $(a_{ij})_{j \in d}$ for a fixed $i$ the $i$-th row. Every coordinatization has the property that each column contains only finitely many non-zero entries.

Just as in the finite-dimensional case, you can then define $A\cdot x$ for a pair of coordinatizations $A=(a_{ij})_{i,j\in d}$ and $x=(x_j)_{j\in d}$, by setting

$A\cdot x := (y_i)_{i\in d}$, where $y_i = \sum_{j \in d} a_{ij} x_j$.

Since $x_j \neq 0$ for only finitely many $j$, it's clear that the sum always exists. So the question remains: is the result always a valid coordinatization, i.e. is $y_i \neq 0$ also for only finitely many $i$? One can restrict attention to those $n$ columns of $A$ which correspond to non-zero $x_j$. Since each column of a coordinatization $\mathfrak{C}_B(T)$ contains only finitely many non-zero entries, say $m_1,\ldots,m_n$ for the $n$ columns of interest, it follows that $(y_i)_{i\in d}$ contains at most $m_1+\ldots+m_n$ non-zero entries.
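The computation above can be sketched in code (names illustrative, matrices truncated to finitely many columns for the demo): a column-finite matrix is a dict mapping each column index $j$ to the finitely supported column $\mathfrak{C}_B(Tb_j)$, and $A\cdot x$ stays finitely supported.

```python
# Sketch: a column-finite "matrix" as a dict {column j: finitely supported
# column}, acting on a finitely supported coordinatization x.

def matvec(A, x):
    y = {}
    for j, xj in x.items():                  # finitely many non-zero x_j
        for i, aij in A.get(j, {}).items():  # each column finitely supported
            y[i] = y.get(i, 0) + aij * xj
    return {i: v for i, v in y.items() if v != 0}

# the right shift as a matrix: column j is e_{j+1}
# (only finitely many columns are materialized, enough for this example)
A = {j: {j + 1: 1} for j in range(100)}
x = {0: 2, 3: -1}
assert matvec(A, x) == {1: 2, 4: -1}   # the result is finitely supported
```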

You also get a product of matrices $A\cdot B$ by collecting the products $A\cdot x$ as $x$ ranges over the columns of $B$, i.e. you have

$A\cdot B := (c_{ij})_{i,j\in d}$ where $(c_{ij})_{i \in d} = A\cdot (b_{ij})_{i \in d}$ for all $j \in d$, i.e. the $j$-th column of $A\cdot B$ is $A$ times the $j$-th column of $B$.
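Continuing the same dict-based sketch (illustrative names, matrices truncated to finitely many columns for the demo), the product is computed column by column, and composing the two shifts recovers the identity on the materialized columns:

```python
# Sketch: A.B column by column; the j-th column of A.B is A times the
# j-th column of B (all columns finitely supported dicts).

def matvec(A, x):
    y = {}
    for j, xj in x.items():
        for i, aij in A.get(j, {}).items():
            y[i] = y.get(i, 0) + aij * xj
    return {i: v for i, v in y.items() if v != 0}

def matmul(A, B):
    C = {j: matvec(A, col) for j, col in B.items()}
    return {j: col for j, col in C.items() if col}  # drop zero columns

# left shift composed with right shift is the identity (truncated demo)
R = {j: {j + 1: 1} for j in range(10)}       # right shift
L = {j: {j - 1: 1} for j in range(1, 11)}    # left shift
LR = matmul(L, R)
assert all(LR[j] == {j: 1} for j in range(10))
```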