If we look at an $n$-dimensional vector space $V$ and a linear transformation \begin{equation} T : V \to V, \quad x \mapsto Tx \quad \forall \, x \in V, \end{equation} then given a choice of basis for $V$ one can represent $T$ by an $n \times n$ matrix $A_T = A_T(i,j)$.
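As a concrete sketch of this finite-dimensional picture (the operator and basis here are my own illustrative choice, not part of the question): take differentiation acting on polynomials of degree at most $3$ with basis $\{1, x, x^2, x^3\}$; the $j$-th column of $A_T$ holds the coordinates of $T$ applied to the $j$-th basis vector.

```python
import numpy as np

# Illustrative example: T = d/dx on polynomials of degree <= 3,
# basis {1, x, x^2, x^3}.  Since T(x^j) = j * x^(j-1), the j-th
# column of A_T has a single nonzero entry j in row j-1.
n = 4
A_T = np.zeros((n, n))
for j in range(1, n):
    A_T[j - 1, j] = j  # coordinates of d/dx (x^j) = j * x^(j-1)

# p(x) = 1 + 2x + 3x^2 + 4x^3, as a coordinate vector in this basis
p = np.array([1.0, 2.0, 3.0, 4.0])
dp = A_T @ p  # coordinates of p'(x) = 2 + 6x + 12x^2
print(dp)     # [ 2.  6. 12.  0.]
```

Applying $T$ is then just the matrix-vector product $A_T\,p$, exactly as in the equation above.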
Is this also the case for linear transformations on infinite-dimensional vector spaces, where we replace the matrix $A_T$ by an integral?
In particular, since differentiation is a linear map, this would mean that differentiation can be written in the form of an integral. I realize this is either a dumb question (because it is obviously wrong) or some classic result I haven't found yet. In either case it would be great to get a reference where I can learn more about it. Many thanks!
The axiom of choice is equivalent to the assertion that every vector space (no matter how large) has a basis. Given a choice of basis of an arbitrary vector space $V$, one can represent a linear transformation $T : V \to V$ using an infinite column-finite matrix (meaning that each column has only finitely many nonzero entries: the $j$-th column lists the coordinates of $T$ applied to the $j$-th basis vector, which is a finite linear combination of basis vectors).
However, in practice this is not useful, since most infinite-dimensional vector spaces (such as the space of smooth functions on $\mathbb{R}$, for example) are huge and don't have reasonable bases. To study these, we instead introduce a topology, and then we do functional analysis instead of linear algebra.
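To connect this back to the question about integrals (a hedged numerical sketch, with parameters of my own choosing): differentiation itself has no integral kernel, but it can be *approximated* by one. Convolving $f$ with the derivative of a Gaussian mollifier $\varphi_\varepsilon$ gives $(f * \varphi_\varepsilon')(x) = \int f(y)\,K(x,y)\,dy$ with kernel $K(x,y) = \varphi_\varepsilon'(x-y)$, and this converges to $f'(x)$ as $\varepsilon \to 0$ — the kind of statement one makes precise with the topological tools of functional analysis.

```python
import numpy as np

# Sketch: approximate f'(x0) by an integral against the kernel
# K(x, y) = phi_eps'(x - y), where phi_eps is a Gaussian mollifier.
eps = 0.05
xs = np.linspace(-1.0, 1.0, 2001)
dx = xs[1] - xs[0]

def kernel(x, y):
    # derivative of phi_eps(t) = exp(-t^2 / (2 eps^2)) / (eps sqrt(2 pi))
    t = x - y
    return -t / eps**2 * np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = np.sin(xs)          # test function f(y) = sin(y), so f'(x) = cos(x)
x0 = 0.3
approx = np.sum(kernel(x0, xs) * f) * dx  # Riemann sum for the integral
print(approx, np.cos(x0))                 # close to cos(0.3) for small eps
```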
Three additional comments: