While studying for Linear Algebra, I stumbled upon two questions, both formatted the same, yet one being much more difficult for me than the other.
Question A Given a finite-dimensional linear subspace $V$ with its basis $\{v_1, \cdots, v_n\}$, $n \in \mathbb{N}$, and vectors $w_1, \cdots, w_n$ in a linear subspace $W$, prove that there exists a linear transformation $L$ such that $L(v_i) = w_i$ for $i = 1, 2, \cdots, n$.
My Answer Because the basis of $V$ is given by $v_1, \cdots, v_n$, we can represent it in matrix form as the identity $\mathbb{I}_n$. We can describe any linear transformation by a matrix multiplication; let's describe $L$ by the matrix $L_{mat}$. If we collect the vectors $w_1, \cdots, w_n$ as the columns of a matrix $W_{mat}$, the following equation will hold:
$L_{mat} \cdot \mathbb{I}_n = W_{mat}$, therefore $L_{mat} = W_{mat} \cdot \mathbb{I}_n^{-1}$, so $L_{mat} = W_{mat}$. Because $W_{mat}$ exists, $L_{mat}$ will too.
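To convince myself, I ran a small numerical sanity check of this argument, assuming $V = \mathbb{R}^3$ with the standard basis (so the basis matrix is $\mathbb{I}_3$); the target vectors here are arbitrary picks for illustration:

```python
import numpy as np

# Columns are the (illustrative) target vectors w_1, w_2, w_3.
W_mat = np.array([[1.0, 0.0, 2.0],
                  [0.0, 3.0, 1.0],
                  [4.0, 1.0, 0.0]])

# L_mat = W_mat * I_3^{-1} = W_mat, as in the argument above.
L_mat = W_mat @ np.linalg.inv(np.eye(3))

# L indeed sends each standard basis vector e_i to the i-th column w_i.
for i in range(3):
    e_i = np.eye(3)[:, i]
    assert np.allclose(L_mat @ e_i, W_mat[:, i])
```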
Question B Assuming this proof is correct, how do I prove the same for an infinite-dimensional linear subspace $V$ with basis $\{v_i \mid i \in \mathbb{N}\}$ and a subspace $W$ with vectors $w_i$, $i \in \mathbb{N}$? I don't see what is different now. What can or can't I do? I know that infinite-dimensional linear subspaces don't really have a basis, or at least it's hard to define one, and therefore it's also nearly impossible to express them as a matrix. How should I approach this problem?
Your proof is OK in that it provides a correct linear transformation (its matrix indeed coincides with $W_{mat}$, whose columns are the coordinates of the $w_i$ in the basis $\{v_i\}$). Think about what it means: given any vector $x \in V$, you express it in a unique way (because $\{v_i\}$ is a basis) as $\sum_{i=1}^n x_i v_i$ (the numbers $x_i \in \mathbb K$ are usually called the coordinates of $x$ in the basis $\{v_i\}$), and then the transformation $L$ simply replaces each $v_i$ in this representation with $w_i = \sum_{j=1}^n w_{ij}v_j$ (here $w_{ij}$ are the coordinates of $w_i$ in the basis $\{v_j\}$). This certainly gives a well-defined (do you understand why?) transformation $L: V \to V$, $L: \sum_{i=1}^n x_i v_i \mapsto \sum_{i=1}^n x_i w_i$, which can easily be shown to be linear from the definition (do check that $L(x+y) = Lx + Ly$ and $L(\lambda x) = \lambda Lx$).
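Here is a small numerical sketch of this two-step construction in $V = \mathbb{R}^3$, assuming an arbitrary (non-standard) basis $v_1, v_2, v_3$ and targets $w_1, w_2, w_3$; the names `V_mat`, `W_mat`, `L` are illustrative, not from your post:

```python
import numpy as np

# Columns are the basis vectors v_1, v_2, v_3 (an invertible matrix,
# since the v_i form a basis).
V_mat = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])
# Columns are the (illustrative) target vectors w_1, w_2, w_3.
W_mat = np.array([[2.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0]])

def L(x):
    # Step 1: coordinates of x in the basis {v_i}: solve V_mat @ c = x.
    # These are unique because {v_i} is a basis.
    c = np.linalg.solve(V_mat, x)
    # Step 2: replace each v_i by w_i, i.e. return sum_i c_i * w_i.
    return W_mat @ c

# L maps each v_i to w_i, and it is linear by construction:
for i in range(3):
    assert np.allclose(L(V_mat[:, i]), W_mat[:, i])
x, y = np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])
assert np.allclose(L(x + y), L(x) + L(y))
```

The point of the sketch is exactly the uniqueness of the coordinates `c`: since `np.linalg.solve` has exactly one solution for an invertible `V_mat`, the map `L` is well-defined.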
The infinite case is not very much different once you recall the definition of the algebraic (aka Hamel) basis: $\{v_i\}_{i \in \mathbb N}$ is a basis of $V$ if every nonzero vector $x \in V$ can be uniquely represented as $x = \sum_{i \in M} x_i v_i$, where $M \subset \mathbb N$ is finite and $x_i \neq 0$ for all $i \in M$. In other words, every $x$ has a unique representation as a finite linear combination of basis vectors. (This is of course a very natural definition, since no "infinite linear combination" is even defined in linear algebra. So the word "finite" is redundant here, but it seems useful to stress it.)
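Concretely, the construction from the finite case carries over verbatim; the only change is that each sum now runs over the finite support set $M$ of the vector:

```latex
% The map L in the infinite-dimensional case, assuming {v_i}_{i in N}
% is a Hamel basis of V and the w_i are the prescribed targets.
L\Bigl(\sum_{i \in M} x_i v_i\Bigr) \;=\; \sum_{i \in M} x_i w_i,
\qquad M \subset \mathbb{N} \text{ finite.}
% Well-defined because the representation of x over its finite support M
% is unique; linearity follows exactly as before, since the union of two
% finite supports is again finite.
```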