Let $|e_i\rangle$ and $|f_i\rangle$ be two sets of basis vectors, and let $\textbf{S}$ be the matrix such that
$|e_j\rangle = \sum_i\textbf{S}_{ij}|f_i\rangle$
so that
$\textbf{a}^f=\textbf{Sa}^e$ where the superscript indicates the basis.
My textbook says that $\textbf{S}^{-1}$ exists because, if $\textbf{S}$ were singular, the $|f_i\rangle$ would not span the space, and I don't understand this argument.
The vectors and matrices we see in undergraduate texts, like $S=\begin{pmatrix}1&1\\0&3\end{pmatrix}$, are closely tied to orthonormal vectors in Euclidean space. That is, we use the fact that once a basis is given, any vector in its span can be represented uniquely as a linear combination of the basis vectors. Thus the numbers in a given matrix or vector are understood relative to an orthonormal basis of Euclidean space. For example, the standard orthonormal basis of $\mathbb{R}^2$ is the set of vectors $e_1 = \begin{pmatrix}1\\0\end{pmatrix}$ and $e_2=\begin{pmatrix}0\\1\end{pmatrix}$. Thus, when we write a vector $a = \begin{pmatrix}7\\5 \end{pmatrix}$, we are representing $a$ in terms of the orthonormal basis $\{e_i\}$ defined above: seven copies of $e_1$ and five copies of $e_2$ are added, $a = 7e_1 + 5e_2$.
As far as I understand, the problem you are dealing with is the concept of a "change of basis". The bases $\{e_i\}$ and $\{f_i\}$ span the same $n$-dimensional vector space $V$, and your matrix $S$ is a square $n\times n$ matrix.
You can have many different bases for the same vector space. For example, in $\mathbb{R}^2$, the two pairs of vectors $\{e_1,e_2\}$ and $\{f_1,f_2\}$ both span $\mathbb{R}^2$ when they are defined as $$ e_1 = \begin{pmatrix}1\\0\end{pmatrix}\ \ e_2 = \begin{pmatrix}0\\ 1\end{pmatrix}\ \ f_1 = \begin{pmatrix}1\\1\end{pmatrix} \ \ f_2 = \begin{pmatrix}2\\ 1\end{pmatrix}$$ Note that $f_1$ and $f_2$ are not even orthogonal (the angle between them is not a right angle). Your problem thus boils down to representing each basis vector $e_j$ as a linear combination $\sum_{i} \alpha^j_i f_i$. And of course, this set of equations can be collected into a matrix $S$, an ordinary $n \times n$ matrix that lets you see the linear operator in a numerical way.
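As a quick numerical sanity check (a sketch, assuming NumPy is available), a nonzero determinant confirms that $f_1$ and $f_2$ really do span $\mathbb{R}^2$ despite not being orthogonal:

```python
import numpy as np

# Columns are the basis vectors f_1 = (1, 1) and f_2 = (2, 1).
F = np.array([[1.0, 2.0],
              [1.0, 1.0]])

# A nonzero determinant means the columns are linearly independent,
# hence f_1 and f_2 form a basis of R^2.
print(np.linalg.det(F))  # -1.0, nonzero
```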
Let us get back to the above example in $\mathbb{R}^2$. You can represent $e_1$ and $e_2$ in terms of $f_1$ and $f_2$ by a linear transformation. In this case there are two equations: $$\begin{eqnarray}e_1 = -f_1 + f_2\\ e_2 = 2f_1 - f_2\end{eqnarray}$$ With these equations in mind, note that every vector and matrix you see in the usual form (numbers arranged in rows and columns) is in fact expressed relative to an orthonormal basis. Using the equations above, you can find a way to represent vectors in a given space with respect to a basis other than the orthonormal one.
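If you prefer not to solve the two equations by hand, the coefficients can be found numerically. The sketch below assumes NumPy: solving $F S = E$ column by column gives, in column $j$ of $S$, the coefficients of $e_j$ in the $f$-basis:

```python
import numpy as np

# f_1 and f_2 as columns of F; e_1 and e_2 as columns of E (the identity).
F = np.array([[1.0, 2.0],
              [1.0, 1.0]])
E = np.eye(2)

# Solve F @ S = E: column j of S holds the coefficients of e_j
# in the f-basis, i.e. e_j = sum_i S_ij f_i.
S = np.linalg.solve(F, E)
print(S)  # [[-1.  2.]
          #  [ 1. -1.]]
```

The result matches the hand-derived equations: $e_1 = -f_1 + f_2$ and $e_2 = 2f_1 - f_2$.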
In the example, I deliberately chose $\{e_i\}$ to be orthonormal. Let us pick a vector $a^e = \begin{pmatrix}8\\6\end{pmatrix}$ from $\mathbb{R}^2$. The notation implies that $$a^e = 8e_1 + 6e_2$$ However, by using the equations, you get $$8e_1 + 6e_2 = 8(-f_1 + f_2) + 6(2f_1 - f_2) = 4f_1 + 2f_2$$ That means $a^f$, the same vector expressed in the other basis, is denoted as $$a^f = \begin{pmatrix}4\\2\end{pmatrix},\quad a^f = 4f_1 + 2f_2$$ The matrix $S$ can then be read off directly: $$S = \begin{pmatrix}-1 & 2 \\ 1 & -1\end{pmatrix} \ \ \because a^f = Sa^e$$ If $S$ were singular, the set of equations would be linearly dependent, which contradicts the assumption that $e_1$ and $e_2$ are linearly independent.
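The arithmetic above can be verified in a few lines (again a sketch assuming NumPy), including the fact that $S$ is invertible, so the change of basis can be undone:

```python
import numpy as np

S = np.array([[-1.0,  2.0],
              [ 1.0, -1.0]])
a_e = np.array([8.0, 6.0])

# Change of basis: the same vector expressed in the f-basis.
a_f = S @ a_e
print(a_f)                     # [4. 2.]

# S is invertible (nonzero determinant), so we can also go back:
print(np.linalg.det(S))        # -1.0
print(np.linalg.inv(S) @ a_f)  # [8. 6.], recovering a^e
```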
You can repeat this example in an arbitrary finite-dimensional vector space to answer your question.
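To see the textbook's claim in action, here is a hypothetical failing case (a NumPy sketch with made-up vectors): if $f_1$ and $f_2$ are parallel, the matrix of $f$-columns is singular and $e_1$ simply cannot be written in terms of the $f_i$:

```python
import numpy as np

# Hypothetical failing case: f_1 = (1, 1) and f_2 = (2, 2) are
# parallel, so they do not span R^2.
F = np.array([[1.0, 2.0],
              [1.0, 2.0]])
print(np.linalg.det(F))  # 0.0: the columns are linearly dependent

# There is no way to write e_1 = (1, 0) as a combination of f_1
# and f_2, and the numerical solve fails for the same reason:
try:
    np.linalg.solve(F, np.array([1.0, 0.0]))
except np.linalg.LinAlgError as err:
    print("no solution:", err)
```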