First off, when I typed in the title for this question a few suggested similar questions came up that were good, but they didn't seem to be taking the same angle. So I'm still going to post this.
Most of my understanding of coordinate systems comes from Lay's textbook, and in it he does not seem to consider coordinate systems for proper subspaces. To give an example,
$$\beta = \left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} \right\}$$
is a basis for a plane in $\mathbb{R}^3$ which is itself of course a vector space. I want to consider the coordinate transformation on this space. If I were to blindly follow Lay, the transformation would correspond to
$$\begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$$
where the vector on the left is a general element of the planar vector space, the matrix on the right is $P_\beta$ (the change-of-coordinate matrix), and the rightmost vector holds the $\beta$-coordinates of the vector on the left. Lay argues that $P_\beta$ will always be invertible and that multiplying by $P_\beta^{-1}$ yields the coordinate mapping. Of course, he is assuming the basis spans all of $\mathbb{R}^n$, so that the matrix is square, which is not the case here.
I presume it could be shown that, in the subspace case seen here, $P_\beta$ will always be left invertible, and that the coordinate mapping could be obtained from a left inverse. But I'm wondering about other explanations. The given subspace has dimension 2, so it seems like this should really be viewed as a basis with two components. If I were to calculate $[x]_\beta$ with
$$x=\begin{bmatrix}a \\ b \\ c \end{bmatrix}$$
assumed to be in the plane, I would solve the augmented system
$$\begin{bmatrix} 1 & 1 & a \\ 1 & 2 & b \\ 1 & 0 & c \end{bmatrix}$$
which row reduces to
$$\begin{bmatrix} 1 & 1 & a \\ 0 & 1 & b-a \\ 0 & 0 & -2a+b+c \end{bmatrix}.$$
If $x$ really is in the plane, then $-2a+b+c=0$, so retaining only the relevant rows we really have
$$\begin{bmatrix} 1 & 1 & a \\ 0 & 1 &b-a \end{bmatrix}.$$
I'm tempted to think about the coordinate transformation as something like
$$\begin{bmatrix} a \\ b-a \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$$
where I suppose implicitly $c=2a-b$. Does something like this make sense? I feel like there should be a way to formulate the coordinate mapping of a subspace that results in a 'square' relationship. But I'm not sure what a general argument would be that isn't connected to specific examples like this.
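As a numerical sanity check of the left-inverse idea, here is a minimal plain-Python sketch (the helper functions are ad hoc, written only for this example, not from Lay): since the columns of $P_\beta$ are linearly independent, the Gram matrix $P^TP$ is invertible, and $L = (P^TP)^{-1}P^T$ satisfies $LP = I$, so $L$ sends any vector in the plane to its $\beta$-coordinates.

```python
# Left inverse of the 3x2 change-of-coordinate matrix P (a sketch):
# L = (P^T P)^{-1} P^T satisfies L P = I, so L maps any vector that
# lies in Col P back to its beta-coordinates.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

P = [[1, 1], [1, 2], [1, 0]]        # columns are the basis vectors of beta
G = matmul(transpose(P), P)         # Gram matrix P^T P (invertible here)
L = matmul(inv2(G), transpose(P))   # left inverse: L P = I

x = [[5], [8], [2]]                 # x = 2*(1,1,1) + 3*(1,2,0), in the plane
coords = matmul(L, x)               # recovers the beta-coordinates [2, 3]
```

Note that $L$ applied to a vector *not* in the plane still returns two numbers; it just returns the coordinates of the closest point in the plane, which connects to the projection idea below.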
Your first formula makes sense and is the most direct solution you developed:
$$ \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}. $$
The two coordinates $(c_1,c_2)$ give you a linear combination of the basis vectors, which gives you the three coordinates $(a,b,c)$ in the original basis on the left. This is exactly what it means for the coordinates $(c_1,c_2)$ to express a vector in the subspace using the basis $\beta.$ I don't see much room for improvement.
This is not so much a transformation as an embedding of one coordinate system in another.
The reason a textbook might focus on transformations between coordinate systems for the same space, even to the extent of virtually ignoring manipulations such as the one above, is that same-space transformations can be invertible, enabling all kinds of magic. Your basis conversion is a one-way street: the two-coordinate side maps into the three-coordinate space, but the vast majority of three-coordinate vectors are not in the subspace, so there is no general coordinate conversion telling you which combination of the basis $\beta$ an arbitrary three-vector is.
Notice that in order to achieve a square matrix you had to accept an auxiliary equation to recover the coordinate $c$: information you absolutely need, but which the matrix equation does not provide at all.
If you really want a square matrix, what you could do is to take your $3\times2$ matrix and adjoin any vector that is not in the span of the basis $\beta$, for example,
$$ A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & 1 \end{bmatrix}. $$
Now if you write
$$ \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = A^{-1} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 1 & 0 \\ -2 & 1 & 1\end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix}, $$
you have a transformation between two sets of basis vectors of the entire space, and if it happens that $c_3 = 0$ for a particular vector $x = [a, b, c]^T$ on the right-hand side, then $x$ was in the subspace and its coordinates in the subspace are $(c_1, c_2).$
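This check can be carried out in a few lines of plain Python (a sketch, using the $A^{-1}$ written out above):

```python
# Membership test via the square matrix A: c3 tells you whether a
# three-vector lies in the plane, and (c1, c2) are its beta-coordinates
# when it does.  Ainv is the inverse computed above.

Ainv = [[2, -1, 0], [-1, 1, 0], [-2, 1, 1]]

def apply(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

c1, c2, c3 = apply(Ainv, [5, 8, 2])   # x = 2*(1,1,1) + 3*(1,2,0): in the plane,
                                      # so c3 == 0 and (c1, c2) == (2, 3)
_, _, d3 = apply(Ainv, [1, 0, 0])     # not in the plane: d3 != 0
```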
You can also regard the first two rows of $A^{-1}$ as the matrix of a projection from the full space onto the subspace, expressed in coordinates over the basis $\beta.$ If the vector we adjoined to make a square matrix were orthogonal to the subspace, this would be an orthogonal projection.
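For instance (a sketch with a throwaway Gauss–Jordan solver, written only for this $3\times3$ example), the cross product of the two basis vectors gives the normal $n = (-2, 1, 1)$, matching the plane equation $-2a+b+c=0$; adjoining $n$ instead of $(0,0,1)$ makes the map that discards $c_3$ an orthogonal projection onto the plane:

```python
# Adjoin the normal n = (-2, 1, 1) (orthogonal to both basis vectors)
# as the third column; then c1*v1 + c2*v2 is the orthogonal projection.

def solve3(A, b):
    # solve a 3x3 system by Gauss-Jordan elimination (no pivoting needed here)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[k][3] for k in range(3)]

v1, v2, n = (1, 1, 1), (1, 2, 0), (-2, 1, 1)
A = [[v1[k], v2[k], n[k]] for k in range(3)]     # columns v1, v2, n

x = [1, 0, 0]                                    # a vector not in the plane
c1, c2, c3 = solve3(A, x)
p = [c1 * v1[k] + c2 * v2[k] for k in range(3)]  # orthogonal projection of x
# the residual x - p is a multiple of n, hence orthogonal to the plane
```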