It is well known that any linear map between two finite-dimensional vector spaces, say $f: \mathbb{R}^n \to \mathbb{R}^m$, corresponds to a matrix $M \in \mathbb{R}^{m \times n}$ such that $f(x) = Mx$ for all $x$, and vice versa.
I'm interested in a slight variation, where we have a map $g: \mathbb{R}^{b \times c} \to \mathbb{R}^{a \times c}$ and want to understand under which circumstances we can find a matrix $M \in \mathbb{R}^{a \times b}$ such that $g(A) = M A$ for all $A$.
Obviously, $g$ must be linear, but in contrast to the vector case mentioned before, linearity of $g$ is not sufficient: take the map
$$ A = \left(\begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix}\right) \mapsto \left(\begin{matrix} a_{11} & 0 \end{matrix}\right) =: g(A), $$ which clearly is linear on $\mathbb{R}^{2 \times 2}$. Assume that $g(A) = M A$ for some
$$M = \left(\begin{matrix}m_{11} & m_{12}\end{matrix}\right) \in \mathbb{R}^{1 \times 2}, $$
then
$$ g(A)_{11} = (MA)_{11} = m_{11} a_{11} + m_{12} a_{21} \stackrel{!}{=} a_{11}, $$
so (since $a_{11}$ and $a_{21}$ are arbitrary), it follows that $m_{11} = 1$ and $m_{12} = 0$, from which we can conclude that
$$ g(A)_{12} = (MA)_{12} = m_{11} a_{12} + m_{12} a_{22} = a_{12}, $$
But by the definition of $g$ we have $g(A)_{12} = 0$, while $a_{12}$ is arbitrary, a contradiction. Hence such an $M$ cannot exist.
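The contradiction can also be seen numerically; here is a small NumPy sketch (the particular test matrix $A$ is an arbitrary choice of mine):

```python
import numpy as np

def g(A):
    # the map from the question: A |-> (a11, 0)
    return np.array([[A[0, 0], 0.0]])

# If g(A) = M A held for all A, plugging in A = I would force M = g(I) = (1, 0).
M = g(np.eye(2))

# But that M already fails on any matrix with a nonzero (1,2) entry:
A = np.array([[3.0, 5.0],
              [7.0, 2.0]])
print(M @ A)  # [[3. 5.]]  -- (MA)_{12} = a12 = 5 ...
print(g(A))   # [[3. 0.]]  -- ... while g(A)_{12} = 0
```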
Is there a nice and intuitive representation of such maps $g$ similar to the correspondence between linear maps on vectors and their matrix representation?
Setting up some notation:
Let $g : \mathbb R^{b \times c} \to \mathbb R^{a \times c}$ be a linear map.
First, we consider the maps $f_i : \mathbb R^{a \times c} \to \mathbb R^{a \times 1}$ given by taking the $i$th column of the given matrix. So we have $f_1,...,f_c$.
Next, given a vector in $\Bbb R^b$, we have the maps $h_j : \mathbb R^{b \times 1} \to \Bbb R^{b \times c}$ given by "fitting" the given vector as the $j$th column of a $b \times c$ matrix and filling the rest of the entries with zeros. So we have $h_1,...,h_c$.
Now, it is easy to see that $f_i$ and $h_j$ are linear maps. For example, $f_i$ is right multiplication by the column vector $e_i \in \mathbb R^{c \times 1}$ with a $1$ in the $i$th position and zeros elsewhere. Similarly, $h_j$ is right multiplication by the row vector $e_j^T \in \mathbb R^{1 \times c}$.
The composite maps $f_i \circ g \circ h_i$ are linear maps from $\mathbb R^b$ to $\mathbb R^a$. We have ensured that $\mathbb R^b$ and $\mathbb R^a$ are represented by column vectors, so by the vector case each $f_i \circ g \circ h_i$ acts like some matrix $M'_{i} \in \mathbb R^{a \times b}$. So we get a family of matrices. That is, $M'_{i}$ takes a vector, feeds it in as the $i$th column of an otherwise-zero matrix, applies $g$, and extracts the $i$th column of the result.
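This construction is easy to carry out numerically. Below is a sketch (the helper name `column_maps` is my own) that recovers each $M'_i$ by feeding standard basis vectors through $h_i$, $g$, and $f_i$:

```python
import numpy as np

def column_maps(g, a, b, c):
    """Recover the matrices M'_i (i = 0, ..., c-1) of a linear map
    g : R^{b x c} -> R^{a x c}, supplied as a Python function."""
    Ms = []
    for i in range(c):
        Mi = np.zeros((a, b))
        for k in range(b):
            E = np.zeros((b, c))
            E[k, i] = 1.0            # h_i applied to the k-th basis vector of R^b
            Mi[:, k] = g(E)[:, i]    # f_i extracts the i-th column of g(h_i(e_k))
        Ms.append(Mi)
    return Ms

# Example: the map from the question, with a = 1, b = c = 2
def g(A):
    return np.array([[A[0, 0], 0.0]])

M1, M2 = column_maps(g, 1, 2, 2)
print(M1)  # [[1. 0.]]
print(M2)  # [[0. 0.]]
```

For the map from the question the two matrices already differ, which is consistent with the earlier conclusion that no representing $M$ exists.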
CLAIM: $g$ can be represented by a matrix if and only if (1) for every $i$ and every $v \in \mathbb R^b$, the matrix $g(ve_i^T)$ has all columns other than the $i$th equal to zero, so that $g(ve_i^T) = (M'_iv)e_i^T$, and (2) $M'_{i} = M'_{j}$ for all $i \neq j$. (Condition (1) cannot be dropped: the column-swap map $(x, y) \mapsto (y, x)$ on $\mathbb R^{1 \times 2}$ has $M'_1 = M'_2 = 0$, yet it is not of the form $A \mapsto MA$.)

Proof: Suppose that $g(A) = MA$ for some matrix $M$. Then $g(ve_i^T) = M(ve_i^T) = (Mv)e_i^T$, which is zero outside its $i$th column, so condition (1) holds. Moreover, by associativity, $$M'_{i}v = (M(ve_i^T))e_i = (Mv)(e_i^Te_i) = Mv.$$ Merely in words, the middle expression takes the vector $Mv$, puts it as the $i$th column of a matrix with the rest as zeros (which right multiplication by $e_i^T$ does), then extracts the $i$th column of that matrix (which right multiplication by $e_i$ does), and obviously this is just $Mv$ again.

In other words, $M'_i = M$ for all $i$, so all the $M'_i$ are equal and both conditions hold.

For the other way, suppose that conditions (1) and (2) hold, and call the common matrix $M'_1 = \dots = M'_c = N$. We claim that $g(A) = NA$ for all $A$.

Now, fix $A$. Note that $A = \sum_{i=1}^c A_i e_i^T$, where $A_i$ is the $i$th column of $A$. By condition (1), each piece $A_ie_i^T$ satisfies $$g(A_ie_i^T) = (M'_iA_i)e_i^T = (NA_i)e_i^T.$$ Therefore, by linearity of $g$, $$NA = \sum_{i=1}^c (NA_i)e_i^T = \sum_{i=1}^c g(A_ie_i^T) = g\left(\sum_{i=1}^c A_ie_i^T\right) = g(A),$$ as desired.

Therefore, $g$ can be written as a matrix if and only if it acts column by column (condition (1)) and these matrices $M'_i$ coincide (condition (2)).
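The whole criterion can be packaged as a numerical check. This is a sketch, with `is_left_multiplication` a hypothetical helper name of my own; it tests both that $g$ sends matrices supported on column $i$ to matrices supported on column $i$, and that the induced matrices $M'_i$ all agree:

```python
import numpy as np

def is_left_multiplication(g, a, b, c):
    """Return (True, N) with g(A) = N A if g : R^{b x c} -> R^{a x c} is
    representable by left multiplication, else (False, None).
    Checks: (1) g maps matrices supported on column i to matrices supported
    on column i; (2) the induced column maps M'_i coincide."""
    N = None
    for i in range(c):
        Mi = np.zeros((a, b))
        for k in range(b):
            E = np.zeros((b, c))
            E[k, i] = 1.0
            G = g(E)
            # condition (1): every column of G other than the i-th must vanish
            if not np.allclose(np.delete(G, i, axis=1), 0.0):
                return False, None
            Mi[:, k] = G[:, i]
        if N is None:
            N = Mi
        elif not np.allclose(N, Mi):  # condition (2)
            return False, None
    return True, N
```

For $g(A) = MA$ this returns `(True, M)`; for the map from the question it returns `(False, None)`.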
What does this mean? How can I write it in words?
Now I will act like a $g$ that can be represented by a matrix. I pick a linear transformation from $\mathbb R^b$ to $\mathbb R^a$. Now, imagine I am given a $b \times c$ matrix. I take the first column of this matrix, apply the linear transformation to it, then make the result the first column of a new $a \times c$ matrix. I now move on to the second column, apply the linear transformation to it, make that the second column of the new matrix, and so on.
Key points: I have only one linear transformation, so each column of $g(A)$ depends only on the corresponding column of $A$. Furthermore, I do not change the linear transformation while going across the columns, so each column of $g(A)$ depends on the corresponding column of $A$ via multiplication by the same linear transformation, which is chosen independently of the column number!
Thus, I have acted like $g$.
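The column-by-column procedure just described can be written down directly; a minimal sketch (the name `columnwise` is my own):

```python
import numpy as np

def columnwise(M):
    """Act like a representable g: apply the same linear map M to every
    column of A and reassemble the results into a new matrix."""
    def g(A):
        return np.column_stack([M @ A[:, j] for j in range(A.shape[1])])
    return g
```

By construction this agrees with $A \mapsto MA$ on every input, since matrix multiplication acts on columns in exactly this way.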
For example, the map you have given does not behave like this: it outputs the first component of the first column, but outputs zero for the second column, so the dependence on the second column is different from the dependence on the first column.
However, if the image were $(a_{11}, a_{12})$, then indeed you can check that the action is the same on every column, namely taking the first component. This corresponds to the matrix $M = [1, 0]$.
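As a quick numerical check of this last example (the test matrix is an arbitrary choice of mine):

```python
import numpy as np

M = np.array([[1.0, 0.0]])
A = np.array([[3.0, 5.0],
              [7.0, 2.0]])

# g(A) = (a11, a12) is the first row of A ...
g_of_A = A[[0], :]
# ... and it coincides with M A for M = [1, 0]:
print(np.allclose(M @ A, g_of_A))  # True
```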