I'd like a more intuitive understanding of the rowspace, and in particular of the fact that if $Ax = b$ has a solution, there exists a unique $p \in Rowspace(A)$ such that $Ap = b$. (See here for a question that asks for a geometric understanding.) The proofs are clear; I'm now looking to build intuition around them.
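As a concrete sanity check (the matrix and right-hand side below are made up purely for illustration), the unique rowspace solution can be computed in NumPy as the minimum-norm solution via the pseudoinverse:

```python
import numpy as np

# Hypothetical rank-deficient matrix and a consistent right-hand side.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * row 1, so rank(A) = 2
              [1.0, 0.0, 1.0]])
b = A @ np.array([1.0, -1.0, 2.0])   # b is in the column space by construction

# The pseudoinverse returns the minimum-norm solution, which is exactly
# the unique solution lying in Rowspace(A).
p = np.linalg.pinv(A) @ b
assert np.allclose(A @ p, b)

# Verify p is in the row space: build the orthogonal projector onto
# Rowspace(A) from an orthonormal basis read off the SVD.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))        # numerical rank
P_row = Vt[:r].T @ Vt[:r]         # projector onto the row space
assert np.allclose(P_row @ p, p)  # projection leaves p unchanged
```

(The pseudoinverse is used here only because the minimum-norm solution of a consistent system is precisely the rowspace solution; any other way of producing it would do.)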
Let $T: \mathbb R^n \to \mathbb R^m$ be a linear transformation. Its domain $\mathbb R^n$ can be partitioned into equivalence classes indexed by $N = Nullspace(T)$: two vectors are equivalent when they differ by an element of $Rowspace(T)$, and since $\mathbb R^n = Rowspace(T) \oplus N$, each class contains exactly one $v \in N$.
More specifically, let $E_v \subset \mathbb R^n$ be the equivalence class containing $v \in N$. Then $E_0 = Rowspace(T)$, and, for any $v \in N$, $E_v = E_0 + v$. There is a bijection between $E_0$ and $E_v$ given by adding $v$ to each element of $E_0$, namely $f: E_0 \to E_v$, $f(x) = x + v$.
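A small numerical sketch of this picture (NumPy, with a made-up rank-deficient matrix): $A$ acts identically on $E_0$ and on its shifted copy $E_v$, because $Av = 0$:

```python
import numpy as np

# Hypothetical 2x3 matrix of rank 2; its nullspace is spanned by (1, -1, 0).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

v = np.array([1.0, -1.0, 0.0])    # v in Nullspace(A)
assert np.allclose(A @ v, 0)

p = 2.0 * A[0] + 3.0 * A[1]       # p in Rowspace(A): a combination of rows

# Shifting by v moves p from E_0 to the copy E_v = E_0 + v,
# but A cannot tell the difference:
assert np.allclose(A @ (p + v), A @ p)
```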
Intuitively, this suggests viewing the domain of a linear transformation as $|N|$ "copies" of its rowspace. If the transformation is injective, $N = \{0\}$, there is only one "copy", and the partition is trivial. If it is not injective, the copies are indexed by $N$, so the "number" of copies, while infinite, has "dimension" $\dim N = n - \dim Rowspace(T)$.
Now, why should the rowspace partition the domain? We tend to think of a matrix as defined by its columns, which combine to produce the image, a subspace of the codomain. What business do the rows have partitioning the domain? Isn't the domain the given, the starting point, the independent variable?
I believe the correct answer is: $Ax$ is a linear combination of the columns of $A$. Thus $A$'s columns, the vectors that build $A$'s outputs, determine the image within the codomain. In contrast, $A$'s rows are covectors, and therefore structure the domain. $A$ can, as it were, only "see" the domain through the eyes of its rows. If $A$'s rows are fine enough (as with an invertible matrix), they can distinguish any two vectors in the domain. But if they are coarse, they can only distinguish between vectors in the rowspace, and they collapse every other vector in the domain onto one that they can see (its equivalence class). $A$ sees input vectors only through the eyes of its rows; $A$ produces output vectors only by combining its columns.
That is, $Rowspace(A)$ can be thought of as: the set of vectors in $A$'s domain that $A$ can distinguish between.
Taking this further: if $R = Rowspace(A)$, then $Ax = Ap$, where $p = \operatorname{proj}_R(x)$. Thus, $A$ sees $x$ only through the eyes of its rowspace; only those parts of $x$ which leave a footprint in this space can be perceived by $A$. Every row of $A$ gives $A$ a new probe by which to "sense" $x$. Since the $i$-th entry of $Ax$ equals $A_i \cdot x$, the dot product of $x$ with the $i$-th row, any part of $x$ orthogonal to $A_i$ is lost to that "sensor". If a part of $x$ is orthogonal to all of $A$'s rows, it is lost to all of them, and $A$ is blind to it. Thus $A$ sees through its rowspace, which therefore determines which elements of its domain it can distinguish, collapsing all others into copies.
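The claim $Ax = Ap$ with $p = \operatorname{proj}_R(x)$ can be checked numerically (the matrix and vector below are arbitrary); an orthonormal basis of $Rowspace(A)$ is read off from the SVD:

```python
import numpy as np

# Arbitrary 2x3 example matrix.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))        # numerical rank
P = Vt[:r].T @ Vt[:r]             # orthogonal projector onto Rowspace(A)

x = np.array([3.0, -1.0, 4.0])
p = P @ x                          # the part of x that A can "see"

assert np.allclose(A @ x, A @ p)   # A cannot tell x and p apart
assert np.allclose(A @ (x - p), 0) # the invisible part lies in the nullspace
```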
Is the above correct? Are there alternative conceptions? How can it be turned into something more precise and coherent?