I'm aware of analogous threads; I hope mine is specific enough not to be considered a duplicate.
$\mathbf{a^i}$ is a row vector. $A, B$ are matrices. Prove: $1$. $\mathbf{a^i}B$ is a linear combination of the rows of $B$.
$2.$ Row space of $AB \subseteq$ row space of $B$. $\qquad$ $3.$ Column space of $AB \subseteq$ Column space of $A$.
$4.$ If $\mathbf{a_i}$ is a column vector, then $A\mathbf{a_i}$ is a linear combination of the columns of $A$.
$5. \operatorname{rank}(A\color{#B8860B}{B}) \color{#B8860B}{\le} \operatorname{rank}\color{#B8860B}{B} \qquad \qquad$ $6.\operatorname{rank}(AB) \leq \operatorname{rank} A$.
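Results $1$ and $4$ can be sanity-checked numerically. Below is a minimal NumPy sketch (the particular matrices are my own illustrative choices, not from the cited sources): multiplying a row vector on the left of $B$ really does give $\sum_k a_k \cdot (\text{row } k \text{ of } B)$, and multiplying a column vector on the right of $A$ gives $\sum_k c_k \cdot (\text{column } k \text{ of } A)$.

```python
import numpy as np

a = np.array([[1., 2., -1.]])          # row vector a^i, shape (1, 3)
B = np.arange(12.).reshape(3, 4)       # a 3x4 matrix

# Result 1: a B = sum_k a_k * (row k of B)
row_combo = 1 * B[0] + 2 * B[1] - 1 * B[2]
assert np.allclose(a @ B, row_combo)

A = np.arange(12.).reshape(4, 3)       # a 4x3 matrix
c = np.array([[2.], [0.], [1.]])       # column vector a_i, shape (3, 1)

# Result 4: A c = sum_k c_k * (column k of A)
col_combo = 2 * A[:, 0] + 0 * A[:, 1] + 1 * A[:, 2]
assert np.allclose((A @ c).ravel(), col_combo)
print("results 1 and 4 verified on these examples")
```

This is of course only a spot check on specific matrices, not a proof.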
In general, $x \leq a \text{ & } x \le b \implies x \le \min\{a, b\}$.
So by $5 \, \& \, 6$, $\operatorname{rank}(AB) \leq \min\{\operatorname{rank}A,\operatorname{rank} B\}$.

$\bbox[2px,border:2px solid grey]{\text{ Proof of #5 :}} \;$ The rank of a matrix is the dimension of its row space, so we need to show:
If $\operatorname{rowsp}(AB) \subseteq \operatorname{rowsp}(B)$, then $\dim \operatorname{rowsp}(AB) \le \dim \operatorname{rowsp}(B).$
Pick a basis for $\operatorname{rowsp}(AB)$. Say there are $p$ vectors in this basis.
By $\#2$, row space of $AB \subseteq$ row space of $B$, $\color{green}{\text{so all of these $p$ vectors also $\in \operatorname{rowsp}(B)$}}$. Moreover, they must be linearly independent (hereafter dubbed l-ind).
${\Large{\color{red}{[}}} \;$ Since the dimension of a space $=$ the maximum number of l-ind vectors in that space, $\; {\Large{{\color{red}{]}}}}$
and $\color{green}{\text{$\operatorname{rowsp}(B)$ has $\ge p$ l-ind vectors}}$, thus $ \dim \operatorname{rowsp}(B) \; \ge \; \dim \operatorname{rowsp}(AB) = p. $

$\bbox[2px,border:2px solid grey]{\text{ Proof of #6 :}} \;$ Apply $ \operatorname{rank}M = \operatorname{rank}M^T$ and $\#5$: $ \operatorname{rank}(AB) = \operatorname{rank}\big((AB)^T\big) = \operatorname{rank}(B^T\color{#B8860B}{A^T}) \quad \color{#B8860B}{\le} \quad \operatorname{rank}\color{#B8860B}{A^T} = \operatorname{rank}(A)$.
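Results $5$ and $6$ can likewise be sanity-checked numerically. The sketch below (random matrices and seed are my own choices) also checks the transpose-invariance of rank, $\operatorname{rank} M = \operatorname{rank} M^T$, which is the fact driving the proof of $\#6$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))   # 5x3
B = rng.standard_normal((3, 6))   # 3x6, so AB is 5x6

rA  = np.linalg.matrix_rank(A)
rB  = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

# Results 5 and 6 combined: rank(AB) <= min(rank A, rank B)
assert rAB <= min(rA, rB)

# Rank is invariant under transposition, as used in the proof of #6
assert np.linalg.matrix_rank(A.T) == rA
print("rank A =", rA, "| rank B =", rB, "| rank AB =", rAB)
```

Running this for many random shapes never violates the inequality, which at least makes the claim plausible before reading the proof.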
$Q1.$ Could someone elucidate the above proof of $\#5$? I'm bewildered: what's the strategy?
$Q2.$ On P209, Poole defines dimension as the number of vectors in a basis.
So shouldn't the red bracket refer to a basis? If so, why doesn't the proof simply declare:
By $2$, the basis for $\operatorname{rowsp}(AB)$ can be reused as a basis for $\operatorname{rowsp}(B).$ ?
$Q3.$ For $\#6$, how would one foresee that transposing $AB$ and then applying $\#5$ is the key stratagem?
$Q4.$ What's the intuition behind results $5$ and $6$? I'd be grateful for pictures.
Sources: P147, 4.48, Schaum's Outline of Linear Algebra; web.mit.edu/18.06/www/Spring01/Sol-S01-5.ps
The results above are standard facts of linear algebra.

There is a lot of baggage in linear algebra, but otherwise it is straightforward. Another very important observation: in general, you want to understand problems using linear transformations, as they have a far richer language and structure, but you want to do applications using matrices. So things related to spaces, projections, decompositions, etc. are far better understood via linear transformations, while concrete computation is better done with matrices.