Assume we have a square $n \times n$ matrix $A$ and a tall $n \times m$ matrix $B$, where $m<n$. Suppose $A$ is rank-deficient with rank $m$, while $B$ has full rank $m$, so both matrices have rank $m$.
Intuitively, I would guess that $AB$ has rank $m$ almost surely: $B$ takes an $m$-dimensional vector and maps it into a higher-dimensional space without losing information, and then $A$ projects the result onto an $m$-dimensional subspace. Intuitively, the only situation where this should lose information is when the null space of $A$ lines up exactly with the subspace that $B$ maps onto.
So only in that very special case would $AB$ have rank lower than $m$. Is this correct?
It depends on the intersection of the kernel (or null space) of the linear transformation $x \to Ax$ and the image of the linear transformation $y \to By$.
Let's do a concrete example. Suppose $m=2$ and $n=3$, and define
$A=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ and $B=\begin{bmatrix}0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}$
Note that they both have rank $2$, and $$AB =\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix}0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}= \begin{bmatrix}0 & 0 \\ 1 & 0 \\ 0 & 0 \end{bmatrix} $$ has rank $1$.
The bigger the gap between $m$ and $n$, the worse this effect can be: if $n\ge 2m$, the product $AB$ can even be the zero matrix.
For example, if $n=4$ and $m=2$, then
$A=\begin{bmatrix}1 & 0 & 0 &0 \\ 0 & 1 & 0&0 \\ 0 & 0 & 0&0 \\ 0 & 0 & 0&0 \end{bmatrix}$ and $B=\begin{bmatrix}0 & 0 \\ 0& 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}$ both have rank $2$, but the product $AB$ is the zero matrix.
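Both counterexamples are easy to check numerically. Here's a quick sketch (NumPy is used purely for verification):

```python
import numpy as np

# First example: n = 3, m = 2.
A1 = np.array([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])
B1 = np.array([[0, 0],
               [1, 0],
               [0, 1]])

# Second example: n = 4, m = 2, where the product vanishes entirely.
A2 = np.zeros((4, 4))
A2[0, 0] = A2[1, 1] = 1
B2 = np.zeros((4, 2))
B2[2, 0] = B2[3, 1] = 1

print(np.linalg.matrix_rank(A1), np.linalg.matrix_rank(B1),
      np.linalg.matrix_rank(A1 @ B1))   # 2 2 1
print(np.linalg.matrix_rank(A2 @ B2))   # 0
```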
The "almost surely" argument
Let's look at the $m=1$ case. Then $B$ is a nonzero $n \times 1$ matrix. It's possible for $AB$ to have rank one (i.e., be nonzero), but for that to happen the vector $B$ must lie outside the $(n-1)$-dimensional null space of $A$. Under the Lebesgue measure on $\mathbb{R}^n$, that null space is a proper subspace, so it contains no open ball and has measure zero; in that sense, $AB$ is nonzero almost surely.
Now let's look at $m>1$ and $n>m$. Then we want to know whether the null space $\text{Null}(A)$ of $A$ intersects the column space $\text{Col}(B)$ of $B$ in more than the zero vector. Both $\text{Null}(A)$ and $\text{Col}(B)$ have Lebesgue measure zero as subsets of $\mathbb{R}^n$, so it's indeterminate what this would mean as a probability (a $\frac00$ form).
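To make the kernel/image intersection concrete, here's a sketch (assuming NumPy) that computes $\dim(\text{Null}(A)\cap\text{Col}(B))$ for the $3\times 3$ example above, using the identity $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)$, and checks that $\operatorname{rank}(AB)=\operatorname{rank}(B)-\dim(\text{Null}(A)\cap\text{Col}(B))$. The helper `null_basis` is my own illustrative function, not a library call:

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Orthonormal basis (as columns) for Null(M), computed via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))          # numerical rank of M
    return Vt[r:].T                   # rows of Vt past the rank span Null(M)

# The 3 x 3 example from above.
A = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
B = np.array([[0., 0.], [1., 0.], [0., 1.]])

N = null_basis(A)                     # spans Null(A) = span{e_3}
dim_null = N.shape[1]
dim_col = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([N, B]))  # dim(Null(A) + Col(B))
dim_intersection = dim_null + dim_col - dim_sum

print(dim_intersection)               # 1
print(np.linalg.matrix_rank(A @ B))   # rank(B) - 1 = 1
```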
Here's my attempt to make this problem a little more well-defined. Suppose that $A$ is $n\times n$ of rank $m$. Then $A \sim \begin{bmatrix}I_m & 0 \\ 0 & 0\end{bmatrix}$, where $I_m$ is the $m\times m$ identity matrix; in particular, there are invertible matrices $P$ and $Q$ such that $PAQ=\begin{bmatrix}I_m & 0 \\ 0 & 0\end{bmatrix}$. (Row operations alone only reach echelon form, so we need column operations too.)
Let's just pretend for now that an $n\times m$ matrix $B=[b_1,b_2,\ldots,b_m]$ with i.i.d. coefficients will almost surely have rank $m$. Since $P$ and $Q$ are invertible, $\operatorname{rank}(AB)=\operatorname{rank}(PAQ\cdot Q^{-1}B)$, so writing $\tilde B=Q^{-1}B$ reduces the question to: what's the probability that $\begin{bmatrix}I_m & 0 \\ 0 & 0\end{bmatrix}\tilde B$ has rank $m$? Note the matrix $\begin{bmatrix}I_m & 0 \\ 0 & 0\end{bmatrix}$ is the projection matrix onto the subspace $H$ of the first $m$ coordinates in $\mathbb{R}^n$. So let $\hat{b}_1,\ldots, \hat{b}_m$ be the projections of the column vectors of $\tilde B$ into $H$. Then $PAQ\tilde B=[\hat{b}_1,\ldots, \hat{b}_m]=\begin{bmatrix}\hat{B}\\0\end{bmatrix}$, where $\hat B$ is an $m \times m$ matrix. The entries of $\tilde B=Q^{-1}B$ are no longer i.i.d., but they still have a jointly continuous distribution, so if you believe the assumption above, $\hat B$ almost surely has rank $m$. Therefore $AB=P^{-1}\begin{bmatrix}\hat{B}\\0\end{bmatrix}$ almost surely has rank $m$.
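Here's a hedged NumPy sketch of this reduction. It uses the SVD to manufacture invertible $P$ and $Q$ with $PAQ=\begin{bmatrix}I_m & 0 \\ 0 & 0\end{bmatrix}$ (the rank-$m$ matrix $A$ is itself built as a product of Gaussian factors, which is just one convenient way to get a random matrix of the right rank):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3

# A random n x n matrix of rank m, and a random n x m matrix B.
A = rng.standard_normal((n, m)) @ rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# The SVD A = U diag(s) V^T has exactly m numerically nonzero singular
# values, so P = diag(1/s_1,...,1/s_m, 1,...,1) U^T and Q = V are
# invertible with P A Q = [[I_m, 0], [0, 0]].
U, s, Vt = np.linalg.svd(A)
D = np.diag(np.concatenate([1.0 / s[:m], np.ones(n - m)]))
P, Q = D @ U.T, Vt.T

E = P @ A @ Q                     # should be [[I_m, 0], [0, 0]] up to rounding
assert np.allclose(E[:m, :m], np.eye(m), atol=1e-8)

# rank(AB) = rank(PAQ @ Q^{-1} B) = rank of the top m x m block of Q^{-1} B.
B_hat = (Q.T @ B)[:m, :]          # Q is orthogonal here, so Q^{-1} = Q^T
print(np.linalg.matrix_rank(A @ B), np.linalg.matrix_rank(B_hat))
```

With continuous entries, both printed ranks come out as $m$, matching the argument above.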
Remaining question: does an $n\times m$ matrix $B$ with i.i.d. coefficients almost surely have rank $m$?
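For what it's worth, this can be probed empirically; here's a sketch with standard Gaussian entries standing in for "i.i.d. coefficients":

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, trials = 10, 3, 10_000

# Draw B with i.i.d. standard Gaussian entries and count how often it has
# full column rank m.  The rank-deficient n x m matrices form a
# measure-zero set, so with continuous entries every draw should succeed.
full_rank = sum(
    np.linalg.matrix_rank(rng.standard_normal((n, m))) == m
    for _ in range(trials)
)
print(f"{full_rank}/{trials}")   # 10000/10000
```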
Simulations
I really didn't want to believe this, so I simulated it in Sage.
Here are my results for different values of $n$, $m$, and $N=\#\text{trials}$: $$\begin{array}{|r|r|r|r|} N & n & m & \text{empirical probability}\\ \hline 10^5 & 5 & 3 & 99717/100000 \\ 10^5 & 10 & 3 & 99542/100000 \\ 10^5 & 100 & 3 & 99408/100000\\ \end{array}$$
I attribute these differences to the pseudorandom nature of Sage's randomization with algorithm='echelonizable'; maybe it's not able to truly pick i.i.d. components. Though I may be wrong here.
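As a sanity check on that explanation, here's a re-run of the experiment in plain NumPy with continuous (standard Gaussian) entries. With continuous entries, the degenerate configurations have measure zero, so the empirical probability should come out as exactly $1$ (this is a sketch; $A$ is forced to rank $m$ by writing it as a product of Gaussian factors, which is one choice of model, not the only one):

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 2_000

def rank_ab_is_m(n, m):
    # Random n x n matrix A of rank m (product of Gaussian factors)
    # and random n x m matrix B with i.i.d. Gaussian entries.
    A = rng.standard_normal((n, m)) @ rng.standard_normal((m, n))
    B = rng.standard_normal((n, m))
    return np.linalg.matrix_rank(A @ B) == m

results = {}
for n, m in [(5, 3), (10, 3), (100, 3)]:
    results[(n, m)] = sum(rank_ab_is_m(n, m) for _ in range(trials))
    print(f"n={n}, m={m}: {results[(n, m)]}/{trials}")
```

In my runs every trial succeeds, consistent with the discreteness of Sage's random matrices (rather than the mathematics) being responsible for the sub-$1$ frequencies above.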