For two random square matrices $A$ and $B$, it is known that $tr(AB)=o_{p}(1)$. ($c=o_{p}(1)$ simply means that $c$ converges to zero in probability.)
It is also known that the probability limit of matrix $A$ is strictly positive definite, and that of matrix $B$ is positive semi-definite.
Given this, how do we know that $B=o_{p}(1)$ for sure?
Here is a proof for the deterministic case. I will leave you to extend it to the probabilistic setting.
Let $\lambda_1(M) \le \dotsb \le \lambda_n(M)$ denote the eigenvalues of a symmetric matrix $M$. Since $A - \lambda_1(A)I$ is positive semi-definite and the trace of a product of two positive semi-definite matrices is nonnegative, we have for a positive semi-definite $B$ $$ tr(AB) = \lambda_1(A)\,tr(B) + \underbrace{tr((A-\lambda_1(A)I)B)}_{\ge 0} \ge \lambda_1(A)\, tr(B) = \lambda_1(A) \sum_{i=1}^n \lambda_i(B). $$ As $A$ is positive definite and any two norms on $\mathbb R^n$ are equivalent (apply this to the eigenvalue vector of $B$), we have $$ \|B\|_F \le \frac c{\lambda_1(A)} tr(AB) $$ for a suitable constant $c > 0$ depending only on $n$. In particular, if $\lambda_1(A)^{-1} = O(1)$, then it follows that $\|B\|_F = O(tr(AB)) = o(1)$.
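As a numerical sanity check of the key inequality $tr(AB) \ge \lambda_1(A)\,tr(B)$, and of the norm bound (for the Frobenius norm one can take $c = 1$, since $\|B\|_F \le tr(B)$ for positive semi-definite $B$), here is a small sketch with arbitrarily chosen random test matrices and seed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A strictly positive definite matrix and a (rank-deficient) PSD matrix.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # positive definite
N = rng.standard_normal((n, n - 1))
B = N @ N.T                          # positive semi-definite

lam1_A = np.linalg.eigvalsh(A)[0]    # smallest eigenvalue of A

# Key inequality: tr(AB) >= lambda_1(A) * tr(B).
lhs = np.trace(A @ B)
rhs = lam1_A * np.trace(B)
assert lhs >= rhs

# Frobenius-norm bound: ||B||_F <= tr(B) for PSD B, hence
# ||B||_F <= tr(AB) / lambda_1(A).
assert np.linalg.norm(B, 'fro') <= np.trace(B) + 1e-12
assert np.linalg.norm(B, 'fro') <= lhs / lam1_A + 1e-12
```

The check for $\|B\|_F \le tr(B)$ holds because $\|B\|_F$ is the $\ell^2$ norm of the eigenvalue vector of $B$ while $tr(B)$ is its $\ell^1$ norm, and the eigenvalues are nonnegative.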
Without the assumption that the limit of $A$ is strictly positive definite, the conclusion is wrong. Consider the $1\times 1$ matrices $A = \frac{1}{k^2}$ and $B=k$. Then $AB = \frac1k = o(1)$, but $B\ne o(1)$; here $\lambda_1(A) \to 0$, so the condition $\lambda_1(A)^{-1} = O(1)$ fails.