Suppose we have a sequence of random variables $X_1,X_2,\ldots$ on a measurable space, and let $P$ and $Q$ be probability measures under each of which this sequence is i.i.d. Denote by $\mu_P$ and $\mu_Q$ the mean of $X_1$ under $P$ and under $Q$, respectively.
On the one hand, assuming $X_1$ is integrable under both measures, the strong law of large numbers tells me that
$$
\lim_n \frac{1}{n}\sum_{i=1}^n X_i=\mu_P\quad P\text{-a.s.}
$$
and
$$
\lim_n \frac{1}{n}\sum_{i=1}^n X_i=\mu_Q\quad Q\text{-a.s.}
$$
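As a quick numerical illustration of these two almost-sure limits, here is a small simulation sketch with hypothetical Bernoulli coordinates (the parameters are my own choice: $X_i\sim\mathrm{Ber}(0.3)$ under $P$ and $X_i\sim\mathrm{Ber}(0.7)$ under $Q$):

```python
import random

def sample_mean(p, n, seed=0):
    """Empirical mean of n i.i.d. Bernoulli(p) coordinates X_1, ..., X_n."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# Hypothetical choice: X_i ~ Ber(0.3) under P and X_i ~ Ber(0.7) under Q.
mean_under_P = sample_mean(0.3, 200_000, seed=1)  # settles near mu_P = 0.3
mean_under_Q = sample_mean(0.7, 200_000, seed=2)  # settles near mu_Q = 0.7
print(mean_under_P, mean_under_Q)
```

Each running average settles near the mean of the measure its path was drawn under, exactly as the two displays state.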
On the other hand, suppose now that $Q\ll P$, i.e. that $P(A)=0$ implies $Q(A)=0$.
Then the first limit statement means that there exists an event $A$ with $P(A)=1$ such that $\lim_n \frac{1}{n}\sum_{i=1}^n X_i(\omega)=\mu_P$ for all $\omega\in A$.
Since $Q\ll P$ and $P(A^c)=0$, we get $Q(A^c)=0$, i.e. $Q(A)=1$. Hence $$ \lim_n \frac{1}{n}\sum_{i=1}^n X_i=\mu_P\quad Q\text{-a.s.} $$
Combining this with the second limit statement, the limit equals both $\mu_P$ and $\mu_Q$ on a set of $Q$-measure one, so by uniqueness of limits $\mu_P=\mu_Q$. But this must be wrong, given that I can think of examples where $\mu_P$ and $\mu_Q$ are not equal.
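For concreteness, one standard example of the kind I have in mind (the parametrization is my own choice) can be written out as follows:

```latex
% Coordinate process on \Omega = \{0,1\}^{\mathbb{N}}, with X_i(\omega) = \omega_i.
% Let P and Q be the product measures under which the coordinates are i.i.d.
% Bernoulli(p) and Bernoulli(q), respectively, with p \neq q. Then
$$
\mu_P = E_P[X_1] = p \neq q = E_Q[X_1] = \mu_Q.
$$
```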