Define
$$\Phi(x,z):=\#\{n\le x: n \text{ is not divisible by any prime }<z\},$$
$$\Psi(x,z):=\#\{n\le x:\text{ if }p\mid n \text{ then }p<z\}.$$
Prove that
$$\Phi(x,z)=x\sum_{\substack{d\mid P_z\\ d\le x}}\frac{\mu(d)}{d}+O(\Psi(x,z)),$$
where $\displaystyle P_z=\prod_{p<z}p$.
After some calculation I got
$$\Phi(x,z)=x\sum_{\substack{d\mid P_z\\ d\le x}}\frac{\mu(d)}{d}+O\left(\sum_{d\mid P_z}\mu(d)\right).$$
From here, how can I deduce the stated bound in terms of $\Psi(x,z)$?
Any idea?
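For small $x$ and $z$, both counting functions can be computed by brute force, which is handy for checking any claimed identity numerically (a quick sketch; the helper names are ad hoc, not from the problem):

```python
def primes_below(z):
    """All primes p < z, by trial division (fine for small z)."""
    return [p for p in range(2, z) if all(p % q for q in range(2, p))]

def phi(x, z):
    """Phi(x, z): # of n <= x not divisible by any prime < z."""
    ps = primes_below(z)
    return sum(1 for n in range(1, x + 1) if all(n % p for p in ps))

def psi(x, z):
    """Psi(x, z): # of n <= x all of whose prime factors are < z."""
    ps = primes_below(z)
    def smooth(n):
        for p in ps:
            while n % p == 0:
                n //= p
        return n == 1
    return sum(1 for n in range(1, x + 1) if smooth(n))

print(phi(30, 3), psi(30, 3))  # 15 5: the odd n <= 30, and the powers of 2 up to 30
```

Note that $n=1$ is counted by both functions, since both defining conditions are vacuous for it.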
I guess you started by using the inclusion-exclusion principle to compute $\Phi(x,z)$, thus \begin{align} \Phi(x,z) &= \sum_{d \mid P_z} \mu(d)\biggl\lfloor\frac{x}{d}\biggr\rfloor \\ &= \sum_{\substack{d \mid P_z \\ d \leqslant x}} \mu(d)\biggl\lfloor\frac{x}{d}\biggr\rfloor \end{align} since $\bigl\lfloor\frac{x}{d}\bigr\rfloor = 0$ for $d > x$. Then you split $\lfloor t\rfloor = t - \lbrace t\rbrace$ and used the $t$ part to obtain the main term $$x\sum_{\substack{d \mid P_z \\ d \leqslant x}} \frac{\mu(d)}{d}\,.$$

However, you then made two mistakes in bounding the error term.

On the one hand, for a finite $S \subset \mathbb{N}$ we cannot conclude $$\sum_{d \in S} \mu(d)\biggl\lbrace\frac{x}{d}\biggr\rbrace \in O\Biggl(\sum_{d \in S} \mu(d)\Biggr)$$ just because $0 \leqslant \lbrace t\rbrace < 1$. There typically is much cancellation in $\sum_{d \in S} \mu(d)$, and it is possible that the size of $\bigl\lbrace \frac{x}{d}\bigr\rbrace$ is strongly correlated with the sign of $\mu(d)$, so that there is little cancellation in $\sum_{d \in S} \mu(d)\bigl\lbrace \frac{x}{d}\bigr\rbrace$. When $S$ is the set of divisors of an integer $n$, the cancellation in $\sum \mu(d)$ is extreme, for $$\sum_{d \mid n} \mu(d) = \begin{cases} 0 &\text{if } n > 1, \\ 1 &\text{if } n = 1. \end{cases}$$ And $P_z > 1$ for $z > 2$, but $$\sum_{d \mid P_z} \mu(d)\biggl\lbrace \frac{x}{d}\biggr\rbrace$$ will rarely be $0$. We do, however, have the trivial estimate $$\Biggl\lvert \sum_{d \in S} \mu(d)\biggl\lbrace \frac{x}{d}\biggr\rbrace \Biggr\rvert \leqslant \sum_{d \in S} \biggl\lvert \mu(d)\biggl\lbrace \frac{x}{d}\biggr\rbrace \biggr\rvert \leqslant \# S\,.\tag{$\ast$}$$

On the other hand, you dropped the constraint $d \leqslant x$ in the sum of the error term.
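The contrast between the two sums is easy to see numerically. Taking $z = 10$ (so $P_z = 210$) and $x = 100$, the following sketch compares $\sum_{d \mid P_z} \mu(d)$, which vanishes completely, with $\sum_{d \mid P_z} \mu(d)\bigl\lbrace\frac{x}{d}\bigr\rbrace$, which does not:

```python
from itertools import combinations
from math import prod

primes = [2, 3, 5, 7]   # the primes < z = 10, so P_z = 210
x = 100

mu_sum, err_sum = 0, 0.0
for k in range(len(primes) + 1):
    for c in combinations(primes, k):
        d = prod(c)                 # a squarefree divisor of P_z, with mu(d) = (-1)^k
        mu_sum += (-1) ** k
        err_sum += (-1) ** k * (x / d - x // d)   # mu(d) * {x/d}

print(mu_sum)   # 0: complete cancellation in the sum of mu(d)
print(err_sum)  # nonzero (6/7 here), despite the cancellation above
```

Every divisor of the squarefree number $P_z$ corresponds to a subset of its prime factors, which is why iterating over `combinations` enumerates exactly the divisors with $\mu(d) = (-1)^k$.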
If we keep that, we find $$\Biggl\lvert \Phi(x,z) - x\sum_{\substack{d \mid P_z \\ d \leqslant x}} \frac{\mu(d)}{d}\Biggr\rvert = \Biggl\lvert \sum_{\substack{d \mid P_z \\ d \leqslant x}} \mu(d)\biggl\lbrace \frac{x}{d}\biggr\rbrace \Biggr\rvert \leqslant \# \bigl\{ d \leqslant x : d \mid P_z\bigr\}$$ using $(\ast)$. And now all that is missing is the observation that $$\bigl\{ d \leqslant x : d \mid P_z\bigr\} \subseteq \bigl\{ n \leqslant x : p \mid n \implies p < z\bigr\}$$ which immediately yields $$\Biggl\lvert \Phi(x,z) - x\sum_{\substack{d \mid P_z \\ d \leqslant x}} \frac{\mu(d)}{d}\Biggr\rvert \leqslant \# \bigl\{ d \leqslant x : d \mid P_z\bigr\} \leqslant \Psi(x,z)\,.$$ Thus we not only get the desired $O\bigl(\Psi(x,z)\bigr)$ error bound, we even get an explicit constant: the modulus of the error term is bounded by $\Psi(x,z)$.
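The explicit inequality $\bigl\lvert \Phi(x,z) - x\sum_{d \mid P_z,\, d \leqslant x} \mu(d)/d \bigr\rvert \leqslant \Psi(x,z)$ can be sanity-checked by brute force for small parameters (a sketch; the function names are ad hoc):

```python
from itertools import combinations
from math import prod

def primes_below(z):
    """All primes p < z, by trial division (fine for small z)."""
    return [p for p in range(2, z) if all(p % q for q in range(2, p))]

def check_bound(x, z):
    """Check |Phi(x,z) - x * sum_{d | P_z, d <= x} mu(d)/d| <= Psi(x,z)."""
    ps = primes_below(z)
    # Main term: sum over squarefree divisors d <= x of P_z, i.e. over
    # subsets of the primes below z, with mu(d) = (-1)^(size of subset).
    main = sum((-1) ** k / d
               for k in range(len(ps) + 1)
               for c in combinations(ps, k)
               if (d := prod(c)) <= x)
    phi = sum(1 for n in range(1, x + 1) if all(n % p for p in ps))
    def smooth(n):
        for p in ps:
            while n % p == 0:
                n //= p
        return n == 1
    psi = sum(1 for n in range(1, x + 1) if smooth(n))
    return abs(phi - x * main) <= psi

print(all(check_bound(x, z) for x in (50, 100, 300) for z in (3, 5, 10)))  # True
```

Of course this checks only a few cases; the point of the proof above is that the bound holds for all $x$ and $z$.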