The probability that $s_1$ and $s_2$ have no common factors is given by $$ 1-\sum_q P(q|s_1)P(q|s_2)\ge 1-\sum_q \frac{1}{q^2} $$ where $q$ is any prime number and $s_1,s_2$ are chosen uniformly at random from $0,1,\cdots,r-1$
Here $P(q|s_i)$ is the probability of $q$ dividing $s_i$.
In my reference, this is proven by noting that $P(q|s_i)\le\dfrac{1}{q}$, and then showing that $1-\sum_q P(q|s_1)P(q|s_2)$ is at least $1/4$.
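As a numeric sanity check (not the book's proof), granting the lemma $P(q|s_i)\le 1/q$, one can verify that $\sum_q 1/q^2$ over primes stays well below $3/4$, which is what makes the final bound at least $1/4$. A sketch using a simple sieve (the helper name `primes_up_to` is mine):

```python
def primes_up_to(n):
    """Return all primes <= n via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, flag in enumerate(sieve) if flag]

# Partial sum of 1/q^2 over primes; the tail beyond 10**6 is negligible
# (bounded by sum_{n > 10**6} 1/n^2 < 10**-6).
total = sum(1.0 / q**2 for q in primes_up_to(10**6))
print(total)  # ≈ 0.4522, comfortably below 3/4
```

So $1-\sum_q P(q|s_1)P(q|s_2)\ge 1-\sum_q 1/q^2 > 1/4$ with a lot of room to spare.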
Could someone help justify the claim that $P(q|s_i)\le\dfrac{1}{q}$?
A screenshot of the original context, from page 231 of Quantum Computation and Quantum Information by Nielsen and Chuang:

Let $n$ be the largest integer such that $nq\leq r$; then there are exactly $n$ elements of $\{1,\dots,r\}$ divisible by $q$. The probability is therefore $$P(q|s)=\frac{n}{r}.$$ Since $nq\leq r$, we get $n/r\leq 1/q$. If you include $0$ (which every prime divides), the count becomes $n+1$, and all you can say is $(n+1)/r\leq 1/q + 1/r\leq 2/q$. I believe the book must have meant $\{1,\dots,r\}$ rather than $\{0,1,\dots,r-1\}$; otherwise the inequality doesn't hold, as @ckefa pointed out in their comment.
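The counting argument above is easy to check by brute force. A small sketch (the helper name `prob_divisible` is mine) that computes $P(q|s)$ exactly over both candidate ranges and exhibits the failure case with $0$ included:

```python
from fractions import Fraction

def prob_divisible(q, r, include_zero=False):
    """Exact P(q | s) for s uniform on {1,...,r}, or {0,...,r-1} if include_zero."""
    domain = range(r) if include_zero else range(1, r + 1)
    count = sum(1 for s in domain if s % q == 0)  # 0 counts as divisible by every q
    return Fraction(count, r)

# Over {1,...,r}: P(q|s) = floor(r/q)/r <= 1/q holds for every prime q.
for r in range(2, 200):
    for q in (2, 3, 5, 7, 11):
        assert prob_divisible(q, r) <= Fraction(1, q)

# Over {0,...,r-1} the bound can fail: for r = 3, q = 2 the divisible
# elements are {0, 2}, giving P = 2/3 > 1/2.
print(prob_divisible(2, 3, include_zero=True))  # prints 2/3
```

Using exact `Fraction` arithmetic avoids any floating-point ambiguity when comparing against $1/q$.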