This follows up on another question I asked. From that question, I understood that looking for a collision without Floyd's algorithm would be costly in space. Now I have a new question.
Suppose two random numbers collide mod $p$ for some prime $p$ dividing $N$. Then, as far as I know, we are guaranteed to find a factor of $N$, whether $N$ itself or a non-trivial factor, and the gcd computation will discover it. So why should I look for collisions directly in the sequence mod $N$? Won't the gcd already tell me?
To look at the question more closely, I wrote the following program. It draws pairs from Python's random number generator and counts how many chances the algorithm has had; if it exceeds a limit, it stops and reports failure. (I chose the limit $2\sqrt{N}$.)
import random
from math import gcd, isqrt

def rho_random(N):
    chances = 0

    def random_mod_N():
        return random.randint(1, N - 1)

    while True:
        # Give up after 2*sqrt(N) attempts (isqrt avoids float rounding).
        if 2 * isqrt(N) <= chances:
            return "unlucky"
        a = random_mod_N()
        b = random_mod_N()
        d = gcd(a - b, N)  # math.gcd ignores the sign of a - b
        chances = chances + 1
        if d == N:
            return "trivial factor"
        if 1 < d < N:
            return d
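For example, on the small semiprime $10403 = 101 \cdot 103$ (a test value of my own, not special in any way), the limit allows about $200$ chances, and each independent pair collides mod $101$ with probability roughly $1/101$, so the call usually returns $101$ or $103$:

print(rho_random(10403))  # usually 101 or 103, occasionally "unlucky"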
Thank you.
Greatest common divisor computations are expensive in time, compared to not doing them. Brent's variant of Pollard's method takes, for example, $100$ candidates, multiplies them together modulo $N$, and then computes a single $\gcd$. This replaces $100$ $\gcd$s with $99$ multiplies and $1$ $\gcd$. Multiplications modulo $N$ are vastly faster than $\gcd$s, asymptotically by a factor of $O(\log N)$.
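Here is a minimal sketch of that batching idea, kept in the random-pair setting of your program (the function and parameter names are my own; Brent's actual algorithm iterates a polynomial rather than drawing fresh random pairs):

import math
import random

def batched_gcd_trial(N, batch_size=100, rounds=1000):
    # Sketch: one gcd per batch instead of one gcd per candidate.
    for _ in range(rounds):
        acc = 1
        for _ in range(batch_size):
            a = random.randint(1, N - 1)
            b = random.randint(1, N - 1)
            acc = acc * (a - b) % N   # one cheap multiply mod N per candidate
        d = math.gcd(acc, N)          # one expensive gcd per batch
        if 1 < d < N:
            return d                  # non-trivial factor of N
        # d == N can happen if some pair collided mod N itself (acc == 0);
        # a real implementation would redo that batch with individual gcds.
    return None

The trade-off is exactly the one described above: each batch costs $99$ extra multiplications but saves $99$ $\gcd$s, and a factor found by any candidate in the batch still divides the accumulated product.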
Brent describes and analyzes this variant in Brent, Richard P. (1980), "An Improved Monte Carlo Factorization Algorithm", BIT 20: 176–184, doi:10.1007/BF01933190; the batching idea is in section 5. In section 7 he discusses not even multiplying in $|x_i - x_j|$ when $i$ and $j$ are too close; with this modification, his runtimes drop by about 24%.
(As regards your prior question, section 8 of the same paper gives an empirical analysis of whether iterated polynomials are sufficiently random for this method.)