Let's say I have a range of numbers from 1 to 10^9 and I need to prime-factorize each one of them. My basic algorithm is:
1. Use a prime-sieve algorithm (Atkin or Eratosthenes; the segmented approach is preferable to keep memory usage bounded) to generate the primes less than 10^9.
2. Divide each n in the list by these primes in ascending order until the quotient becomes 1, using a count array to store the power of each prime needed to factorize the given number.
Even though this basic algorithm works fine, I need to optimize it in any way possible to increase its speed. I also want to know whether there is any way to guess the minimum range of primes required to factorize a given number. For example: the naive algorithm to check whether a given number n is prime is to check for possible divisors from 2 to sqrt(n). I want to know if it's enough to generate primes up to a certain limit to prime-factorize a given n. To be more precise:
999,824 = 2^4 × 7 × 79 × 113
My algorithm would generate primes in the range 1 to 999,824 to prime-factorize the above n, which is a solid waste of resources and degrades performance. So I want to know whether it's possible to know beforehand that it would be enough to find primes less than, say, 115 to factorize the above n.
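For what it's worth, here is a minimal sketch of the sqrt bound being asked about: trial division only needs to try divisors up to sqrt(n), because once every prime factor below that bound has been divided out, whatever remains must itself be prime. (The function name `prime_factorize` is just illustrative, not from the question.)

```python
def prime_factorize(n):
    """Trial-divide n by candidates up to sqrt(n); the leftover cofactor is prime."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                       # remaining cofactor > sqrt(original n) is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorize(999824))     # {2: 4, 7: 1, 79: 1, 113: 1}
```

Note that for 999,824 this never tries a divisor beyond 79; the final prime 113 falls out as the leftover cofactor without being searched for.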
All of them? The Eratosthenes sieve, or rather something similar, will probably win: most non-sieving approaches need to duplicate their work for each prime, as opposed to getting some economy of scale. You don't need to generate any primes higher than sqrt(the highest number you care about).
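A sketch of that bound in practice: sieve primes only up to sqrt(the maximum), then factor any number in the range with them; a cofactor left over after the loop is itself prime. (Function names here are illustrative, not from the answer.)

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [p for p, ok in enumerate(is_prime) if ok]

def factorize_with_primes(n, primes):
    """Divide by sieved primes up to sqrt(n); any cofactor > 1 left over is prime."""
    factors = []
    for p in primes:
        if p * p > n:
            break
        while n % p == 0:
            factors.append(p)
            n //= p
    if n > 1:
        factors.append(n)
    return factors

# sqrt(999824) < 1000, so primes up to 1000 suffice:
print(factorize_with_primes(999824, primes_up_to(1000)))   # [2, 2, 2, 2, 7, 79, 113]
```

For the questioner's full range, sqrt(10^9) is about 31,623, so a sieve over [2, 31623] is enough to factor every number up to 10^9; no sieve over the whole 10^9 range is needed.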
Also, instead of doing this in two passes, simply give each number a pair of data points when you hit it during the sieve: the first prime that divides it evenly, and the cofactor that division leaves behind. So 522 would have the data (2, 261). Any number that doesn't have such a pair is prime; all others can be factored by prepending their smallest prime factor to the factor list of their cofactor. Following the chain for 522: 522 = 2 × 261, 261 = 3 × 87, 87 = 3 × 29, and 29 is prime, so factors(522) = 2, 3, 3, 29.
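The pair-of-data-points idea above is essentially a smallest-prime-factor (SPF) table; a minimal sketch (names `build_spf`/`factorize` are illustrative):

```python
def build_spf(limit):
    """spf[k] = smallest prime factor of k, filled in by a sieve pass."""
    spf = list(range(limit + 1))            # start with spf[k] = k
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                     # p is prime
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:             # record only the first (smallest) hit
                    spf[m] = p
    return spf

def factorize(n, spf):
    """Walk the chain n -> n // spf[n] down to 1, collecting factors in order."""
    factors = []
    while n > 1:
        factors.append(spf[n])
        n //= spf[n]
    return factors

spf = build_spf(1000)
print(factorize(522, spf))                  # [2, 3, 3, 29]
```

After the one-time sieve, each factorization costs only O(number of prime factors) table lookups, which is exactly the economy of scale the answer describes when many numbers in a range must be factored.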