I am learning basic number theory, and as far as I have read, essentially all the primality tests (or proven primality theorems) that can decide whether a given $n$ is prime (or a special pseudoprime) are based on comparing $n$ with a usually large quantity in $[n+1,\infty)$.
I am not including prime sieves in the concept of primality test, because they normally involve more comparisons than a primality test does (e.g. the Sieve of Eratosthenes).
For instance, I am considering the following as primality tests (not in order of importance, just as I recall them):
- Fermat's little theorem (requires powers, e.g. $2^n$)
- Wilson's theorem (requires $n!$)
- Fibonacci-Lucas primality tests ($F_n$ grows very quickly)
- Catalan primality tests ($C_n$ grows very quickly)
- Pascal's triangle binomial coefficient congruences (these require binomial coefficients, so the factorials involved grow very quickly)
- Any other probabilistic tests.
- etc.
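To make the point concrete, here is a minimal sketch (the function name `wilson_naive` is my own) of Wilson's theorem applied literally as a test: the quantity $(n-1)!$ that $n$ is compared against dwarfs the candidate itself.

```python
import math

def wilson_naive(n):
    # Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n).
    # Computed literally, the factorial is vastly larger than n.
    return n > 1 and math.factorial(n - 1) % n == n - 1

print(wilson_naive(13))    # 12! = 479001600, far bigger than 13
```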
All of those theorems, when applied as tests, involve quantities larger than the candidate $n$ being tested.
"Under $n$" there are prime number sieves, but, if I am not wrong, there are no primality tests based on a comparison with some value $k \in [0,n]$ in the way the theorems above are applied. All the proven theorems seem to imply that any primality property is obtained by comparing values "over $n$" rather than "under $n$". Is that a necessary condition?
I understand that one possible answer is simply "they have not been found yet, if they exist", but on the other hand: is there a proof that a primality property in the fashion of the above theorems is possible only if the comparison is done with a number larger than the candidate $n$?
So the question I would like to share is:
Why are there no primality tests (or primality properties, or theorems of primality) based on comparing $n$ with some value $k \in [0,n]$? Is there a proven theoretical impossibility behind this?
Thank you!
(If there are online references to this topic, they are also very welcome. As usual, I will remove the question if it is not clear; please just let me know.)
The quick answer is that all of these algorithms actually work with numbers modulo $n$, and in particular you never go beyond $n$. Each test gives you a recipe, an algorithm, for computing some numbers modulo $n$ and comparing them. Sometimes this recipe can be summarized by an expression which, if evaluated without reducing modulo $n$, would be larger than $n$. Such a succinct representation is misleading: since all arithmetic is done modulo $n$, even intermediate results never exceed roughly $n^2$ (everything reduces to addition and multiplication of residues smaller than $n$).
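For example, the Fermat test never needs to form $2^{n-1}$ itself: square-and-multiply reduces modulo $n$ at every step. A minimal sketch (the function name `fermat_test` is my own, not from any particular source):

```python
def fermat_test(n, a=2):
    # Fermat test: if n is prime and gcd(a, n) = 1, then
    # a^(n-1) ≡ 1 (mod n). Square-and-multiply reduces mod n
    # after each operation, so no intermediate value ever
    # exceeds (n-1)^2, even though 2^(n-1) itself is enormous.
    result, base, e = 1, a % n, n - 1
    while e:
        if e & 1:
            result = (result * base) % n
        base = (base * base) % n
        e >>= 1
    return result == 1

print(fermat_test(97))   # prime: passes
print(fermat_test(91))   # composite (7 * 13): fails for base 2
print(fermat_test(341))  # composite, but a base-2 Fermat pseudoprime: passes
```

(In Python this is exactly what the built-in three-argument `pow(a, n - 1, n)` computes.)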