The sieve of Eratosthenes is an algorithm to calculate all the primes up to $n$.
It works by iterating $i$ from $2$ to $n$ and, at each step, striking out the multiples of $i$.
In many optimized versions, I see that we can actually stop at $i \leq \sqrt n$, but I don't understand why.
The explanations I found are all based on this hypothesis:
Every composite number has at least one prime factor that is at most its square root.
I understand this hypothesis, but I can't see how to draw the conclusion from it.
Programmatically, it makes sense to me if we use the optimization of striking the multiples of $i$ starting from $i^2$: once $i$ reaches $\sqrt n$, the first multiple we'd strike is $n$ itself, so there's no point in iterating $i$ further.
But mathematically, I don't see how, by stopping at $\sqrt n$, we can be sure that all the remaining unstruck integers are prime, using only the hypothesis above.
Thanks a lot for your hints.
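For reference, here is a Python sketch of the optimized sieve I'm describing (the function and variable names are my own):

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes up to n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    i = 2
    while i * i <= n:  # i.e. stop once i > sqrt(n)
        if is_prime[i]:
            # Start striking at i*i: smaller multiples of i were
            # already struck as multiples of some smaller prime.
            for m in range(i * i, n + 1, i):
                is_prime[m] = False
        i += 1
    return [k for k in range(n + 1) if is_prime[k]]
```

For example, `sieve(30)` gives `[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]`.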
EDIT: I see that my question has been associated with another question, but if you read that other thread, its OP specifically stated that they didn't want to know why we can stop at $\sqrt n$, but rather why we pick $\sqrt n$ in the first place.
You can argue by contradiction. To make things clear, when I talk about a number's 'prime factors' I'm going to count multiple instances of the same prime distinctly; for instance, $36=2^2\cdot3^2$ has four prime factors: $\{2, 2, 3, 3\}$. (This is sometimes referred to as a multiset of prime factors, but that's an aside...)
Suppose there were a non-prime left after you've sieved up to $\sqrt{n}$. Then it must have at least two prime factors (counted with multiplicity, as above), and each of those factors must be larger than $\sqrt{n}$: otherwise the number would be a multiple of some $i \leq \sqrt{n}$ and would already have been struck out. This means that the number itself must be larger than $\sqrt{n}\cdot\sqrt{n}=n$, yet we were only looking at numbers up to $n$: contradiction.
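The hypothesis itself is easy to check empirically: a composite's *smallest* prime factor never exceeds its square root, which is exactly the guarantee the sieving relies on. A quick Python check (helper names are my own):

```python
import math

def smallest_prime_factor(m):
    """Return the smallest prime factor of m (m itself if m is prime)."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # no divisor up to sqrt(m), so m is prime

# Every composite m has its smallest prime factor <= sqrt(m).
for m in range(4, 10_001):
    spf = smallest_prime_factor(m)
    if spf != m:  # m is composite
        assert spf <= math.isqrt(m)
```

Note that this is the same reasoning in the contrapositive direction: if a number up to $n$ is composite, it has a prime factor at most $\sqrt n$, so sieving with the numbers up to $\sqrt n$ already strikes it.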