Say there is a multiset of numbers of size $N$, where $N > 1,000,000,000$, and each element $i$ satisfies $1 \leq i \leq 1000$. In other words, there are over a billion numbers, each between $1$ and $1,000$.
I need to sample this set of numbers to estimate how many times each number appears in the population, i.e. I need to estimate how many $1$'s are included, how many $2$'s are included, how many $3$'s are included, and so on...
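For concreteness, here is a minimal sketch of the kind of estimate I mean (the `draw_sample` function is a stand-in, since I can't materialize a billion numbers here; the uniform draw over $1$–$1000$ is purely an assumption for illustration):

```python
from collections import Counter
import random

rng = random.Random(0)

# Stand-in for drawing from the real billion-element population
# (assumption: uniform over 1..1000, purely for illustration).
def draw_sample(n):
    return [rng.randint(1, 1000) for _ in range(n)]

sample = draw_sample(10_000)
counts = Counter(sample)

# Estimated proportion of each value in the population:
proportions = {value: count / len(sample) for value, count in counts.items()}
```

The question is how large `n` has to be for those estimated proportions to be trustworthy.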
How do I determine the needed sample size based on my desired confidence level and margin of error? For example, if I want a $95\%$ confidence level and $5\%$ margin of error (the estimated population proportion for each number is within $5$ percentage points of the real proportion, $95\%$ of the time), how many samples do I need to take?
I've found this formula for calculating sample sizes on large populations:
$$ S = \frac{z^2p(1-p)}{e^2} $$
Where $S$ is the needed sample size, $z$ is the z-score derived from the confidence level, $p$ is the anticipated population proportion, and $e$ is the margin of error.
Since $p$ is unknown, a value of $0.5$ can be used which is the worst case because it maximizes the value of $p(1-p)$.
So for example, if I want a $95\%$ confidence level (z-score of $1.96$) and a $5\%$ margin of error:
$$ S = \frac{1.96^2 \cdot 0.25}{0.05^2} = \frac{3.8416 \cdot 0.25}{0.0025} = \frac{0.9604}{0.0025} = 384.16 $$
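The same calculation in Python (a sketch: `NormalDist.inv_cdf` supplies the two-sided z-score, and the raw result is rounded up, as is conventional for sample sizes):

```python
import math
from statistics import NormalDist

def sample_size(confidence, margin, p=0.5):
    """Large-population sample-size formula from the question.
    p is the anticipated proportion; 0.5 is the worst case."""
    # Two-sided z-score for the given confidence level.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    raw = z**2 * p * (1 - p) / margin**2
    return math.ceil(raw)  # round up, since a sample can't be fractional

print(sample_size(0.95, 0.05))  # 385 (raw value is about 384)
```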
The formula says I only need a sample size of roughly $384$ for this confidence level and margin of error. Intuitively, this doesn't seem sufficient, since there are $1,000$ possible values for each number: a sample of $384$ doesn't even provide one observation per possible value. This sample size seems like it would work for a small number of possible values, e.g. if each $i$ satisfied $1 \leq i \leq 2$, but I don't see how it works in this situation.
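To make the margin-of-error criterion concrete, this is the empirical check I have in mind (a sketch using a synthetic uniform population, which is purely an assumption on my part):

```python
import random

# Synthetic stand-in population: each value 1..1000 equally likely (assumption).
# Draw many samples of size 384 and count how often the estimated proportion
# of one particular value (here, 1) lands within 0.05 of its true proportion.
rng = random.Random(42)
true_p = 1 / 1000
n, trials = 384, 2000
hits = 0
for _ in range(trials):
    sample = [rng.randint(1, 1000) for _ in range(n)]
    p_hat = sample.count(1) / n  # estimated proportion of the value 1
    if abs(p_hat - true_p) <= 0.05:
        hits += 1
print(hits / trials)  # fraction of trials within the margin of error
```

I'd want the printed fraction to be at least $0.95$ for every one of the $1,000$ values simultaneously, and I'm not sure the formula accounts for that.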
How can I calculate the needed sample size for populations with a large number of possible values?