I couldn't find this elsewhere, so I thought I'd try to figure out exactly how many numbers a typical desktop computer can represent in memory. I'm thinking about this in the context of numerical algorithms for solving mathematical problems.
I'm pretty sure single-precision (SP) numbers, along with the typical 4- and 8-byte signed and unsigned integers, are all a subset of the numbers representable in double precision (DP), so I think counting the representable numbers in DP would answer the question. Let's assume IEEE double precision, the format found on most machines: 1 sign bit, 11 exponent ($e$) bits, and 52 mantissa bits.
First, the normalized numbers (implicit leading 1 in the mantissa). With 52 available bits, there are $2^{52}$ different mantissas. Each mantissa can be paired with any exponent. Note $e \in [0, 2047]$ for IEEE DP, but $e=0$ and $e=2047$ have special meanings ($0$, subnormal numbers, $NaN$ and $\pm \infty$, depending on the mantissa), so we actually have $2046$ different exponents available for normalized numbers. Also, for each mantissa-exponent combination there are 2 signs available, $+$ and $-$.
Next, the subnormal numbers (no leading 1 assumed in the mantissa). There are still $2^{52}$ possible mantissas, but subnormal numbers are characterized by $e = 0$, so only one exponent value is available. Again, for each mantissa there are 2 signs available, $+$ and $-$.
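As a quick sanity check on the encoding (a Python sketch using the standard `struct` module to reinterpret bit patterns as doubles):

```python
import struct

def bits_to_double(bits):
    """Reinterpret a 64-bit integer as an IEEE 754 double."""
    return struct.unpack('>d', bits.to_bytes(8, 'big'))[0]

# e = 0, smallest nonzero mantissa: the smallest positive subnormal
print(bits_to_double(0x0000000000000001))  # 5e-324

# e = 0, largest mantissa: the largest subnormal,
# just below the smallest normalized number 2**-1022
print(bits_to_double(0x000FFFFFFFFFFFFF) < 2.0**-1022)  # True
```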
Finally, the four special values $0$, $NaN$, $+\infty$ and $-\infty$ can be represented.
Putting these together, there is a total of $$ 2 \cdot 2046 \cdot 2^{52} + 2 \cdot 2^{52} + 4 = 18437736874454810628 \approx 1.84 \times 10^{19} $$ (18.4 quintillion!) numbers representable on a typical computer.
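For what it's worth, the arithmetic of the tally above can be checked directly, e.g. in Python (exact integer arithmetic):

```python
# Counting per the breakdown above: normals, subnormals, specials.
normals    = 2 * 2046 * 2**52   # sign x usable exponents x mantissas
subnormals = 2 * 2**52          # sign x mantissas, with e = 0
specials   = 4                  # 0, NaN, +inf, -inf

total = normals + subnormals + specials
print(total)  # 18437736874454810628
```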
Does this seem correct? Does anyone know of a good resource to verify it? I'm afraid I double counted something, or left a significant set of numbers out.
There are 8-byte signed and unsigned integers that cannot be represented as double-precision floating-point numbers.
For instance, the range of 64-bit unsigned integers is from 0 to $2^{64}-1$. This is a total of $2^{64}=18446744073709551616$, the maximum possible for a 64-bit value. Signed integers represent a different range, but the total number of integers possible is the same.
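A quick illustration of the first point in Python (where comparisons between `int` and `float` are exact): $2^{53}+1$ is a 64-bit integer with no exact double representation, since its mantissa would need 53 bits.

```python
# 2**53 is the last integer in the contiguous exactly-representable range;
# 2**53 + 1 rounds to a neighboring double and so fails the round-trip.
n = 2**53 + 1
print(float(2**53) == 2**53)  # True
print(float(n) == n)          # False
```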
Evidently double-precision floating-point numbers, because of the NaNs, cannot represent that many distinct values. If I'm not mistaken, every double-precision float (except zero) has a unique representation, which means we need to exclude only the redundant NaNs. If the exponent field is 7FF, then the value represented must be one of the infinities or NaN. There are $2^{53}=9007199254740992$ such bit patterns (counting both signs), of which only three represent distinct values (NaN, $+\infty$, and $-\infty$). If $0$ and $-0$ are considered distinct, there are then: $$2^{64}-2^{53}+3=18437736874454810627$$
such numbers. If $0$ and $-0$ are considered equal, then there is one fewer. Your number is almost correct, but you count zero one time too many: it appears twice among the subnormals (as $+0$ and $-0$) and once more as a special value.
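To verify the arithmetic in Python:

```python
# All 2**64 bit patterns, minus the 2**53 patterns with exponent 0x7FF,
# plus the 3 values those patterns represent (NaN, +inf, -inf);
# +0 and -0 counted as distinct.
distinct = 2**64 - 2**53 + 3
print(distinct)  # 18437736874454810627

# The question's tally is exactly one higher, matching the
# triple-counted zero (+0, -0, and the "special value" 0).
questioners_tally = 2 * 2046 * 2**52 + 2 * 2**52 + 4
print(questioners_tally - distinct)  # 1
```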
(Also, it's worth noting that this limit is fairly artificial. Some [rare] computers natively support quadruple-precision floating point, for example. And of course, there are BigNumber and BigDecimal software implementations that support numbers of any size that fits in memory.)