In double-precision floating-point format there are effectively $53$ bits of mantissa stored. This lets us estimate the maximum number of decimal digits of precision available: $$N_{max}=\log_{10}2^{53}\approx15.955.$$
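The estimate is easy to check numerically. A small sketch (the helper name `decimal_digits` is my own, not a standard function):

```python
import math

def decimal_digits(mantissa_bits: int) -> float:
    """Decimal digits of precision for a p-bit mantissa: log10(2**p) = p * log10(2)."""
    return mantissa_bits * math.log10(2)

print(decimal_digits(53))  # ~15.95 for IEEE 754 double precision
print(decimal_digits(24))  # ~7.22 for single precision
```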
Of course, I understand that this estimate implies that at least $15$ decimal digits are guaranteed to be stored in this format. But the actual number is quite close to $16$, and it seems to me that in most cases we could somehow "extract" this extra digit and hope that it's correct.
But strictly speaking, what does it really mean that we have an additional $0.955$ digits of precision? Does it mean that there are individual numbers which can't be stored with the full $16$ digits of precision, while for most numbers the precision will be $16$ digits? Or does it just mean that we must use some particular rounding method to always come up with $16$ digits of precision? Or maybe something else?
Consider a simpler example: a floating-point number with a 6-bit mantissa. Its precision would be about $\log_{10}2^6\approx1.8$ decimal digits. This means that if we try to represent all possible 2-decimal-digit numbers, some of them will come back wrong after rounding. For example, $8.2$ can't be represented quasi-exactly in such a number: its binary expansion is $8.2\approx1000.00110011\ldots_2$, and rounding to $6$ significant bits in either direction gives either $1000.00_2=8.0$ or $1000.01_2=8.25$, which (with round-half-up) prints back as $8.3$.
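This 6-bit example can be reproduced numerically. Below is a sketch (the helper `quantize` is my own, built on the standard `math.frexp`) that rounds a value to the nearest float with a 6-bit mantissa, showing that $8.2$ lands on $8.25$:

```python
import math

def quantize(x: float, bits: int = 6) -> float:
    """Round x to the nearest float with a `bits`-bit mantissa."""
    m, e = math.frexp(x)           # x = m * 2**e with 0.5 <= |m| < 1
    scaled = round(m * 2**bits)    # keep only `bits` significant bits
    return scaled / 2**bits * 2**e

print(quantize(8.2))   # 8.25 -- the nearest 6-bit value; 8.2 itself is lost
print(quantize(8.0))   # 8.0  -- exactly representable, survives unchanged
```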
Thus the answer is that a fractional number of digits of precision means that not every decimal number of the corresponding length can be stored in such a way that rounding its floating-point representation back to that many decimal digits recovers the original number.
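The same effect shows up in actual doubles. A concrete example of my own choosing: near $2^{53}$ the gap between consecutive doubles is $2$, so odd 16-digit integers in that range cannot survive a round trip, while any 15-digit decimal is guaranteed to:

```python
# 2**53 + 1 = 9007199254740993 is a 16-digit decimal that falls exactly
# between two representable doubles; it collapses to 2**53.
x = float("9007199254740993")
print(x == 9007199254740992.0)   # True: the final digit was lost
print(f"{x:.16g}")               # 9007199254740992, not ...93

# A 15-digit decimal always round-trips through a double:
y = float("900719925474099")
print(f"{y:.15g}")               # 900719925474099
```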