biased exponent vs unbiased exponent


Following IEEE-754, I am looking for an example showing that processing an unbiased floating-point representation is harder than processing a biased one. All I see in the texts is that unbiased exponents have to be compared using a signed-number representation, e.g. two's complement.

Why is that hard? For integers we use two's complement; why don't we use it for floating-point exponents? More complex circuits are built nowadays....




I think the main reason for this format is that detecting special cases is easier with a biased representation than with an unbiased one. The arithmetic can be somewhat more complex (though not by much compared with ordinary signed fixed-point arithmetic), but detecting special values is much simpler. For example, with a biased exponent, a floating-point zero is recognized just by checking that all the bits are 0 (exponent and mantissa); with an unbiased exponent, that test would probably instead entail a comparison against a stored value. The same goes for overflow, infinity, not-a-number, etc.

So you pay a little extra for the arithmetic (and fixed-point arithmetic is not that expensive), but you gain much more in terms of exception detection.