Floating Point Precision Algorithm


In my database, values are stored with a precision of 10 decimal places, as DECIMAL(30,10).

The user can enter either x or 1/x, but I need to store 1/x. If the user enters 1310, it is saved in the database as 1/1310 = 0.0007633588. When I want to bring it back, 1/0.0007633588 = 1309.999963, which is not 1310.
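The round trip above can be reproduced with Python's `decimal` module (a sketch, assuming the DECIMAL(30,10) column behaves like rounding the reciprocal to 10 decimal places):

```python
from decimal import Decimal, ROUND_HALF_UP

x = Decimal(1310)

# Simulate storing 1/x in a DECIMAL(30,10) column: keep 10 decimal places.
stored = (Decimal(1) / x).quantize(Decimal("0.0000000001"), rounding=ROUND_HALF_UP)
print(stored)  # 0.0007633588, the value saved in the database

# Inverting the truncated value does not give 1310 back.
recovered = Decimal(1) / stored
print(recovered)  # ≈ 1309.999963..., not 1310
```

The reciprocal of 1310 is a non-terminating decimal, so cutting it to 10 decimal places loses information; inverting the truncated value can only approximate the original.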

If I do the same calculation in Excel or a calculator application, it always returns the correct value (1310 in this case).

Excel example:

-------------------------------
| 3              |  3         |
| 0.333333333    | 1/3        |
| 3              | 1/0.333333 |
-------------------------------
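One plausible explanation for the Excel behavior is that it displays results rounded to fewer significant digits than it computes, which hides the tiny reciprocal error. A sketch of the same idea, assuming Python's `decimal` module (the `recover` helper is a hypothetical name, not an established API): invert the stored value, then round the result back to the same number of significant digits that survived storage.

```python
from decimal import Decimal, ROUND_HALF_UP

def recover(stored: Decimal) -> Decimal:
    """Hypothetical helper: invert a stored reciprocal, then round the
    result to the same number of significant digits as the stored value,
    so the error introduced by DECIMAL truncation is absorbed."""
    raw = Decimal(1) / stored
    # Significant digits actually kept in storage (normalize strips trailing zeros).
    sig = len(stored.normalize().as_tuple().digits)
    # Quantum for rounding raw to `sig` significant digits.
    quantum = Decimal(1).scaleb(raw.adjusted() - sig + 1)
    return raw.quantize(quantum, rounding=ROUND_HALF_UP)

print(recover(Decimal("0.0007633588")))  # 1310.000
print(recover(Decimal("0.3333333333")))  # 3.000000000
```

This works as long as the relative error of the stored reciprocal is under half a unit in its last significant digit. A simpler alternative, if the schema allows it, is to store the user's original x alongside (or instead of) 1/x, avoiding the inversion entirely.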

Is there any algorithm to follow?