As the title says, when I run in Python
print((1 - (40 / 100 * 2)))
it gives 0.19999999999999996 instead of 0.2.
Why?
I am using Python 3.4.3 and ninja-IDE as my editor.
This is because of how floating-point numbers are represented in computers. Floats are stored in binary rather than decimal, and since binary cannot exactly represent certain decimal fractions (such as 0.4 or 0.8), small rounding errors creep in.
In your example, the result of 1 - (40 / 100 * 2) should be 0.2, but because of the limited precision of floating-point numbers the computed result deviates slightly and is displayed as 0.19999999999999996.
If you need more accurate results, you can use the Decimal type, or round away the error. For example, you can use the round() function on the result:
print(round(1 - (40 / 100 * 2), 2))
This will approximate the result to two decimal places and output 0.2.
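If you want the arithmetic itself to be exact rather than rounded after the fact, the standard-library decimal module mentioned above avoids binary floats entirely. A minimal sketch:

```python
from decimal import Decimal

# Build the expression from strings so each value is an exact decimal,
# not a binary float that has already been rounded.
result = Decimal("1") - (Decimal("40") / Decimal("100") * Decimal("2"))
print(result)  # 0.2, with no floating-point error
```

Constructing Decimal from strings (not from floats) is what keeps the values exact.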
The round() function in Python rounds a number to a specified number of decimal places. It takes two arguments: the number to be rounded and, optionally, the number of decimal places; if the second argument is omitted, the number is rounded to the nearest whole number and returned as an integer.
Here's an example to illustrate the usage of the round() function:
num1 = 3.14159
rounded_num1 = round(num1, 2)
print(rounded_num1)
In this example, the variable num1 stores the value 3.14159. By calling round(num1, 2), we round num1 to two decimal places. The result, assigned to rounded_num1, will be 3.14. The output will be:
3.14
Similarly, if we want to round a number to the nearest whole number, we can simply omit the second argument:
num2 = 2.7
rounded_num2 = round(num2)
print(rounded_num2)
In this case, the variable num2 stores the value 2.7. By calling round(num2), we round num2 to the nearest whole number. The result, assigned to rounded_num2, will be 3. The output will be:
3
The round() function is useful for controlling the precision of floating-point numbers or for obtaining whole numbers when needed in your Python programs.
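One caveat worth knowing: Python 3's round() uses "round half to even" (banker's rounding), so a value exactly halfway between two integers goes to the nearest even integer rather than always rounding up:

```python
# Halfway cases round to the nearest even integer in Python 3.
# (0.5, 1.5 and 2.5 are exactly representable in binary, so these
# really are halfway cases, not victims of floating-point error.)
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2, not 3
```

This can be surprising if you expect the "round half up" rule taught in school.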
The binary64 (or double-precision) floating-point format (Wikipedia), that most programming languages and math processing units use, consists of a particular subset of the rational numbers. For example, $1$ and $1/8$ are floating-point numbers, but $1/10$ or $\sqrt2$ aren't.
To be more specific, the floating-point numbers in the interval $[1,2)$ are precisely the $2^{52}$ evenly spaced numbers $$ 1 ,\quad 1+\varepsilon ,\quad 1+2 \varepsilon ,\quad 1+3 \varepsilon ,\quad \dots ,\quad 2 - 3 \varepsilon ,\quad 2 - 2 \varepsilon ,\quad 2 - \varepsilon , $$ where $$ \begin{aligned} \varepsilon &= 2^{-52} = \frac{1}{2^{52}} = \frac{1}{4503599627370496} \\ &= 0.00000\,00000\,00000\, 2220446049250313080847263336181640625 \end{aligned} $$ exactly, or $\varepsilon \approx 2.22 \times 10^{-16}$ approximately. So the next floating-point number after $1$ is $$ 1 + \varepsilon = 1.00000 \, 00000 \, 00000 \, 2220446049250313080847263336181640625 . $$ If you ask the computer to calculate this number, by typing
`1 + 2.0 ** -52` into a Python shell, it will get this result (exactly) internally, but when printing it will lie a little and only show the approximation $1.0000000000000002$. To get the computer to print the exact value, you can for example write `format(1 + 2.0 ** -52, ".310g")`.

The floating-point numbers in the interval $[\tfrac12,1)$ lie twice as densely spaced, with the distance $\frac{\varepsilon}{2}$, so they are precisely the $2^{52}$ evenly spaced numbers $$ \tfrac12 ,\quad \tfrac12 + \tfrac{\varepsilon}{2} ,\quad \tfrac12 + 2 \cdot \tfrac{\varepsilon}{2} ,\quad \tfrac12 + 3 \cdot \tfrac{\varepsilon}{2} ,\quad \dots ,\quad 1 - 3 \cdot \tfrac{\varepsilon}{2} ,\quad 1 - 2 \cdot \tfrac{\varepsilon}{2} ,\quad 1 - \tfrac{\varepsilon}{2} . $$ The number $0.8$ is not one of these numbers; the floating-point number closest to $0.8$ happens to be $$ \begin{aligned} & \tfrac12 + 2702159776422298 \cdot \tfrac{\varepsilon}{2} \\ & = 0.8000000000000000444089209850062616169452667236328125 , \end{aligned} $$ so this is the exact number that the computer will be working with when you type
`0.8` (or `40/100*2`). The difference between this number and $0.8$ is $$ \delta = 0.0000000000000000444089209850062616169452667236328125 \approx 4.44 \times 10^{-17} , $$ which turns out to be exactly four tenths of the distance between adjacent floating-point numbers in that interval, $\frac{\varepsilon}{2} \approx 1.11 \times 10^{-16}$.

The floating-point numbers in the interval $[\tfrac14,\tfrac12)$ lie twice as densely again, with a spacing of $\varepsilon/4$, and in the interval $[\tfrac18,\tfrac14)$ the spacing is $\varepsilon/8$, and so on.
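Both facts can be checked from Python itself, using only the standard library: sys.float_info.epsilon is exactly $2^{-52}$, and constructing a Decimal directly from a float exposes every digit of the stored binary value:

```python
import sys
from decimal import Decimal

# The gap between 1.0 and the next representable float is 2**-52.
eps = 2.0 ** -52
assert sys.float_info.epsilon == eps
print(1.0 + eps)             # 1.0000000000000002, the next float after 1.0
print(1.0 + eps / 4 == 1.0)  # True: too small a step to reach the next float

# Decimal(float) converts the stored binary value exactly, digit for digit.
print(Decimal(0.8))
# 0.8000000000000000444089209850062616169452667236328125
print(Decimal(40 / 100 * 2) == Decimal(0.8))  # True: the same stored float
```

Note the asymmetry: `Decimal("0.8")` would be exactly $0.8$, while `Decimal(0.8)` shows what the float literal actually stores.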
Now, when you type
`1 - 0.8`, the computer takes the floating-point numbers $1$ and $0.8+\delta$ and subtracts them, getting (exactly) the floating-point number $$ \begin{aligned} x &= 1 - (0.8+\delta) = 0.2-\delta \\& = 0.1999999999999999555910790149937383830547332763671875 , \end{aligned} $$ but when printing the result it lies a little and says $0.19999999999999996$. Why doesn't it say $0.2$? The answer is that $x$ is the floating-point number closest to $0.19999999999999996$, so if you were to continue the computation by typing `0.19999999999999996` it would use exactly the number $x$, just as if you had stored that value in a variable, whereas if you type `0.2` the computer would use the floating-point number closest to $0.2$, which is not $x$ but the floating-point number two notches above $x$, $$ x + 2 \cdot \tfrac{\varepsilon}{8} = 0.200000000000000011102230246251565404236316680908203125 . $$ (Remember that the spacing between floating-point numbers in the interval $[0.125, 0.25)$ is $\tfrac{\varepsilon}{8}$.) So the computer gives a simplified result which is good enough that you can recreate the actual (floating-point) result from it.

On the other hand, the floating-point number closest to $0.4$ is $$ 0.4 + \frac{\delta}{2} = 0.40000000000000002220446049250313080847263336181640625 , $$ where the error is only half of the error for $0.8$, since $0.4$ lies in the interval $[0.25,0.5)$, where the floating-point numbers are twice as densely spaced as in the interval $[0.5,1)$ where $0.8$ lies. So when you type
`1 - 0.4`, the computer will calculate the floating-point number $$ \begin{aligned} y &= 1 - (0.4 + \tfrac{\delta}{2}) = 0.6 - \tfrac{\delta}{2} \\& = 0.59999999999999997779553950749686919152736663818359375 , \end{aligned} $$ but as usual it lies a little and gives the simplified result $0.6$, and that's because in this case $y$ is actually the floating-point number closest to $0.6$.

So to sum things up, when doing
`1 - 0.8` you (a) have a cruder approximation to the second term than when doing `1 - 0.4`, and (b) the result ends up in an interval where the floating-point numbers are more densely spaced, so that they can “pick up” that error.
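The whole asymmetry can be reproduced in a few lines. Because Python prints the shortest string that converts back to the same float, comparing against the printed value always succeeds, while comparing against the "nice" decimal may not:

```python
x = 1 - 0.8
y = 1 - 0.4
print(x)  # 0.19999999999999996 -- shortest string that round-trips to x
print(y)  # 0.6                 -- here the result IS the float nearest 0.6

print(x == 0.19999999999999996)  # True: the printed value round-trips exactly
print(x == 0.2)                  # False: 0.2 parses to a float two notches above x
print(y == 0.6)                  # True
```

This is also why comparing floats for equality against decimal literals is fragile; testing `abs(x - 0.2) < tolerance` (or `math.isclose`) is the usual workaround.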