Is the whole number $0$ equal or unequal to the decimal number $0.0$, which can also be represented as the mixed number $0 \frac{0}{10}$?
Please explain the reasons for the equality or inequality.
Here is a non-rigorous argument, of the kind you might use when first learning about fractions, that $0$ and $0.0$ are equal in the mathematical sense.
$0.0$ just means $0$ plus the quantity $\frac{0}{10}$, in the same way that $1.1$ means $1$ plus the quantity $\frac{1}{10}$ (commonly written as the mixed number $1 \frac{1}{10}$).
So what is the quantity $\frac{0}{10}$? It's just $0$. So $0 + \frac{0}{10}$ is just $0 + 0$ which is $0$.
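In symbols:

$$0.0 = 0 + \frac{0}{10} = 0 + 0 = 0.$$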
Dian Fishekqiu gave a better-worded version of this argument in the comments under the question.
In ordinary mathematics, all representations of 0 are equivalent: $0=0.0=+0=-0$ and so on.
In computer programming, however, 0 may behave differently from 0.0: the former is an integer, while the latter is a decimal (typically floating-point, sometimes arbitrary precision). Because a number must be stored in finite space, a value represented as 0.0 may not actually be 0, but merely too small to represent at the precision available. Loss of significance can arise from this.
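A minimal sketch in Python (the behaviour shown is that of Python's 64-bit IEEE 754 `float`; other languages behave analogously):

```python
# 0 and 0.0 compare equal, but they have different types.
print(0 == 0.0)             # True
print(type(0), type(0.0))   # <class 'int'> <class 'float'>

# Underflow: a nonzero quantity can round to 0.0 because it is
# too small for the 64-bit float format to represent.
tiny = 1e-200
print(tiny * tiny)          # 0.0, even though the true value is 1e-400

# Loss of significance: adding a tiny value to 1.0 can change
# nothing, so the difference comes back as 0.0.
print((1.0 + 1e-20) - 1.0)  # 0.0, even though 1e-20 != 0
```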
Floating-point formats can also have signed zero, where $-0.0$ is different from $+0.0$. The uses of these are niche, but the Wikipedia article does list a few.
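As a sketch of this in Python, which exposes IEEE 754 signed zeros, a few standard-library functions can observe the sign:

```python
import math

# +0.0 and -0.0 compare equal...
print(0.0 == -0.0)               # True

# ...but they carry different signs, which copysign can observe.
print(math.copysign(1.0, 0.0))   # 1.0
print(math.copysign(1.0, -0.0))  # -1.0

# atan2 distinguishes the two zeros: the sign of the second
# argument selects which side of the branch cut you are on.
print(math.atan2(0.0, 0.0))      # 0.0
print(math.atan2(0.0, -0.0))     # 3.141592653589793 (pi)
```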