I know that the cardinalities of the sets of real numbers $(0, 1)$ and $(1, \infty)$ are equal.
So what is the fallacy in this argument?
For every real in $(0, 1)$, we can add any positive integer $n$ to it and get a number in $(1, \infty)$, with the number from the first set as the fractional part of our new number. However, this can be done for every real in $(0, 1)$ with every positive integer, which seems to suggest that the second set has a larger cardinality than the first.
Your argument shows that the cardinality of $(1,\infty)$ equals the cardinality of $(0,1)$ times the cardinality of the positive integers. In other words,
$$\mathfrak c=\mathfrak c\cdot\aleph_0$$
That equation is well known and can be established in other ways as well; the left-hand side $\mathfrak c\cdot\aleph_0$ comes straight from the definition of the product of cardinal numbers applied to your construction.
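To make the correspondence concrete, here is a minimal Python sketch (not part of the original answer) of the map $(n, x)\mapsto n+x$ from pairs in $\mathbb Z^+\times(0,1)$ into $(1,\infty)$, together with its inverse via integer and fractional parts. The function names `f` and `f_inv` are illustrative choices; the round trips use dyadic fractions so floating-point arithmetic stays exact.

```python
import math

def f(n, x):
    """Map a pair (positive integer n, real x in (0, 1)) into (1, infinity)."""
    assert n >= 1 and 0 < x < 1
    return n + x

def f_inv(y):
    """Recover the pair (n, x): n is the integer part of y, x the fractional part."""
    n = math.floor(y)
    return n, y - n

# Round-tripping shows the map is injective: distinct pairs land on distinct reals.
for pair in [(1, 0.25), (7, 0.5), (42, 0.75)]:
    assert f_inv(f(*pair)) == pair
```

Note that the image misses the integers $2, 3, 4, \ldots$ (whose fractional part is $0$), so this exhibits an injection rather than a bijection onto all of $(1,\infty)$; that is enough for the cardinality comparison in the question.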
The source of the "paradox" is that an infinite set can have the same cardinality as a proper subset. This cannot hold for finite sets but often does for infinite sets. The usual way to introduce that "paradox" is to show that the sets $\{1,2,3,\ldots\}$ and $\{0,1,2,3,\ldots\}$ have the same number of elements, though the second set has "one more" element than the first. For a dramatization of this, see Hilbert's Hotel.
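The pairing between $\{1,2,3,\ldots\}$ and $\{0,1,2,3,\ldots\}$ mentioned above is just the shift $n\mapsto n+1$, which can be sketched as follows (a small illustrative snippet, checked on a finite window of each set; the name `shift` is my own choice):

```python
def shift(n):
    # Bijection from {0, 1, 2, ...} onto {1, 2, 3, ...}: each n is sent to n + 1,
    # so the "extra" element 0 of the larger-looking set gets absorbed.
    return n + 1

second_set = set(range(0, 10))   # finite window of {0, 1, 2, ...}
first_set = set(range(1, 11))    # finite window of {1, 2, 3, ...}
assert {shift(n) for n in second_set} == first_set
```

In Hilbert's Hotel terms: the guest in room $n$ moves to room $n+1$, and every room of the "smaller" set is filled exactly once.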