I am trying to learn nonstandard analysis from Keisler's book. In the integration chapter, the Transfer Principle feels like a kind of magic that simply asks us to accept results that are hard to believe.
Let's look at Example 3 on page 184. It's concerned with evaluating $\int\limits_a^b c \,dx$ where $c>0$. It starts by constructing the Riemann sum using a real $\Delta x$: $\sum\limits_a^b c \,\Delta x = c (b-a)$, and that seems totally fine. Then the Transfer Principle is invoked, and it states that for an infinitesimal $dx$ this will hold:
$$\sum\limits_a^b c \,dx = c (b-a).$$
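The real-$\Delta x$ version I can verify numerically myself (the values of $c$, $a$, $b$ below are arbitrary choices, just for illustration):

```python
# Finite-Δx sanity check of the Riemann sum Σ c·Δx over [a, b].
# c, a, b are arbitrary illustrative values, chosen so Δx divides b - a evenly.
c, a, b = 2.0, 1.0, 4.0
dx = 0.001
n = round((b - a) / dx)                  # number of subintervals
riemann_sum = sum(c * dx for _ in range(n))
print(riemann_sum, c * (b - a))          # agree up to floating-point error
```

So for any real $\Delta x$ partitioning $[a,b]$, the sum really is $c(b-a)$ on the nose. My trouble is only with the jump to infinitesimal $dx$.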
But wait. I could believe this if $c$ was infinitesimal, but this is supposed to hold for any $c>0$, including (hyper)real ones like $c=1$, isn't it?
To me it seems that this contradicts the rules laid out on page 31 in the first chapter. It states that $b \cdot \epsilon$ is infinitesimal (Greek letters denote infinitesimals, Latin letters finite non-infinitesimals), so $c \, dx$ is infinitesimal. It also states that $\epsilon + \delta$ is infinitesimal, so a sum of these infinitesimals should also be infinitesimal. But it apparently isn't. How so? My impression from the first chapter was that no matter how many infinitesimals you add to $x_0$, you are still infinitely close to $x_0$ and you will never get to $x_1$, let alone cover the whole $[a,b]$. Did I get that wrong? Or does this sum somehow work differently?
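Here is a finite analogue I tried (again with arbitrary illustrative values): each individual term $c \,\Delta x$ shrinks as $\Delta x \to 0$, but the number of terms $(b-a)/\Delta x$ grows at exactly the same rate, so the total never shrinks. I can see that much numerically; what I can't see is how this picture is supposed to survive the passage to a sum of infinitely many infinitesimal terms.

```python
# Each term c·Δx shrinks as Δx → 0, but the number of terms
# (b - a)/Δx grows inversely, so the total stays at c·(b - a).
# c, a, b are arbitrary illustrative values.
c, a, b = 1.0, 0.0, 2.0
for dx in (0.1, 0.01, 0.001):
    n = round((b - a) / dx)      # number of terms in the sum
    total = n * (c * dx)         # Σ over n equal terms c·Δx
    print(dx, n, total)          # term shrinks, count grows, total ≈ 2.0
```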